With the introduction of the Search Console URL Inspection API, Google is (finally) responding to the many requests for an alternative way to extract data from the properties (websites/projects) available in Search Console. Webmasters, SEOs and developers can now retrieve the data externally, for example to evaluate it separately or to feed it into external programs for further analysis.
To understand the benefits of the new API, it helps to look at the situation that applied until now: webmasters were forced to view the data directly in the Google Search Console interface. On the one hand, this cost time; on the other hand, it was difficult, if possible at all, to process the existing data records elsewhere or to use them for consistent monitoring.
With the introduction of the API, the data can now be pulled into other tools, including external programs or programs written in-house. The new interface delivers, for example, the indexing status of individual URLs as well as information on rich results, AMP and canonicals.
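A minimal sketch of such a query using the Python client library (google-api-python-client), assuming a service account JSON key that has been granted access to the property; the key file, property and page URL are placeholders:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumed placeholders: replace with your own key file, property and URL.
KEY_FILE = "service-account.json"
SITE_URL = "https://www.example.com/"           # the Search Console property
PAGE_URL = "https://www.example.com/some-page/"

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
credentials = service_account.Credentials.from_service_account_file(KEY_FILE, scopes=SCOPES)
service = build("searchconsole", "v1", credentials=credentials)

# Inspect a single URL within the property.
body = {"inspectionUrl": PAGE_URL, "siteUrl": SITE_URL}
response = service.urlInspection().index().inspect(body=body).execute()

index_status = response["inspectionResult"]["indexStatusResult"]
print("Verdict:         ", index_status.get("verdict"))         # e.g. PASS / NEUTRAL / FAIL
print("Coverage:        ", index_status.get("coverageState"))
print("Google canonical:", index_status.get("googleCanonical"))
```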
However, webmasters are still not completely free in their queries. At the moment, queries are limited to 2,000 per day, and no more than 600 queries per minute are allowed. Large websites and projects in particular must therefore prioritize their queries and consider in advance what they actually want to query. There is currently a workaround: if several properties have been created for a site in Search Console, the limit applies per property, not per website. A Search Console account can contain up to 1,000 properties.
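One simple way to stay within these limits in a batch job is to throttle requests on the client side; a rough sketch, reusing the assumed `service` object from the example above and a hypothetical `urls_to_check` list:

```python
import time

def inspect_batch(service, site_url, urls_to_check, max_per_minute=600):
    """Inspect a list of URLs while staying under the per-minute quota."""
    delay = 60.0 / max_per_minute   # ~0.1 s between requests at the default limit
    results = {}
    for url in urls_to_check[:2000]:  # respect the assumed 2,000-per-day limit per property
        body = {"inspectionUrl": url, "siteUrl": site_url}
        response = service.urlInspection().index().inspect(body=body).execute()
        results[url] = response["inspectionResult"]["indexStatusResult"]
        time.sleep(delay)
    return results
```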
Nevertheless, the new interface opens up many application possibilities, some of which are outlined below.
In addition to the aforementioned usage options and data, the new Search Console URL Inspection API can do even more. It can, for example, report when a page was last accessed and updated/indexed by a Google crawler. There is also a "robotsTxtState" field, which can be used to check in real time whether individual pages are excluded from crawling by robots.txt. Although this can currently be done manually, the new API allows the check to be automated to save time.
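Continuing the sketch above (same assumed `service`, `SITE_URL` and `PAGE_URL`), these values can be read directly from the index status result:

```python
body = {"inspectionUrl": PAGE_URL, "siteUrl": SITE_URL}
response = service.urlInspection().index().inspect(body=body).execute()
index_status = response["inspectionResult"]["indexStatusResult"]

# When Googlebot last fetched the page, and whether robots.txt allows crawling.
print("Last crawl:      ", index_status.get("lastCrawlTime"))   # RFC 3339 timestamp
print("robots.txt state:", index_status.get("robotsTxtState"))  # e.g. ALLOWED / DISALLOWED
print("Crawled as:      ", index_status.get("crawledAs"))       # DESKTOP / MOBILE
```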
The new interface is also helpful in error analysis, via what Google calls "pageFetchState" in technical jargon. Here, Google delivers different codes for the actual state of the page. For example, soft and hard 404 errors can be found more quickly, as can 403 errors caused by (unintentionally) restricted access rights. The "pageFetchState" field covers the entire spectrum of typical error sources, both in terms of accessibility and internal linking and with regard to various server errors.
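A sketch of how the "pageFetchState" values could be grouped for a report, assuming the `inspect_batch` helper sketched above; the grouping itself is only an illustrative choice and not exhaustive:

```python
from collections import defaultdict

# Rough, illustrative grouping of pageFetchState values for reporting.
ERROR_BUCKETS = {
    "SUCCESSFUL": "ok",
    "SOFT_404": "not found",
    "NOT_FOUND": "not found",
    "ACCESS_DENIED": "access problem",      # e.g. 401
    "ACCESS_FORBIDDEN": "access problem",   # e.g. 403
    "BLOCKED_ROBOTS_TXT": "blocked",
    "SERVER_ERROR": "server error",         # 5xx
    "REDIRECT_ERROR": "redirect problem",
}

def summarize_fetch_states(results):
    """Count URLs per error bucket from inspect_batch() results."""
    summary = defaultdict(list)
    for url, index_status in results.items():
        state = index_status.get("pageFetchState", "UNKNOWN")
        summary[ERROR_BUCKETS.get(state, "other")].append(url)
    return summary
```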
Another interesting feature for many webmasters is the "referringUrls" field. It can be used to check from which sources Google discovered links to individual pages. This can be helpful both for building a well thought-out internal link structure and for checking individual backlinks.
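These referring URLs can be read from the same index status result; a brief sketch using the assumed `index_status` dictionary from the examples above:

```python
# Sample of URLs from which Google discovered the inspected page.
for ref in index_status.get("referringUrls", []):
    print("Referred from:", ref)
```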
The interface should be particularly helpful for a planned site migration. It is now possible to check more quickly, and in part automatically, whether the migration of a website has actually been successful in practice. Likewise, as explained in the previous paragraph, typical redirect errors during a move can be identified more quickly.
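Such a migration check could, for example, batch-inspect the new URLs and flag any that are not indexed or whose Google-selected canonical differs; a sketch reusing the assumed helpers above, with `migrated_urls` as a hypothetical list of new URLs:

```python
def check_migration(service, site_url, migrated_urls):
    """Flag migrated URLs that are not indexed or whose canonical differs."""
    problems = []
    for url, index_status in inspect_batch(service, site_url, migrated_urls).items():
        verdict = index_status.get("verdict")            # e.g. PASS, NEUTRAL, FAIL
        canonical = index_status.get("googleCanonical")
        if verdict != "PASS" or (canonical and canonical != url):
            problems.append((url, verdict, index_status.get("coverageState"), canonical))
    return problems
```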
Author: f.baer
Image: © Google Search Console