The Index Status page (Google Index > Index Status) provides statistics about the URLs Google was able to index for the selected site over the past year. For more information, see how Google crawls and indexes the web.
What's in the Index Status report
- Only data for your specific site
We do not show aggregate data for all versions of your site. While Google crawls and indexes content from your site regardless of whether you have verified the site in Search Console, the number of indexed URLs reported in Index Status is specific to the URLs associated with your site version.
For example, suppose you have a site with 10 URLs that people can view without signing in, and 100 URLs that people can only see once they sign into your site. If you have added only one version of your site to Search Console (e.g. http://www.example.com), you would see Index Status totals only for the non-secure portion of your site, which would be a much lower number than for all URLs on your site. Therefore, to see the index count for your secure site, you need to add it to Search Console (e.g. https://www.example.com) and then select it from the Site Selector.
Similarly, you can verify a subdirectory of your site with Search Console (e.g. www.example.com/blog/), and only data for that subdirectory will be shown in its Index Status report. However, the top-level domain will continue to reflect the total count of URLs indexed for that domain.
For this reason, we strongly encourage you to verify all relevant versions of your site, and to define a preferred domain if users can access your site equally through www or non-www URLs.
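For instance, if users can reach your site both over HTTP and HTTPS, and both with and without the www prefix, there are four versions to add and verify as separate properties (example.com here is a placeholder for your own domain):

```
http://example.com/
https://example.com/
http://www.example.com/
https://www.example.com/
```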
- Total URLs in the Google index
Shows the total number of URLs available to appear in search results, along with other URLs Google might discover by other means. This number changes over time as you add and remove pages. The number of indexed URLs is almost always significantly smaller than the number of crawled URLs, because Total indexed excludes URLs that are identified as duplicate or non-canonical, that are considered less useful, or that contain a meta noindex tag.
- URLs blocked by robots
The total number of URLs disallowed from crawling by your robots.txt file. If your site has many pages, you might want to view the Blocked by robots data with the Total indexed option unchecked; this allows for a more readable range of values on the graph.
- URLs removed
The number of URLs you have removed with the URL removal tool. As with blocked URLs, this value should be quite low compared to the other counts in this report, so it is easier to view this selection by itself.
How to use the Index Status report
- Look for a steady rise in the graph. A steady increase in the number of crawled and indexed pages indicates that Google can regularly access your content, and that your site is being indexed.
- Check into sudden drops. If you see a sudden drop in the number of indexed pages, it might mean that your server is down or overloaded, or that Google is having trouble accessing your content. (Note: As of 3/9/2014, Index Status reflects data only for the specific URL structure of the selected site. See above for more details.)
- Make note of unusually high index volume for your site. A high number of URLs could mean that your site has problems with canonicalization, duplicate content, or automatically generated pages, or that it has been hacked. In many cases, Google will send you a message when we detect problems with your site, so make sure to set your notification preferences.
- Review sudden changes. Spikes or dips that appear in several charts can indicate problems with site configuration, redirects, or security.
Your site's index status and its Google Search results
Sometimes the data we show in Index Status is not fully reflected in Google Search results. In some cases, Google applies filters while building search results, and these filters affect which results are shown. Filters include removal of pages for legal reasons or at the webmaster's request, removal of results from websites that we think are currently unavailable ("down"), and removal of results due to manual spam action.
Because these filters are usually applied for temporary, urgent issues, or are requested by mistake, Google may in some cases retain the pages in our index for a period of time to help websites recover quickly after the issue is fixed (for example, after the site becomes available again).