See which pages Google has found on your site, which pages have been indexed, and any indexing problems encountered.
If you are new to indexing or SEO, or have a small site, here's how to get started:
- Decide whether you need to use this report. If your site has fewer than 500 pages, you probably don't need to use this report. Instead, use one of the following Google searches to see if your site is indexed. If there are no search results for your site, then use the Index Coverage report to verify whether your site truly has zero indexed pages. If the Index Coverage report says that you really have zero valid pages (or zero pages of any status), see the troubleshooting section.
- See a sample of pages from your site that Google knows about:
site:<<your_site>>
site:example.com/petstore
- Search for indexed pages containing specific terms on your site:
site:<<your_site>> term1 term2
site:example.com/petstore iguanas zebras
- Search for the exact URL of a page on your site to see whether Google has indexed it:
site:<<exact_page_url>>
- Read this short user guide for a quick, easy explanation of this report.
- If you want to delve deeper:
- Read how Google Search works. If you don't understand indexing, this report will confuse or frustrate you; trust us.
- Remember: the Index Coverage report is used to understand the general index status of your site. This report is not useful for investigating the index status of specific pages. To find the index status of a specific page, use the URL Inspection tool.
- What to look for in this report:
- Are all of your important URLs green (valid)? Most sites have at least a few unindexed pages, but all of your important pages should be indexed. Remember that duplicate URLs shouldn't be indexed. Check the index status of your homepage and key pages using the URL Inspection tool. Note that the list of example URLs in the report is limited to 1,000 items, and isn't guaranteed to show all URLs in a given status, even when fewer than 1,000 URLs have that status.
- Are the gray (not indexed) URL reasons what you expect? It's fine for a URL not to be indexed for the right reasons—for example, a robots.txt rule on your site, a noindex tag on the page, a duplicate URL, or a 404 for a page that you've removed and have no replacement for.
- If the total URL count in this report is much smaller than your site's page count, then Google isn't finding pages on your site. Some possible reasons for this:
- The page or site is new. It can take a week or so for Google to start crawling and indexing a new page or site. If your page or site is new, wait a few days for Google to find and crawl it. In an urgent situation, or if waiting doesn't seem to be working, you can explicitly ask Google to crawl individual pages.
- The pages aren't findable by Google. Google needs a way to find a page in order to crawl it. This means that it must be linked from a known page, or from a sitemap. For a new website, the best first step is to request indexing of your homepage, which should start Google crawling your website. For missing parts of a site, make sure they are linked properly. If you are using a site hosting service such as Wix or SquareSpace, they will probably tell Google about any new pages, once you publish them: check your site host's documentation to learn how to publish your pages and make them findable by search engines.
- Read the documentation for your specific indexing status reason to understand and, if necessary, fix the issue. Skipping the documentation will cost you more time and effort in the long run than reading the docs.
- What not to look for:
- Don't expect every URL on your site to be indexed. Some URLs might be duplicates or might not contain meaningful information. Just be sure that the key pages on your site are indexed.
- Non-indexed URLs can be fine. Read and understand the specific reason for each excluded URL to confirm that the page is properly excluded.
- Don't expect totals here to match exactly your estimate of the number of URLs on your site. The total coverage numbers above the chart are complete and accurate from Google's perspective, but small discrepancies can occur for various reasons.
- Just because a page is indexed doesn't guarantee that it will show up in your search results. Search results are customized for each user's search history, location, and many other variables, so even if a page is indexed, it won't show up in every search, or in the same ranking when it does. Therefore, if Search Console says a URL is indexed but it doesn't turn up in your search results, assume that it is indexed and eligible to appear, and simply wasn't shown for that particular search.
What does this report show?
The Index coverage report shows whether specific URLs have been crawled and indexed by Google. (If you don't have a good understanding of what these terms mean, please read how Google Search works.) Google finds URLs in many ways, and tries to crawl most of them. If a URL is missing or unavailable, Google will probably continue to try crawling that URL for a while.
A URL in this report can have one of the following statuses:
- Indexed: Google found and indexed the page. Nothing else to do.
- Not indexed: The URL is not indexed, either because of an indexing error, or because of a legitimate reason (the page is blocked or a duplicate). Read the documentation to determine whether it is something that you should fix.
What is indexing?
How do I get my page or site indexed?
If you are using a site hosting service such as Wix or SquareSpace, your hosting service will probably tell Google whenever you publish or update a page. Check your site host's documentation to learn how to publish your pages and make them findable by search engines.
If you are creating a site or page without a hosting service, you can use a sitemap or various other methods to tell Google about new sites or pages.
We strongly recommend ensuring that your homepage is indexed. Starting from your homepage, Google should be able to index all the other pages on your site, if your site has comprehensive and properly implemented site navigation for visitors.
Is it OK if a page isn't indexed?
SEOs, developers, and experienced website owners
- Read how Google Search works. If you don't understand indexing, this report will just be confusing or frustrating: trust us.
- Follow the guidelines in Navigating the report, including What to look for and What not to look for.
- Read the troubleshooting section to understand and fix common problems.
- Remember that Not indexed is not necessarily a bad status for a URL. Examine the reason given for not indexing a given URL.
- Read the documentation for your specific status and reason to understand the issue, and see tips for fixing it.
The Index Coverage report shows the Google indexing status of all URLs that Google knows about in your property.
The top-level summary page shows a set of reasons why URLs weren't indexed, and a chart showing your indexed and non-indexed URLs over time.
Why pages aren’t indexed table
These are issues that prevented URLs from being indexed on your site. Click a row to see a details page that shows URLs affected by this issue and your site's history with this issue.
Improve page experience table
These are issues that didn't prevent page indexing, but we recommend that you fix them to improve Google's ability to understand your pages. Click a row to see a details page that focuses on all URLs with the same status/reason.
View data about indexed pages link
This link shows historical information about your indexed page count, as well as an example list of up to 1,000 URLs that are indexed.
The top level page in the report shows a graph and count of your indexed and non-indexed (but found) pages, as well as tables showing reasons that URLs couldn't be indexed, or other indexing improvements.
Ideally you should see a gradually increasing count of valid indexed pages as your site grows. If you see drops or spikes, see the troubleshooting section. The status table in the summary page is grouped and sorted by "status + reason".
Your goal is to get the canonical version of every important page indexed. Any duplicate or alternate pages shouldn't be indexed. Duplicate or alternate pages have substantially the same content as the canonical page. Having a page marked duplicate or alternate is usually a good thing; it means that we've found the canonical page and indexed it. You can find the canonical for any URL by running the URL Inspection tool.
- 100% coverage: You should not expect all URLs on your site to be indexed, only the canonical pages, as described above.
- Immediate indexing: When you add new content, it can take a few days for Google to index it. You can reduce the indexing lag by requesting indexing.
The Primary crawler value on the summary page shows the default user agent type that Google uses to crawl your site. Available values are: Smartphone or Desktop; these crawlers simulate a visitor using a mobile device or a desktop computer, respectively.
Google crawls all pages on your site using this primary crawler type. Google may additionally crawl a subset of your pages using a secondary crawler (sometimes called alternate crawler), which is the other user agent type. For example, if the primary crawler for your site is Smartphone, the secondary crawler is Desktop; if the primary crawler is Desktop, your secondary crawler is Smartphone. The purpose of a secondary crawl is to try to get more information about how your site behaves when visited by users on another device type.
A URL can have one of the following statuses:
- Not indexed: The URL is not indexed, which can either be due to an error that you should fix, or it might be the right thing for that URL. You can see a list of reasons why your URLs weren't indexed in the Why pages aren’t indexed table.
- Valid: See valid and indexed pages by clicking View data about indexed pages below the chart on the summary page for the report.
See Reason descriptions below for a description of each status type, and how to handle it.
The Source value in the table shows whether the source of the issue is Google or the website. In general, you can only fix issues that have the source listed as "Website".
The Validation value shows the validation status for this issue. You should prioritize fixing issues that are in validation state "failed" or "not started".
After you fix all instances of a specific issue on your site, you can ask Google to confirm your fixes. If all known instances are fixed, the issue count goes to zero in the issues table and the issue drops to the bottom of the table.
Telling Google that you have fixed all issues in a specific issue status or category has the following benefits:
- You'll get an email when Google has confirmed your fix on all URLs, or conversely, if Google has found remaining instances of that issue.
- You can track Google's progress in confirming your fixes, and see a log of all pages queued for checking, and the fix status of each URL.
It might not always make sense to fix and validate a specific issue on your website: for example, URLs blocked by robots.txt are probably intentionally blocked. Use your judgment when deciding whether to address a given issue.
You can also fix issues without validating; Google updates your instance count whenever it crawls a page with known issues, whether or not you explicitly requested fix validation.
To tell Search Console that you fixed an issue:
- Fix all instances of the issue on your site. If you missed a fix, validation will stop when Google finds a single remaining instance of that issue.
- Open the issue details page of the issue that you fixed. Click the issue in the issues list in your report.
- Click Validate fix. Do not click Validate fix again until validation has succeeded or failed. How Google checks your fixes is described below.
- You can monitor the validation progress. Validation typically takes up to about two weeks, but in some cases can take much longer, so please be patient. You will receive a notification when validation succeeds or fails.
- If validation fails, you can see which URL caused the validation to fail by clicking See details in the issue details page. Fix this page, confirm your fix on all URLs in Pending state, and restart validation.
When is an issue considered "fixed" for a URL or item?
An issue is marked as fixed for a URL or item when either of the following conditions are met:
- When the URL is crawled and the issue is no longer found on the page. For an AMP tag error, this can mean that you either fixed the tag or that the tag has been removed (if the tag is not required). During a validation attempt, it will be labeled Passed.
- If the page is not available to Google for any reason (page removed, marked noindex, requires authentication, and so on), the issue will be considered as fixed for that URL. During a validation attempt, it is categorized in the Other validation state.
An issue's lifetime extends from the first time any instance of that issue was detected on your site until 90 days after the last instance was marked as gone from your site. If ninety days pass without any recurrences, the issue is removed from the issues table.
An issue's First detected date is the first time the issue was detected during the issue's lifetime, and does not change. Therefore:
- If all instances of an issue are fixed, but a new instance of the issue occurs 15 days later, the issue is marked as open, and first detected date remains the original date.
- If the same issue occurs 91 days after the last instance was fixed, the previous issue was closed, and so this is recorded as a new issue, with the first detected date set to the new detection date.
Here is an overview of the validation process after you click Validate Fix for an issue. This process can take several days or even longer, and you will receive progress notifications by email.
- When you click Validate Fix, Search Console immediately checks a few pages.
- If the current instance exists in any of these pages, validation ends, and the validation state remains unchanged.
- If the sample pages do not have the current error, validation continues with state Started. If validation finds other unrelated issues, these issues are counted against that other issue type and validation continues.
- Search Console works through the list of known URLs affected by this issue. Only URLs with known instances of this issue are queued for recrawling, not the whole site. Search Console keeps a record of all URLs checked in the validation history, which can be reached from the issue details page.
- When a URL is checked:
- If the issue is not found, the instance validation state changes to Passing. If this is the first instance checked after validation has started, the issue validation state changes to Looking good.
- If the URL is no longer reachable, the instance validation state changes to Other (which is not an error state).
- If the instance is still present, issue state changes to Failed and validation ends. If this is a new page discovered by normal crawling, it is considered another instance of this existing issue.
- When all queued URLs have been checked for this issue and found to be free of it, the issue state changes to Passed. However, even when all instances have been fixed, the severity label of the issue doesn't change (Error or Warning), only the number of affected items (0).
Even if you never click Start validation, Google can detect fixed instances of an issue. If Google detects that all instances of an issue have been fixed during its regular crawl, it will change the issue count to 0 on the report.
⚠️ Wait for a validation cycle to complete before requesting another cycle, even if you have fixed some issues during the current cycle.
To restart a failed validation:
- Navigate to the validation log for the failed validation: open the issue details page of the issue that failed validation and click See details.
- Click Start new validation.
- Validation will restart for all URLs marked Pending or Failed, plus any new instances of this issue discovered through normal crawling since the last validation attempt. URLs marked Passed or Other are not rechecked.
- Validation typically takes up to about two weeks, but in some cases can take much longer, so please be patient.
To see the progress of a current validation request, or the history of the last request if a validation is not in progress:
- Open the issue details page for the issue. Click the issue row in the main report page to open the issue details page.
- The validation request status is shown both in the issue details page and also in the Validation row of the Details table.
- Click See details to open the validation details page for that request.
- The instance status for each URL included in the request is shown in the table.
- The instance status applies to the specific issue that you are examining. You can have one issue labeled Passed on a page, but other issues labeled Failed, Pending, or Other on the same page.
- In the AMP report and Index Coverage report, entries in the validation history page are grouped by URL.
- In the Mobile Usability and Rich Result reports, items are grouped by the combination of URL + structured data item (as determined by the item's Name value).
The following validation states apply to validation for a given issue:
- Not started: One or more instances of this issue have never been in a validation request for this issue.
- Click into the issue to learn the details of the error. Inspect the individual pages to see examples of the error on the live page using the AMP Test. (If the AMP Test does not show the error on the page, it is because you fixed the error on the live page after Google found the error and generated this issue report.)
- Click Learn more on the details page to see the details of the problem.
- Click an example URL row in the table to get details on that specific error.
- Fix your pages and then click Validate fix to start validation. Validation typically takes up to about two weeks, but in some cases can take much longer, so please be patient.
- Started: You have begun a validation attempt and no remaining instances of the issue have been found yet.
Next step: Google will send notifications as validation proceeds, telling you what to do, if necessary.
- Looking good: You started a validation attempt, and all issue instances that have been checked so far have been fixed.
Next step: Nothing to do, but Google will send notifications as validation proceeds, telling you what to do.
- Passed: All known instances of the issue are gone (or the affected URL is no longer available). You must have clicked Validate fix to get to this state (if instances disappeared without you requesting validation, state would change to N/A).
Next step: Nothing more to do.
- N/A: Google found that the issue was fixed on all URLs, even though you never started a validation attempt.
Next step: Nothing more to do.
- Failed: A certain threshold of pages still contain this issue, after you clicked Validate.
Next steps: Fix the issue and restart validation.
After validation has been requested, every instance of the issue is assigned one of the following validation states:
- Pending: Queued for validation. The last time Google looked, this issue instance existed.
- Passed: [Not available in all reports] Google checked for the issue instance and it no longer exists. Can reach this state only if you explicitly clicked Validate for this issue instance.
- Failed: Google checked for the issue instance and it's still there. Can reach this state only if you explicitly clicked Validate for this issue instance.
- Other: [Not available in all reports] Google couldn't reach the URL hosting the instance, or (for structured data) couldn't find the item on the page any more. Considered equivalent to Passed.
Note that the same URL can have different states for different issues; for example, if a single page has both issue X and issue Y, issue X can be in validation state Passed and issue Y on the same page can be in validation state Pending.
You can use the dropdown filter above the chart to filter index results by how Google discovered the URL. The following values are available:
- All known pages [Default] - Show all URLs discovered by Google through any means.
- All submitted pages - Show only pages submitted in a sitemap, either through the Sitemaps report or through a sitemap reference in your site's robots.txt file.
- Unsubmitted pages only - Show only pages that were not submitted in a sitemap.
- Specific sitemap URL - Show only URLs listed in a specific sitemap that was submitted using this report. This includes any URLs in nested sitemaps.
A URL is considered to be submitted by a sitemap even if it was also discovered through some other mechanism (for example, by organic crawling from another page).
Click on a row in the summary page to open a details page for that status + reason combination. You can see details about the chosen issue by clicking Learn more at the top of the page.
The graph on this page shows the count of affected pages over time.
The Examples table shows an example list of pages affected by this status + reason. The list does not necessarily show all URLs with that issue, and is limited to 1,000 rows. Each example row has the following functionality:
- Click the row to see more details about that URL.
- The open-page icon opens the URL in a new tab.
- The inspect icon opens URL Inspection for that URL.
- The copy icon copies the URL.
When you've fixed all instances of an error or warning, click Validate Fix to let Google know that you've fixed the issue.
See a URL marked with an issue that you've already fixed? You might have fixed the issue after the last Google crawl, so check the crawl date for that URL. Confirm your fix, then request re-indexing.
Sharing the report
You can share issue details in the coverage or enhancement reports by clicking the Share button on the page. This link grants access only to the current issue details page, plus any validation history pages for this issue, to anyone with the link. It does not grant access to other pages for your resource, or enable the shared user to perform any actions on your property or account. You can revoke the link at any time by disabling sharing for this page.
Exporting report data
Many reports provide an export button to export the report data. Both chart and table data are exported. Values shown as either ~ or - in the report (not available/not a number) will be zeros in the downloaded data.
The table is sorted by what we think are the most important issues, based on severity of the issue and number of pages affected. To investigate a specific reason in the indexing errors table:
- Click a row in the Why pages aren't indexed table. Decide whether there is a problem based on the status reason and your indexing goal.
- Read the specific information about the issue.
- Inspect an example URL affected by the issue:
- Click the inspect icon next to the URL in the examples table to open URL Inspection for that URL.
- See crawl and index details for that URL in the Coverage > Crawl and Coverage > Indexing sections of the URL Inspection report.
- To test the live version of the page, click Test live URL.
Common indexing issues
Here are some of the most common indexing issues that you might see in this report:
Drop in total indexed pages without corresponding errors
More Excluded than Valid pages
If you see more Excluded than Valid pages, look at the exclusion reasons. Common exclusion reasons include:
- You have a robots.txt rule that blocks Google from crawling large sections of your site. If you are blocking the wrong pages, unblock them.
- Your site has a large number of duplicate pages, probably because it uses parameters to filter or sort a common collection (for example: sort=price). These pages probably should be excluded if they are just showing the same content sorted, filtered, or reached in different ways.
Error spikes might be caused by a change in your template that introduces a new error, or you might have submitted a sitemap that includes URLs that are blocked for crawling by robots.txt, noindex, or a login requirement.
If you see an error spike:
- Look for a correspondence between your total error count or total indexed count and the sparkline next to each error row on the summary page; this can indicate which issue is affecting your total error or total indexed page count.
- Click into the details pages for any errors that seem to be contributing to your error spike. Read the description about the specific error type to learn how to handle it best.
- Click into an issue, and inspect an example page to see what the error is, if necessary.
- Fix all instances of the error, and request validation by clicking Validate Fix in the details page for that reason. Read more about validation.
- You'll get notifications as your validation proceeds, but you can check back after a few days to see whether your error count has gone down.
Testing server connectivity
Fixing server connectivity errors
- Reduce excessive page loading for dynamic page requests.
A site that delivers the same content for multiple URLs is considered to deliver content dynamically (for example, www.example.com/shoes.php?color=red&size=7 serves the same content as www.example.com/shoes.php?size=7&color=red). Dynamic pages can take too long to respond, resulting in timeout issues. Or the server might return an overloaded status to ask Googlebot to crawl the site more slowly. In general, we recommend keeping parameter lists short and using them sparingly. If you're confident about how parameters work for your site, you can tell Google how to handle these parameters.
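One way to reduce accidental duplicates caused by parameter ordering is to canonicalize every URL to a single, sorted parameter order (for example, in a redirect rule or when generating links). A minimal sketch in Python; the `normalize_url` helper is an illustration, not a Google API:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize_url(url: str) -> str:
    """Return the URL with its query parameters sorted into a canonical order."""
    parts = urlsplit(url)
    # parse_qsl keeps duplicate keys; sorting makes parameter order irrelevant
    params = sorted(parse_qsl(parts.query, keep_blank_values=True))
    return urlunsplit(
        (parts.scheme, parts.netloc, parts.path, urlencode(params), parts.fragment))

# The two shoe URLs from the example above collapse to one canonical form:
a = normalize_url("http://www.example.com/shoes.php?color=red&size=7")
b = normalize_url("http://www.example.com/shoes.php?size=7&color=red")
assert a == b
```

With a rule like this in place, both parameter orders resolve to the same URL, so Google sees one page instead of two duplicates.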
- Make sure your site's hosting server is not down, overloaded, or misconfigured.
If connection, timeout, or response problems persist, check with your web host and consider increasing your site's ability to handle traffic.
- Check that you are not inadvertently blocking Google.
You might be blocking Google due to a system level issue, such as a DNS configuration issue, a misconfigured firewall or DoS protection system, or a content management system configuration. Protection systems are an important part of good hosting and are often configured to automatically block unusually high levels of server requests. However, because Googlebot often makes more requests than a human user, it can trigger these protection systems, causing them to block Googlebot and prevent it from crawling your website. To fix such issues, identify which part of your website's infrastructure is blocking Googlebot and remove the block. The firewall may not be under your control, so you may need to discuss this with your hosting provider.
- Control search engine site crawling and indexing wisely.
Some webmasters intentionally prevent Googlebot from reaching their websites, perhaps using a firewall as described above. In these cases, usually the intent is not to entirely block Googlebot, but to control how the site is crawled and indexed. If this applies to you, check the following:
- To control Googlebot's crawling of your content, use a robots.txt file and configure URL parameters.
- If you're worried about rogue bots using the Googlebot user-agent, you can verify whether a crawler is actually Googlebot.
- If you would like to change how frequently Googlebot crawls your site, you can request a change in Googlebot's crawl rate. Hosting providers can verify ownership of their IP addresses to enable this.
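The verification mentioned above is a two-step DNS check: the reverse lookup of the requesting IP must resolve to a googlebot.com or google.com host, and the forward lookup of that host must return the same IP. A sketch in Python; the lookup functions are injectable here so the logic can be exercised without a network, and the defaults use the system resolver:

```python
import socket

def is_real_googlebot(ip: str,
                      reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                      forward=socket.gethostbyname) -> bool:
    """Verify a claimed Googlebot IP: reverse DNS must land in a Google
    domain, and the forward lookup of that host must return the same IP."""
    try:
        host = reverse(ip)
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return forward(host) == ip
    except OSError:
        return False

# Example with stubbed lookups (no network needed):
ok = is_real_googlebot("66.249.66.1",
                       reverse=lambda ip: "crawl-66-249-66-1.googlebot.com",
                       forward=lambda host: "66.249.66.1")
assert ok
```

A bot that merely sets the Googlebot user-agent string will fail this check, because it cannot control Google's DNS records.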
In general, we recommend fixing only 404 error pages, not 404 excluded pages. 404 error pages are pages that you explicitly asked Google to index, but were not found, which is obviously a bug. 404 excluded pages are pages that Google discovered through some other mechanism, such as a link from another page. If the page has been moved, you should return a 3XX redirect to the new page. Learn more about evaluating and fixing 404 errors.
If your page is not in the report at all, one of the following is probably true:
- Google doesn't know about the page. Some notes about page discoverability:
- If this is a new site or page, remember that it can take some time for Google to find and crawl new sites or pages.
- In order for Google to learn about a page, you must either submit a sitemap or page crawl request, or else Google must find a link to your page somewhere.
- After a page URL is known, it can take some time (up to a few weeks) before Google crawls some or all of your site.
- Indexing is never instant, even when you submit a crawl request directly.
- Google doesn't guarantee that all pages everywhere will make it into the Google index.
- Google can't reach your page (it requires a login, or is otherwise not available to all users on the internet)
- The page has a noindex tag, which prevents Google from indexing it
- The page was dropped from the index for some reason.
Use the URL Inspection tool to test the problem on your page. If the page is not in the Index Coverage report but it is listed as indexed in the URL Inspection report, it was probably indexed recently, and will appear in the Index Coverage report soon. If the page is listed as not indexed in the URL Inspection tool (which is what you'd expect), test the live page. The live page test results should indicate what the issue is: use the information from the test and the test documentation to learn how to fix the issue.
- Fix the issue that prevents the page from being crawled
- Remove the URL from your sitemap and resubmit the sitemap in the Sitemaps report (for fastest service)
- Using the Sitemaps report, delete any sitemaps that contain the URL (and ensure that no sitemaps listed in your robots.txt file include this URL).
Why is my page in the index? I don't want it indexed.
Google can index any URL that it finds unless you include a noindex directive on the page (or it has been temporarily blocked), and Google can find a page in many different ways, including someone linking to your page from another site.
- If you want your page to be blocked from Google Search results, you can either require some kind of login for the page, or you can use a noindex directive on the page. Using a robots.txt rule is not recommended for blocking a page, and will actually prevent noindex from being seen by Google.
- If you want your page to be removed from Google Search results after it has already been found, you'll need to follow these steps.
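Google honors a noindex directive delivered either as an X-Robots-Tag HTTP header or as a robots meta tag in the page. A rough sketch of checking both places; the `has_noindex` helper is hypothetical, and the regex is a simplification (a real crawler parses the HTML properly):

```python
import re

def has_noindex(headers: dict, html: str) -> bool:
    """Check both places Google looks for a noindex directive:
    the X-Robots-Tag HTTP header and the robots meta tag in the page."""
    # Header names are case-insensitive in HTTP
    for name, value in headers.items():
        if name.lower() == "x-robots-tag" and "noindex" in value.lower():
            return True
    # Rough check for <meta name="robots" content="...noindex...">;
    # assumes the name attribute precedes content, as is conventional
    pattern = re.compile(
        r'<meta[^>]+name=["\'](?:robots|googlebot)["\'][^>]+content=["\'][^"\']*noindex',
        re.IGNORECASE)
    return bool(pattern.search(html))

page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
assert has_noindex({}, page)
assert has_noindex({"X-Robots-Tag": "noindex"}, "<html></html>")
```

Remember that Google must be able to fetch the page to see either signal, which is why a robots.txt block prevents noindex from working.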
Why hasn't my site been reindexed lately?
Google reindexes pages based on a number of criteria, including how often it thinks the page changes. If your site doesn't change often, it might be on a slower refresh rate, which is fine, if your pages haven't changed. If you think your site is in need of a refresh, ask Google to recrawl it.
Can you please recrawl my page/site?
Why are so many of my pages excluded?
Look at the exclusion reasons detailed by the Index Coverage report. Most exclusions are due to one of the following reasons:
- You have a robots.txt rule that is blocking us from crawling large sections of your site. Use the URL Inspection tool to confirm the problem.
- Your site has a large number of duplicate pages, typically because it uses parameters to filter or sort a common collection (for example: sort=price). These pages will be labeled as "duplicate" or "alternate" in the Index Coverage report.
- The URL redirects to another URL. Redirect URLs are not indexed; the redirect target is.
Google can't access my sitemap
Be sure that your sitemap is not blocked by robots.txt, is valid, and that you're using the proper URL in your robots.txt entry or Sitemaps report submission. Test your sitemap URL using a publicly available sitemap testing tool.
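Before submitting, you can sanity-check a sitemap's XML yourself. A minimal sketch that validates the root element and extracts the listed URLs; the `check_sitemap` helper is an illustration, and it only parses (fetching the file and checking robots.txt are separate steps):

```python
import xml.etree.ElementTree as ET

# Namespace required by the sitemaps.org protocol
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def check_sitemap(xml_text: str):
    """Return (kind, urls): kind is 'urlset' or 'sitemapindex'.
    Raises ValueError if the root element isn't a sitemap root."""
    root = ET.fromstring(xml_text)
    kind = root.tag.split("}")[-1]
    if kind not in ("urlset", "sitemapindex"):
        raise ValueError(f"not a sitemap root element: {root.tag}")
    urls = [loc.text.strip() for loc in root.iter(NS + "loc")]
    return kind, urls

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/petstore</loc></url>
</urlset>"""
kind, urls = check_sitemap(sample)
assert kind == "urlset" and len(urls) == 2
```

A malformed file or a missing xmlns declaration is a common reason a sitemap submission fails, and this kind of local check catches both before you submit.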
Why does Google keep crawling a page that was removed?
Google continues to crawl known URLs for a while even after they return 4XX errors, in case the error is temporary. The only case in which a URL won't be recrawled is when it returns a noindex directive.
To avoid showing you an eternally growing list of 404 errors, the Index Coverage report shows only URLs that have shown 404 errors in the past month.
I can see my page; why can't Google?
Use the URL Inspection tool to see whether Google can see the live page. If it can't, it should explain why. If it can, the problem is likely that the access error has been fixed since the last crawl. Run a live crawl using the URL Inspection tool and request indexing.
The URL Inspection tool shows no problems, but the Index Coverage report shows an error; why?
You might have fixed the error after the URL was last crawled by Google. Look at the crawl date for your URL (which should be visible in either the URL details page in the Index Coverage report or in the indexed version view in the URL Inspection tool). Determine if you made any fixes since the page was crawled.
How do I find the index state of a specific URL?
To learn the index status of a specific URL, use the URL Inspection tool. You can't search or filter by URL in the Index Coverage report.
The following reasons can be shown for non-indexing, or for problematic indexing, in the Page indexing report:
These pages have not been indexed, but not necessarily because of an error. Read the specific description to see if this is an error that you should address.
Google experienced one of the following redirect errors:
- A redirect chain that was too long
- A redirect loop
- A redirect URL that eventually exceeded the max URL length
- A bad or empty URL in the redirect chain
Use a web debugging tool, such as Lighthouse, to get more details about the redirect.
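The redirect problems listed above (chains that are too long, loops) can be illustrated with a small tracer; the dict standing in for live Location headers is an assumption made to keep the sketch runnable without network access:

```python
def trace_redirects(start, redirect_map, max_hops=10):
    """Walk a url -> target redirect mapping and classify the outcome.

    redirect_map is a plain dict standing in for HTTP Location headers,
    so the chain/loop logic can be shown offline.
    """
    path = [start]
    url = start
    while url in redirect_map:
        url = redirect_map[url]
        if url in path:               # already visited: redirect loop
            return path + [url], "loop"
        path.append(url)
        if len(path) > max_hops:      # chain too long
            return path, "too_long"
    return path, "ok"

# A two-hop chain that resolves, and a loop.
chain = {"/a": "/b", "/b": "/c"}
loop = {"/x": "/y", "/y": "/x"}
print(trace_redirects("/a", chain))  # (['/a', '/b', '/c'], 'ok')
print(trace_redirects("/x", loop))   # (['/x', '/y', '/x'], 'loop')
```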
This page was blocked by your site's robots.txt file. You can verify this using the robots.txt tester. Note that this does not mean that the page won't be indexed through some other means. If Google can find other information about this page without loading it, the page could still be indexed (though this is less common). To ensure that a page is not indexed by Google, remove the robots.txt block and use a 'noindex' directive.
When Google tried to index the page it encountered a 'noindex' directive and therefore did not index it. If you do not want this page indexed, congratulations! If you do want this page indexed, you must remove the 'noindex' tag or HTTP header. To confirm the presence of the directive, request the page in a browser and search the response body and response headers for "noindex". Use the URL Inspection tool to confirm the error:
- Click the inspection icon next to the URL in the table.
- Under Coverage > Indexing > Indexing allowed? the report should show that noindex is preventing indexing.
- Confirm that the noindex tag still exists in the live version:
- Click Test live URL.
- Under Availability > Indexing > Indexing allowed? see if the noindex directive is still detected. If noindex is no longer present, you can click Request Indexing to ask Google to try again to index the page. If noindex is still present, you must remove it in order for the page to be indexed.
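The manual check described above (searching the response body and headers for "noindex") can be sketched as a small helper; the function name and the regular expression are illustrative, not part of any Search Console API, and the regex assumes the common attribute order of a robots meta tag:

```python
import re

def has_noindex(headers, body):
    """Return True if a response carries a noindex directive, either in an
    X-Robots-Tag header or in a robots meta tag in the HTML body."""
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # Matches <meta name="robots" content="...noindex..."> (name before content).
    pattern = (r'<meta[^>]*name=["\']robots["\'][^>]*'
               r'content=["\'][^"\']*noindex')
    return re.search(pattern, body, re.IGNORECASE) is not None

print(has_noindex({"X-Robots-Tag": "noindex"}, ""))                         # True
print(has_noindex({}, '<meta name="robots" content="noindex, nofollow">'))  # True
print(has_noindex({}, "<html><body>hello</body></html>"))                   # False
```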
The page request returns what we think is a soft 404 response. This means that it returns a user-friendly "not found" message without a corresponding 404 response code. We recommend returning a 404 response code for truly "not found" pages, or adding more information to the page to let us know that it is not a soft 404. Learn how to fix this.
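One way to audit your own pages for likely soft 404s is a simple heuristic like the following; the phrase list and function name are illustrative only, and should be tuned to your own error-page templates:

```python
# Phrases that often mark an error page; tune for your own templates.
NOT_FOUND_PHRASES = ("not found", "page does not exist", "no longer available")

def looks_like_soft_404(status_code, body_text):
    """Flag a likely soft 404: an HTTP 200 whose body reads like an error page.
    Such pages should instead return a real 404 (or 410) status code."""
    if status_code != 200:
        return False
    lowered = body_text.lower()
    return any(phrase in lowered for phrase in NOT_FOUND_PHRASES)

print(looks_like_soft_404(200, "Sorry, this page was not found."))  # True
print(looks_like_soft_404(404, "Not found"))                        # False (real 404)
print(looks_like_soft_404(200, "Welcome to our pet store!"))        # False
```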
The page was blocked to Googlebot by a request for authorization (401 response). If you do want Googlebot to be able to crawl this page, either remove authorization requirements for this page, or else allow Googlebot to access your pages by verifying its identity. You can verify this error by visiting the page in incognito mode.
This page returned a 404 error when requested. Google discovered this URL without any explicit request or sitemap; it might have found the URL as a link from another site, or the page may have existed before and been deleted. Googlebot will probably continue to try this URL for some period of time; there is no way to tell Googlebot to permanently forget a URL, although it will crawl it less and less often. 404 responses are not a problem if intentional. If your page has moved, use a 301 redirect to the new location. See Fixing 404 errors.
The user agent provided credentials, but was not granted access. However, Googlebot never provides credentials, so your server is returning this error incorrectly. If you don't want this page to be crawled, then block the page using robots.txt or noindex. If you do want Googlebot to crawl this page, either admit non-signed-in users or explicitly allowlist Googlebot.
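Allowlisting Googlebot safely means verifying its identity with a reverse DNS lookup followed by a confirming forward lookup. This sketch injects the resolver functions so the logic can be exercised without live DNS; the IP address and hostname are examples only:

```python
import socket

def is_verified_googlebot(ip, reverse=socket.gethostbyaddr,
                          forward=socket.gethostbyname_ex):
    """Verify a claimed Googlebot IP: reverse-resolve it to a hostname,
    check the hostname's domain, then forward-resolve the hostname and
    confirm it maps back to the same IP."""
    try:
        host = reverse(ip)[0]
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in forward(host)[2]  # ipaddrlist of the forward lookup
    except OSError:
        return False

# Exercise the logic with fake resolvers (example IP/hostname).
fake_reverse = lambda ip: ("crawl-66-249-66-1.googlebot.com", [], [ip])
fake_forward = lambda host: (host, [], ["66.249.66.1"])
print(is_verified_googlebot("66.249.66.1", fake_reverse, fake_forward))  # True

spoofed_reverse = lambda ip: ("evil.example.com", [], [ip])
print(is_verified_googlebot("66.249.66.1", spoofed_reverse, fake_forward))  # False
```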
The server encountered a 4xx error not covered by any other issue type described here. Try debugging your page using the URL Inspection tool.
The page is currently blocked by a URL removal request. If you are a verified site owner, you can use the URL removals tool to see who submitted a URL removal request. Removal requests are only good for about 90 days after the removal date. After that period, Googlebot may go back and index the page even if you do not submit another index request. If you don't want the page indexed, use 'noindex', require authorization for the page, or remove the page.
Crawled - currently not indexed
The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.
The page was found by Google, but not crawled yet. Typically, Google wanted to crawl the URL but this was expected to overload the site; therefore Google rescheduled the crawl. This is why the last crawl date is empty on the report.
This page is a duplicate of a page that Google recognizes as canonical. This page correctly points to the canonical page, which is indexed, so there is nothing for you to do.
This page has duplicates, none of which is marked canonical. We think this page is not the canonical one. You should explicitly mark the canonical for this page. Inspecting this URL should show the Google-selected canonical URL.
This page is marked as canonical for a set of pages, but Google thinks another URL makes a better canonical. Google has indexed the page that we consider canonical rather than this one. We recommend that you explicitly mark this page as a duplicate of the canonical URL. This page was discovered without an explicit crawl request. Inspecting this URL should show the Google-selected canonical URL.
The URL is a redirect, and therefore was not added to the index.
Warnings are listed in the Improve page experience table on the summary page of the Page indexing report. These issues didn't prevent a page from being indexed, but they do reduce Google's ability to understand and index your pages.
The page was indexed, despite being blocked by your website's robots.txt file. Google always respects robots.txt, but this doesn't necessarily prevent indexing if someone else links to your page. Google won't request and crawl the page, but we can still index it, using the information from the page that links to your blocked page. Because of the robots.txt rule, any snippet shown in Google Search results for the page will probably be very limited.
- If you do want to block this page from Google Search, robots.txt is not the correct mechanism to prevent indexing; instead, remove the robots.txt block and use 'noindex'.
- If you do not want to block this page, update your robots.txt file to unblock your page. You can use the robots.txt tester to determine which rule is blocking this page.
This page appears in the Google index, but for some reason Google could not read the content. Possible reasons are that the page might be cloaked to Google or the page might be in a format that Google can't index. This is not a case of robots.txt blocking. Inspect the page, and look at the Coverage section for details.
You can see your indexed URL count in the graph on the summary page. You can see an example list of URLs and more information about them by clicking View data about indexed pages below the graph.
The page has been indexed successfully. However, it might have other issues that should be addressed, such as mobile usability or structured data issues. Any other issues will be described in the appropriate section in the URL Inspection report.