Site crawl errors report
The Crawl Errors report for websites provides details about the site URLs that Google could not successfully crawl or that returned an HTTP error code.
If you are using the report for apps, you can find the app documentation here.
The report has two main sections:
Site errors

This section shows the main issues for the past 90 days that prevented Googlebot from accessing your entire site (click any box to display its chart):
- DNS errors—failures in the Domain Name System, which translates your site's name into the numerical address of the server that provides your website files.
- Server errors—problems with your host provider's server, such as connection reliability and speed.
- robots.txt fetch failure—Google must successfully fetch this file before it can crawl your site; it tells Google which pages it can and cannot access.
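The three site-level checks above can be reproduced from the command line or a short script. The following is a minimal diagnostic sketch using only Python's standard library; `check_site` and the hostname passed to it are illustrative names, not part of the report itself:

```python
import socket
import urllib.error
import urllib.request

def check_site(hostname):
    """Run the three site-level checks Googlebot depends on:
    DNS resolution, server response, and robots.txt access."""
    results = {}

    # DNS: can the hostname be resolved to an IP address?
    try:
        results["dns"] = socket.gethostbyname(hostname)
    except socket.gaierror as exc:
        results["dns"] = f"DNS error: {exc}"

    # Server + robots.txt: fetch the file Google requests first.
    # An HTTP error code or a connection failure here mirrors the
    # server-error and robots.txt categories in the report.
    try:
        url = f"https://{hostname}/robots.txt"
        with urllib.request.urlopen(url, timeout=10) as resp:
            results["robots_txt"] = resp.status  # 200 = reachable
    except urllib.error.HTTPError as exc:
        results["robots_txt"] = exc.code         # e.g. 404 or a 5xx
    except (urllib.error.URLError, TimeoutError) as exc:
        results["robots_txt"] = f"connection error: {exc}"

    return results
```

A hostname that fails the DNS check here will also fail for Googlebot; a 5xx status on robots.txt causes Google to postpone crawling the whole site.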
URL errors

This section lists specific errors Google encountered when trying to crawl specific desktop, phone, or Android app pages. Each main section of the URL errors report corresponds to a different crawling mechanism Google uses to access your pages, and the errors listed are specific to those kinds of pages:
- Desktop
Desktop or web pages accessed by Googlebot, Google's main web crawler.
- Smartphone
Smartphone pages accessed by Googlebot for smartphones.
- Feature Phone
Feature phone pages accessed by Googlebot for feature phones.
- Android Applications
Android app URIs crawled for deep linking with your site pages.
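Deep linking works by declaring, in the app's manifest, which site URLs an in-app screen can handle. The fragment below is a hypothetical `AndroidManifest.xml` intent filter (the activity name, host, and path are placeholders), showing the kind of mapping Google crawls for this section of the report:

```xml
<!-- Maps https://example.com/products/* URLs to an in-app activity,
     letting Google associate those site pages with app screens. -->
<activity android:name=".ProductActivity">
  <intent-filter android:autoVerify="true">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="https"
          android:host="example.com"
          android:pathPrefix="/products" />
  </intent-filter>
</activity>
```

Errors in this section typically mean an app URI declared this way could not be crawled or did not resolve to working app content.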