Crawl errors

robots.txt fetch error

What is a robots.txt fetch error?

Before Googlebot crawls your site, it accesses your robots.txt file to determine if your site is blocking Google from crawling any pages or URLs. If your robots.txt file exists but is unreachable (in other words, if it doesn't return a 200 or 404 HTTP status code), we'll postpone our crawl rather than risk crawling URLs that you do not want crawled. When this happens, Googlebot will return to your site and crawl it as soon as we can successfully access your robots.txt file. For more information, see the robots exclusion protocol documentation.
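
In rough terms, the decision described above can be sketched as in the Python example below. This is only an illustration of the rule, not Googlebot's actual implementation; the site URL, the example page path, and the use of Python's built-in robotparser module are assumptions made for the example.

    import urllib.error
    import urllib.request
    import urllib.robotparser

    SITE = "https://example.com"          # placeholder site used for illustration
    ROBOTS_URL = SITE + "/robots.txt"

    def robots_fetch_outcome(url):
        # Classify the robots.txt fetch as described above:
        # 200 -> crawl while obeying the rules, 404 -> crawl with no restrictions,
        # any other outcome -> postpone the crawl.
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                status = resp.getcode()
                body = resp.read().decode("utf-8", errors="replace")
        except urllib.error.HTTPError as e:
            status, body = e.code, ""
        except urllib.error.URLError:
            return "postpone crawl (robots.txt unreachable)"

        if status == 200:
            parser = urllib.robotparser.RobotFileParser()
            parser.parse(body.splitlines())
            allowed = parser.can_fetch("Googlebot", SITE + "/some-page")
            return "crawl, obeying robots.txt (example URL allowed: %s)" % allowed
        if status == 404:
            return "crawl everything (no robots.txt, no restrictions)"
        return "postpone crawl (robots.txt returned HTTP %d)" % status

    print(robots_fetch_outcome(ROBOTS_URL))

The key point the sketch captures is that only a 200 or 404 response lets crawling proceed; every other response or failure postpones the crawl.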

How to deal with robots.txt file errors

  • You don't always need a robots.txt file.
    You need a robots.txt file only if your site includes content that you don't want search engines to index. If you want search engines to index everything in your site, you don't need a robots.txt file—not even an empty one. If you don’t have a robots.txt file, your server will return a 404 when Googlebot requests it, and we will continue to crawl your site. No problem.
  • Make sure your robots.txt file can be accessed by Google.
    It's possible that your server returned a 5xx (unreachable) error when we tried to retrieve your robots.txt file. Check that your hosting provider is not blocking Googlebot, and if you have a firewall, make sure that its configuration is not blocking Google. A quick way to check this yourself is sketched after this list.
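
As a rough self-check (not an official diagnostic), you can request your robots.txt file yourself and confirm that it returns a 200 or 404 status rather than a 5xx error or a timeout. The Python sketch below assumes a placeholder URL and uses a Googlebot-style User-Agent string so that user-agent-based filtering by a firewall or hosting provider also shows up when you compare it with a plain browser-style request.

    import urllib.error
    import urllib.request

    # Placeholder URL: replace with your own site's robots.txt.
    ROBOTS_URL = "https://example.com/robots.txt"

    # Googlebot-style User-Agent string; some firewalls filter on this header.
    GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                    "+http://www.google.com/bot.html)")

    def robots_status(url, user_agent):
        # Request robots.txt and report the HTTP status code, or why it failed.
        req = urllib.request.Request(url, headers={"User-Agent": user_agent})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.getcode()
        except urllib.error.HTTPError as e:
            return e.code                     # 4xx/5xx responses land here
        except urllib.error.URLError as e:
            return "unreachable (%s)" % e.reason

    for label, ua in (("Googlebot-style UA", GOOGLEBOT_UA),
                      ("Browser-style UA", "Mozilla/5.0")):
        print("%s: %s" % (label, robots_status(ROBOTS_URL, ua)))

A 200 or 404 result for both requests means the file should be reachable; a 5xx result or a timeout, especially only for the Googlebot-style request, points at the kind of server or firewall blocking described above.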