How to fix: Desktop page not crawlable due to robots.txt

Update your robots.txt file to allow user-agents "Googlebot" and "Googlebot-Image" to crawl your site

Some of your products specify a landing page (via the link [link] attribute) that cannot be crawled by Google because robots.txt forbids Google's crawler from downloading it. These products will remain disapproved and will not appear in your Shopping ads and free product listings until we are able to crawl the landing page.

Update the robots.txt file on your web server to allow Google's crawler to fetch the provided landing pages. The robots.txt file can usually be found in the root directory of the web server (for example, http://www.example.com/robots.txt).

To allow us to access your whole site, make sure that your robots.txt file permits both user-agents "Googlebot" (used for landing pages) and "Googlebot-Image" (used for images) to crawl your full site.

You can allow a full-site crawl by changing your robots.txt file as follows:

User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow:

You can learn more about how to configure robots.txt in the Google Search Central documentation. You can test your current configuration with the URL Inspection tool in Search Console.
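Before deploying a change, you can also sanity-check a robots.txt configuration locally. The sketch below uses Python's standard urllib.robotparser to confirm that rules like the example above allow both crawlers; the URL paths are hypothetical placeholders.

```python
# Sketch: verify locally that a robots.txt configuration permits Googlebot.
# The rules string mirrors the full-site-crawl example above; an empty
# "Disallow:" line means nothing is blocked for that user-agent.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow:
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Both the landing-page crawler and the image crawler should be allowed
# to fetch any URL on the site (paths below are hypothetical examples).
print(parser.can_fetch("Googlebot", "https://www.example.com/products/widget"))
print(parser.can_fetch("Googlebot-Image", "https://www.example.com/images/widget.jpg"))
```

If either call prints False, the rules would still block one of Google's crawlers and the products would remain disapproved.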

If you have fixed these issues and updated your products via a new feed upload or the Content API, the errors you see here should disappear within a couple of days. This time allows us to verify that we can crawl the landing pages that are provided, after which the products will start showing up in your Shopping ads and listings again. If you want to speed up the process you can increase Google's crawl rate.
