Product pages cannot be crawled because of a robots.txt restriction

What's the problem?

Some of your items specify a landing page (via the 'link' attribute) that cannot be crawled by Google because the robots.txt file on your web server forbids Google's crawler from downloading it. These items will remain disapproved and will not show in your Shopping ads until we are able to crawl the landing page.
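For example, a robots.txt file like the following (shown here only as an illustration) blocks Google's crawler from every page on the site, so none of the landing pages referenced by your items can be fetched:

User-agent: Googlebot
Disallow: /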

Why should you fix this?

Users expect the information on your landing pages to match what is shown in your Shopping ads. To ensure this seamless user experience, we perform automated quality and policy checks on product landing pages. These checks require us to download the landing pages with Google's crawling system.

How can you fix this?

Please update the robots.txt file on your web server to allow Google's crawler to fetch the provided landing pages. The robots.txt file can usually be found in the root directory of the web server (e.g., http://www.example.com/robots.txt). For us to access your whole site, ensure that your robots.txt file allows both user agents 'Googlebot' (used for landing pages) and 'Googlebot-Image' (used for images) to crawl your site. You can do this by changing your robots.txt file as follows:

User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow:
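
The empty Disallow: value means that nothing is disallowed for that user agent, so both crawlers may fetch every page on your site.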

You can learn more about how to configure robots.txt here. You can test your current configuration with the Fetch as Google tool.
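
If you would like to verify a change yourself before resubmitting your items, one option is a quick local check with Python's standard urllib.robotparser module. The sketch below uses a hypothetical landing page URL on www.example.com; replace both URLs with your own. It answers the same question the crawler asks: may a given user agent fetch a given URL?

from urllib.robotparser import RobotFileParser

# Load the robots.txt file for your site.
robots = RobotFileParser()
robots.set_url("http://www.example.com/robots.txt")
robots.read()

# Hypothetical landing page URL; use one of your own 'link' values instead.
landing_page = "http://www.example.com/products/blue-widget"

# can_fetch() returns True if the rules allow the user agent to download the URL.
for agent in ("Googlebot", "Googlebot-Image"):
    allowed = robots.can_fetch(agent, landing_page)
    print(agent, "allowed" if allowed else "blocked")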

If you have fixed these issues and updated your items via a new feed upload or the Content API, the errors you see here should disappear within a couple of days. This time allows us to verify that we can crawl the provided landing pages, after which the items will start showing in your Shopping ads again. If you want to speed up the process, you can increase Google's crawl rate.
