Images cannot be crawled because of a robots.txt restriction

What's the problem?

Some of your items specify an image (via the 'image link' attribute) that Google cannot crawl because your robots.txt file forbids Google's crawler from downloading it. These items will remain disapproved until we are able to crawl the image.

Why should you fix this?

Images are a key part of online shoppers' buying decisions, so all items in Shopping ads require an image.

How can you fix this?

Update the robots.txt file on your web server to allow Google's crawler to fetch the provided images. The robots.txt file is usually located in the root directory of the web server (e.g. http://www.example.com/robots.txt). For us to access your whole site, ensure that your robots.txt file allows both user-agents 'Googlebot-Image' (used for images) and 'Googlebot' (used for web pages) to crawl your site. You can do this by changing your robots.txt file as follows:

User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow:
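
If your robots.txt intentionally restricts other crawlers or parts of your site, you do not need to open everything. As a sketch, assuming your images live under a hypothetical /images/ directory, the following keeps that directory blocked for other crawlers while giving Google's crawlers full access (each crawler obeys only the most specific user-agent group that matches it):

User-agent: *
Disallow: /images/

User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow: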

You can learn more about how to configure robots.txt in Google's robots.txt documentation, and you can test your current configuration with the Fetch as Google tool.
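
If you prefer to check programmatically before resubmitting your items, the short Python sketch below uses the standard library's urllib.robotparser to test whether a given image URL is crawlable for each user-agent. The URLs are placeholders for your own domain and image path, and this parser is a simplified approximation of Google's matching rules, so confirm the result with Google's own testing tool.

from urllib.robotparser import RobotFileParser

# Placeholder URLs: substitute your own domain and image path.
ROBOTS_URL = "http://www.example.com/robots.txt"
IMAGE_URL = "http://www.example.com/images/product-123.jpg"

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetches and parses the live robots.txt

# Check both user-agents that must be allowed to crawl your site.
for agent in ("Googlebot", "Googlebot-Image"):
    allowed = parser.can_fetch(agent, IMAGE_URL)
    print(agent + ": " + ("allowed" if allowed else "blocked"))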

If you have fixed these issues and updated your items via a new feed upload or the Content API, the errors should disappear within a couple of days. This time allows us to verify that we can crawl the provided images, after which the items will start showing in Shopping ads again. If you want to speed up the process, you can increase Google's crawl rate.
