How to fix: Mobile page not crawlable due to robots.txt

Update your robots.txt file to allow user-agents "Googlebot" and "Googlebot-Image" to crawl your site

One or more of your products uses the link [link] attribute to specify a landing page that cannot be crawled by Google, because robots.txt forbids Google's crawler from downloading the landing page when crawling with a mobile user-agent. This prevents Google from performing the automated quality and policy checks on product landing pages that ensure a seamless user experience. Affected products will stop showing in Shopping ads and free product listings for desktop and mobile devices until we are able to crawl the landing page.

To resolve this issue, update the robots.txt file on your web server to allow Google's crawler to fetch the provided landing pages. The robots.txt file can usually be found in the root directory of the web server. For Google to access your whole site, ensure that your robots.txt file allows both user-agents "Googlebot" (used for landing pages) and "Googlebot-Image" (used for images) to crawl your site. Add the following lines to your robots.txt file:

User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow:

Learn more about how to configure robots.txt. You can test your current configuration with the URL Inspection tool. Make sure to select "Smartphone" as the type of fetch.
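The URL Inspection tool is the authoritative check, but you can also sanity-check a robots.txt draft locally before deploying it. Below is a minimal sketch using Python's standard-library `urllib.robotparser`; the rules and URLs are made-up examples, not your site's actual configuration:

```python
import urllib.robotparser

# Hypothetical robots.txt rules for illustration: they explicitly allow
# "Googlebot" and "Googlebot-Image" everywhere, while keeping every other
# crawler out of a /private/ section.
rules = """\
User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow:

User-agent: *
Disallow: /private/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())
parser.modified()  # mark the rules as loaded so can_fetch() evaluates them

# Googlebot may fetch landing pages and Googlebot-Image may fetch images:
print(parser.can_fetch("Googlebot", "https://www.example.com/product"))        # True
print(parser.can_fetch("Googlebot-Image", "https://www.example.com/img.png"))  # True
# Other crawlers are still kept out of the blocked section:
print(parser.can_fetch("SomeOtherBot", "https://www.example.com/private/x"))   # False
```

Note that an empty `Disallow:` directive means "nothing is disallowed", i.e. the named user-agent may crawl everything.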

If you fix these issues and update your items via a new feed upload or the Content API, the errors you see should disappear within a couple of days. This time allows Google to verify that it can access the provided landing pages, after which the items will start showing in Shopping ads and free listings again. To speed up the process, you can increase Google's crawl rate.
