Product pages cannot be crawled because of a robots.txt restriction
What's the problem?
Some of your items specify a landing page (via the 'link' attribute) that Google cannot crawl because your robots.txt file forbids Google's crawler from downloading it. These items will remain disapproved and will not show on Google Shopping until we are able to crawl the landing page.
Why should you fix this?
Google Shopping users expect the information on your landing pages to match what is shown on Google Shopping. To ensure this seamless user experience, we perform automated quality and policy checks on product landing pages. These checks require us to download the landing pages with Google's crawling system.
How can you fix this?
Please update the robots.txt file on your web server to allow Google's crawler to fetch the landing pages you provide. The robots.txt file is usually located in the root directory of the web server (e.g. http://www.example.com/robots.txt). To give us access to your whole site, ensure that your robots.txt file allows both user agents 'Googlebot' (used for landing pages) and 'Googlebot-Image' (used for images) to crawl your site. You can do this by changing your robots.txt file as follows:
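For example, a minimal robots.txt that grants both crawlers full access looks like this (an empty Disallow rule means nothing is blocked for that user agent; adjust it if parts of your site should stay private):

```
User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow:
```

Note that crawlers use the most specific matching group, so a broader 'User-agent: *' section elsewhere in the file will not override these rules for Googlebot.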
Once you have fixed these issues and updated your items via a new feed upload or the Content API, the errors shown here should disappear within a couple of days. This time allows us to verify that we can crawl the landing pages you provide, after which the items will start showing on Google Shopping again. If you want to speed up the process, you can increase Google's crawl rate.
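Before re-submitting your feed, you can sanity-check your robots.txt rules locally. A small sketch using Python's standard urllib.robotparser module (the rules and URLs below are placeholders for your own site):

```python
# Check whether a given user agent may fetch a URL under a robots.txt file,
# using Python's standard-library robots.txt parser.
from urllib.robotparser import RobotFileParser

# Example robots.txt: Googlebot may crawl everything; all other
# crawlers are blocked from /private/.
robots_txt = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot matches its own group, where nothing is disallowed.
print(parser.can_fetch("Googlebot", "http://www.example.com/product?id=1"))

# Other crawlers fall back to the '*' group and are blocked from /private/.
print(parser.can_fetch("SomeBot", "http://www.example.com/private/page"))
```

In production you would point the parser at your live file with `set_url("http://www.example.com/robots.txt")` followed by `read()` instead of parsing an inline string.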