Use Fetch as Google for websites

Test if Google can crawl your web page

The Fetch as Google tool enables you to test how Google crawls or renders a URL on your site. You can use Fetch as Google to see whether Googlebot can access a page on your site, how it renders the page, and whether any page resources (such as images or scripts) are blocked to Googlebot. The tool simulates Google's normal crawl and render process, and is useful for debugging crawl issues on your site.

Open Fetch as Google for sites

For mobile apps, use the equivalent Fetch as Google for Apps tool.

Run a fetch

  1. In the textbox, enter the path component of a URL on your site that you want Googlebot to fetch, relative to the site root. Leaving the textbox blank fetches the site root page. For example, if the current property is http://example.com, a request for stores/indiana/1234.html would fetch http://example.com/stores/indiana/1234.html.
    Fetch restrictions:
    • Fetched URLs are limited to the current site: for example, if the current Search Console property is http://example.com, you cannot fetch a URL from https://example.com or http://m.example.com.
    • The fetch does not send any cookies, login information, or other state information.
    • The fetch will not follow a redirect. If you fetch a page with a redirect, you will have to follow it manually as described in the "Redirected" fetch status description below.
  2. Optionally choose a type of Googlebot you wish to perform the fetch as. This affects the crawler making the fetch, and also the rendering for a Fetch and Render request. The following types are available:
    1. Desktop [Default]
    2. Mobile: Smartphone
    3. Mobile: cHTML (a subset of mostly Japanese feature phones). Rendering not supported.
    4. Mobile: XHTML/WML (feature phones). Rendering not supported.
  3. Click either Fetch or Fetch and Render:
    • Fetch: Fetches a specified URL in your site and displays the HTTP response. Does not request or run any associated resources (such as images or scripts) on the page. This is a relatively quick operation that you can use to check or debug suspected network connectivity or security issues with your site, and see the success or failure of the request.
    • Fetch and render: Fetches a specified URL in your site, displays the HTTP response and also renders the page according to a specified platform (desktop or smartphone). This operation requests and runs all resources on the page (such as images and scripts). Use this to detect visual differences between how Googlebot sees your page and how a user sees your page.
  4. The request will be added to the fetch history table, with a "pending" status. When the request is complete, the row will show the success or failure of the request and some basic information. Click any non-failed fetch row in the table to get additional details about the request, including raw HTTP response headers and data, and (for Fetch and Render) a list of blocked resources and a view of the rendered page.
  5. If the request succeeded and is less than four hours old, you can tell Google to re-crawl and possibly re-index the fetched page, and optionally any pages that the fetched page links to.
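The URL-building behavior in step 1 can be sketched in a few lines of Python. This is only an illustration of how the path component is joined to the property root; the helper name and the user-agent constant are assumptions for this example, not part of the tool.

```python
import urllib.request

# Desktop Googlebot user agent (the tool's default fetch type); the value
# here is the publicly documented Googlebot UA string, used illustratively.
GOOGLEBOT_DESKTOP_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

def build_fetch_request(property_root, path):
    """Join a root-relative path to the property root; an empty path
    fetches the site root page. No cookies or login state are attached."""
    url = property_root.rstrip("/") + "/" + path.lstrip("/")
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_DESKTOP_UA})

req = build_fetch_request("http://example.com", "stores/indiana/1234.html")
print(req.full_url)  # http://example.com/stores/indiana/1234.html
```

As in the tool, passing an empty path yields the site root (http://example.com/).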

You have a weekly quota of 500 fetches. When you are approaching your limit, you will see a notification on the page.

Request fetch status

The fetch history table shows the last 100 fetch requests. To see details for a completed fetch, click on the corresponding row in the fetch history table. The following request fetch statuses can be displayed:

  • Complete: Google successfully contacted your site and crawled your page, and can get all resources referenced by the page. Click the table row to see more details about the fetch results.
  • Partial: Google got a response from your site and fetched the URL, but could not reach all resources referenced by the page because they were blocked by robots.txt files. If this is a fetch only, do a fetch and render. Examine the rendered page to see if any significant resources were blocked that could prevent Google from properly analyzing the meaning of the page. If significant resources were blocked, unblock the resources on robots.txt files that you own. For resources blocked by robots.txt files that you don't own, reach out to the resource site owners and ask them to unblock those resources to Googlebot. See the list of resource fetch error descriptions.
  • Redirected: The server responded with a redirect. Although the actual Google crawler follows redirects, the Fetch as Google tool does not; you must follow a redirect manually:
    • If the redirect is to the same property, the tool displays a button that lets you quickly follow the redirect by populating the fetch box with the redirect URL.
    • If the URL redirects to another property that you own, you can click "Follow" to autopopulate the URL box, then copy the URL, switch views to the new site, and then paste the URL into the fetch box.
    You can inspect the HTTP response on the fetch details page to see the redirect details. Locate the HTTP status code to learn more. Redirects can be triggered by the server or by meta tags or JavaScript on the page itself.
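Following a redirect manually amounts to reading the Location header out of the 3xx response shown on the fetch details page. A minimal sketch of that check, assuming you have the status code and headers in hand (the helper name and the example URLs are illustrative):

```python
def redirect_target(status, headers):
    """Return the Location header for a 3xx response, or None otherwise."""
    if 300 <= status < 400:
        return headers.get("Location")
    return None

# A hypothetical 301 response redirecting within the same property:
print(redirect_target(301, {"Location": "http://example.com/new-page"}))
# -> http://example.com/new-page

# A 200 response is not a redirect, so there is nothing to follow:
print(redirect_target(200, {}))
# -> None
```

The returned URL is what you would paste back into the fetch box to continue the fetch by hand.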

Resource fetch errors

If the fetch request status is Partial, click the request to open the request details page. The table on the page will list any errors encountered. Typically the errors are due to blocked resources on the page. The following resource errors can occur in a fetch request:

Resource fetch error list. Each status below is followed by an explanation, then by notes and next steps.

Not found

The resource could not be found (404 or 410 HTTP response codes).

This error indicates that you would likely see an HTTP 404 error code if you accessed the resource in a web browser.

Not authorized

Googlebot isn't authorized to access the page (for example, if the page requires a password).

This error indicates that you would likely see an HTTP 403 error code if you accessed the resource in a web browser.

DNS not found

Google couldn’t retrieve the resource because the domain name wasn’t found.

Make sure that you typed in your domain name properly (for example, www.example.com) so that Google can find your site server.

Blocked

The resource's host is blocking access to Googlebot by means of a robots.txt file.

This error is a common problem that you can fix by updating your robots.txt file. If your property address is at the root domain level (for example, www.example.com, not www.example.com/my_site/), you can use the robots.txt Tester tool to diagnose why the URL is blocked from Google.
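You can reproduce this kind of robots.txt check locally with Python's standard-library parser. The rules below are hypothetical; the point is simply that a Disallow rule for Googlebot on the resource's host is what produces the Blocked status.

```python
from urllib import robotparser

# Hypothetical robots.txt rules blocking a resource directory from Googlebot.
robots_lines = [
    "User-agent: Googlebot",
    "Disallow: /private-scripts/",
]

rp = robotparser.RobotFileParser()
rp.parse(robots_lines)

# A script under the disallowed directory is blocked to Googlebot...
print(rp.can_fetch("Googlebot", "http://www.example.com/private-scripts/app.js"))
# -> False

# ...while a page outside it remains crawlable.
print(rp.can_fetch("Googlebot", "http://www.example.com/index.html"))
# -> True
```

If a page depends on a script like the one above for its rendered content, unblocking that path in robots.txt would turn a Partial fetch into a Complete one.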

Unreachable robots.txt

Googlebot can’t reach the resource host's robots.txt file. When that happens, Google avoids loading any resources from that host.

To resolve this issue, read our Help Center articles on how to create and test robots.txt files.

Unreachable

The resource host either took too long to reply or refused the request.

Check to see that your server is up and running.

Temporarily unreachable

Either of the following occurred:

  • Fetch as Google can’t currently fetch your URL because the server took too long to reply.
  • Fetch as Google cancelled your fetch because too many consecutive requests were made to the server for different URLs.

Note that the URL is not unreachable for all of Google; it is just unreachable for the Fetch as Google simulation tool.

Error

An unspecified error prevented Google from completing the fetch.

If this error happens again, we ask that you contact Search Console product support.