Learn how Google gets your site content
Google uses a combination of software programs and algorithms to retrieve your web content so that Google Search users who might be interested in your site can find it. These processes are mostly automatic and require little to no effort on your part.
Crawling is the process Google runs to gather public web content for Google Search results. For crawling, Google uses specialized software programs, called web crawlers, that find and retrieve webpages automatically.
Put simply, web crawlers fetch content from the web for search engines. They operate by following links from page to page, downloading the pages they encounter and storing copies for later use. Complex algorithms then sort and analyze those copies to update Google Search results. Google's main web crawler is called Googlebot.
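The link-following behavior described above can be sketched in a few lines. This is a simplified illustration, not how Googlebot is actually implemented: it crawls breadth-first, and the `fetch` callable and the in-memory `site` dictionary are stand-ins for real HTTP requests so the sketch runs without network access.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(start_url, fetch, max_pages=100):
    """Breadth-first crawl: fetch a page, queue its links, repeat.

    `fetch` is any callable returning HTML for a URL; a real crawler
    would issue an HTTP request here.
    """
    seen = {start_url}
    queue = deque([start_url])
    pages = {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        html = fetch(url)
        pages[url] = html  # store a copy of the page for later analysis
        parser = LinkExtractor(url)
        parser.feed(html)
        for link in parser.links:  # follow links from page to page
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return pages

# Tiny in-memory "web" (hypothetical URLs) standing in for real sites.
site = {
    "https://example.com/": '<a href="/a">A</a> <a href="/b">B</a>',
    "https://example.com/a": '<a href="/b">B again</a>',
    "https://example.com/b": "no links here",
}
pages = crawl("https://example.com/", fetch=lambda url: site[url])
print(sorted(pages))  # all three pages discovered by following links
```

Starting from a single URL, the crawler discovers every page reachable through links, which is why pages with no inbound links can go undiscovered.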
If Google cannot properly crawl or render your pages, your site's visibility and appearance in Google Search results can be affected in two ways:
- If Google is unable to crawl your site, we cannot get any information about it. Google Search might not discover all parts of your site, or might fail to identify the user queries for which your pages are most relevant and should appear in Google Search results.
- If Google cannot render the pages on your site, it is harder for us to understand your content because we are missing key visual layout information about your pages. As a result, the visibility of your content in Google Search can suffer. We render your pages to estimate their value to different audiences and to determine where links to your site appear in Google Search results.
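One common reason Google is unable to crawl a page is a `robots.txt` rule that blocks it. As a quick sanity check, Python's standard library can evaluate such rules; the directory path below is a hypothetical example, and in practice the file is fetched from your site's `/robots.txt`.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks Googlebot from one directory.
# On a live site, RobotFileParser.set_url() and .read() would fetch
# the real file from https://your-site/robots.txt instead.
rules = """
User-agent: Googlebot
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Check whether Googlebot is allowed to crawl specific URLs.
allowed = parser.can_fetch("Googlebot", "https://example.com/blog/post")
blocked = parser.can_fetch("Googlebot", "https://example.com/private/report")
print(allowed, blocked)
```

A `False` result for a page you want indexed is a sign that a `Disallow` rule is keeping Googlebot out.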
Fortunately, you can use the Fetch as Google tool to diagnose how Google crawls and renders your pages and to improve your presence in Google Search results, ultimately helping you reach your target audience.