robots.txt is the name of a text file that tells search engines which URLs or directories on a site should not be crawled. This file contains rules that block individual URLs or entire directories from specific crawlers (or all crawlers). It is created by the website owner and stored in the site's root directory in order to prevent the site from spending time and energy serving crawl requests for pages or resources (such as images) that are not important enough to appear in search results. If you run a small site, you probably don't need a robots.txt file.
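As a sketch, a simple robots.txt with both kinds of rules might look like this (the directory and file paths are hypothetical):

```
# Block all crawlers from an entire (hypothetical) directory
User-agent: *
Disallow: /tmp/

# Block only Googlebot from one specific (hypothetical) URL
User-agent: Googlebot
Disallow: /drafts/preview.html
```

Each `User-agent` line names the crawler a group of rules applies to, and each `Disallow` line gives a path prefix that crawler should not request.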

Do not use robots.txt to prevent a page from appearing in search results; use it only to prevent the page from being crawled. Other techniques, such as the noindex rule, are used to keep a page or image out of search results.
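For instance, to keep a page out of search results while still allowing it to be crawled, the usual approach is a noindex rule in the page's head (a minimal sketch; the surrounding page is hypothetical):

```
<!-- Tell crawlers not to include this page in search results -->
<meta name="robots" content="noindex">
```

Note that a crawler can only see this tag if it is allowed to crawl the page, which is why blocking the page in robots.txt would defeat the purpose.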
