Content policies for Google Search

Google uses automated systems to discover content from the web and other sources. These systems generate search results that provide useful and reliable responses to billions of searches we process each day.

Because Search encompasses trillions of web pages, images, videos, and other content, the results might occasionally contain material that some find objectionable, offensive, or problematic.

We’ve carefully developed the content policies for Google Search listed below to balance real concerns about such issues with a search engine’s need to provide access to information. Our automated systems are designed not to surface content that violates our policies, and in some cases we may manually remove violating content. Learn more about how we maximize access to information.

Overall content policies for Google Search

These policies apply to content surfaced anywhere within Google Search, which includes web results. Web results are web pages, images, videos, news content or other material that Google finds from across the web.


Child sexual abuse imagery or exploitation material

We block search results that lead to child sexual abuse imagery or material that appears to victimize, endanger, or otherwise exploit children. Learn how to report child sexual abuse imagery.

Highly personal information

Google might remove certain personal information that creates significant risks of identity theft, financial fraud, or other specific harms, including, but not limited to, doxxing content, explicit personal images, and involuntary fake pornography. Learn how to remove your personal information from Google.


Spam

We take action against spam, which is content that exhibits behavior designed to deceive users or manipulate our search systems. Learn more about our spam policies for Google web search.

Webmaster & site owner requests

Upon request, we remove content that webmasters or site owners wish to block from our web results. Learn how to block access to your content and how to remove information from Google.

Valid legal requests

We remove content or features from our Search results for legal reasons. For example, we remove content if we receive valid notification under the US Digital Millennium Copyright Act (DMCA). We also remove content from local versions of Google, consistent with local law, when we're notified that content is an issue. For example, we remove content that illegally glorifies the Nazi party from our German service, or that unlawfully insults religion from our Indian service. We delist pages on name queries, based on data-protection requests, under what’s commonly known as the “Right to be Forgotten” in the EU. We scrutinize these requests to ensure that they're well-founded, and we frequently refuse to remove content when there's no clear basis in law.

When possible, we display a notification that results have been removed, and we report these removals to the Lumen database, a project run by the Berkman Klein Center for Internet & Society that tracks online restrictions on speech. We also disclose certain details about legal removals from our Search results through our Transparency Report. Learn how to make a Legal Removals Request.

Search features policies

These policies apply to many of our search features. Although these features and the content within them are automatically generated, just as web results are, the way they're presented might be interpreted as conveying greater quality or credibility than web results. We also don't want predictive or refinement features to unexpectedly shock or offend people. Search features covered by these policies include panels, carousels, enhancements to web listings (such as through structured data), predictive and refinement features, and results and features spoken aloud. These policies don't apply to web results.



Ads

We don’t allow content that primarily advertises products or services, which includes direct calls to purchase, links to other websites, company contact information, and other promotional tactics. We don’t allow sponsored content that’s concealed or misrepresented as independent content.

Dangerous content

We don’t allow content that could directly facilitate serious and immediate harm to people or animals. This includes, but isn't limited to, dangerous goods, services, or activities, and the promotion of self-harm, such as mutilation, eating disorders, or drug abuse.

Deceptive practices

We don’t allow content or accounts that impersonate any person or organization, misrepresent or hide ownership or primary purpose, or engage in false or coordinated behavior to deceive, defraud, or mislead. This includes, but isn’t limited to:

  • Misrepresentation or concealment of country of origin, government or political interest group affiliation.
  • Directing content to users in another country under false premises.
  • Working together in ways that conceal or misrepresent information about relationships or editorial independence.

This policy doesn't cover content with certain artistic, educational, historical, documentary, or scientific considerations, or other substantial benefits to the public.

Harassing content

We don’t allow harassment, bullying, or threatening content. This includes, but isn't limited to, content which might:

  • Single someone out for malicious abuse.
  • Threaten someone with serious harm.
  • Sexualize someone in an unwanted way.
  • Expose private information of someone that could be used to carry out threats.
  • Disparage or belittle victims of violence or tragedy.
  • Deny an atrocity.
  • Cause harassment in other ways.

Hateful content

We don't allow content that promotes or condones violence against, promotes discrimination against, disparages, or has the primary purpose of inciting hatred against a group. This includes, but isn't limited to, targeting on the basis of race, ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that's associated with systemic discrimination or marginalization (such as refugee status, immigration status, caste, poverty, or homelessness).

Manipulated media

We don't allow audio, video, or image content that's been manipulated to deceive, defraud, or mislead by means of creating a representation of actions or events that verifiably didn't take place. This includes if such content would cause a reasonable person to have a fundamentally different understanding or impression, such that it might cause significant harm to groups or individuals, or significantly undermine participation or trust in electoral or civic processes.

Medical content

We don't allow content that contradicts scientific or medical consensus and evidence-based best practices.

Regulated goods

We don’t allow content that primarily facilitates the promotion or sale of regulated goods and services, such as alcohol, gambling, pharmaceuticals, unapproved supplements, tobacco, fireworks, weapons, or health and medical devices.

Sexually explicit content

We don’t allow content that contains nudity, graphic sex acts, or sexually explicit material. Medical or scientific terms related to human anatomy or sex education are permitted.

Terrorist content

We don’t allow content that promotes terrorist or extremist acts, which includes recruitment, inciting violence, or the celebration of terrorist attacks.

Violence & gore

We don’t allow violent or gory content that's primarily intended to be shocking, sensational, or gratuitous.

Vulgar language & profanity

We don’t allow obscenities or profanities that are primarily intended to be shocking, sensational, or gratuitous.

Feature-specific policies

Some search features have specific policies that are necessary because of the particular ways those features work. To learn more, refer to each feature's policy page.
