Google Publisher Policies

Google helps enable a free and open web by helping publishers monetise their content and advertisers reach prospective customers with useful, relevant products and services. Maintaining trust in the ads ecosystem requires setting limits on what we'll monetise.

When you monetise your content with Google ad code, you're required to adhere to the following policies. By 'content', we mean anything present on your page or app – including other advertisements and links to other sites or apps. Failure to comply with these policies may result in Google blocking ads from appearing against your content, or suspending or terminating your account.

These restrictions apply in addition to any other policies governing your use of Google publisher products.

Illegal content

We do not allow content that:

  • is illegal, promotes illegal activity or infringes on the legal rights of others.

Intellectual property abuse

We do not allow content that:

  • infringes copyright. It's our policy to respond to notices of alleged infringement that comply with the Digital Millennium Copyright Act (DMCA). You can file a counter notification via this form.
  • sells or promotes the sale of counterfeit products. Counterfeit goods contain a trademark or logo that is identical to or substantially indistinguishable from the trademark of another. They mimic the brand features of the product in an attempt to pass themselves off as a genuine product of the brand owner.

Endangered or threatened species

We do not allow content that:

  • promotes the sale of products obtained from endangered or threatened species.

    Examples: Sale of tigers, shark fins, elephant ivory, tiger skins, rhino horn, dolphin oil

Dangerous or derogatory content

We do not allow content that:

  • incites hatred against, promotes discrimination against, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity or other characteristic that is associated with systemic discrimination or marginalisation.

    Examples: Promoting hate groups or hate group paraphernalia, encouraging others to believe that a person or group is inhuman, inferior or worthy of being hated

  • harasses, intimidates or bullies an individual or group of individuals.

    Examples: Singling out someone for abuse or harassment, suggesting that a tragic event did not happen or that victims or their families are actors or complicit in a cover-up of the event

  • threatens or advocates physical or mental harm to oneself or others.

    Examples: Content advocating suicide, anorexia or other self-harm; promoting or advocating harmful health or medical claims or practices; threatening someone with real-life harm or calling for the attack of another person; promoting, glorifying or condoning violence against others; content made by or in support of terrorist groups or transnational drug trafficking organisations, or content that promotes terrorist acts, including recruitment, or that celebrates attacks by transnational drug trafficking or terrorist organisations

  • exploits others through extortion.

    Examples: Predatory removals, revenge porn, blackmail

Enabling dishonest behaviour

We do not allow content that:

  • helps users to mislead others.

    Examples: Creating fake or false documents such as passports, diplomas or accreditations; sale or distribution of term papers, paper-writing or exam-taking services; information or products for passing drug tests

  • promotes any form of hacking or cracking and/or provides users with instructions or equipment that tampers with or provides unauthorised access to software, servers or websites.

    Examples: Pages or products that enable illegal access of mobile phones and other communications or content delivery systems or devices; products or services that bypass copyright protection, including circumvention of digital rights management technologies; products that illegally descramble cable or satellite signals in order to get free services; pages that assist or enable users to download streaming videos if prohibited by the content provider

Misrepresentative content

We do not allow content that:

  • misrepresents, misstates or conceals information about you, your content or the primary purpose of your web destination.
  • entices users to engage with content under false or unclear pretences.
  • engages in 'phishing' for users’ information.
  • promotes content, products or services using false, dishonest or deceptive claims.

    Example: 'Get Rich Quick' schemes

  • makes claims that are demonstrably false and could significantly undermine participation or trust in an electoral or democratic process.

    Examples: Information about public voting procedures, political candidate eligibility based on age or birthplace, election results, or census participation that contradicts official government records

  • falsely implies having an affiliation with, or endorsement by, another individual, organisation, product or service.

    Examples: Impersonating Google products, misusing company logos

  • deceives users through manipulated media related to politics, social issues or matters of public concern.

  • is about politics, social issues or matters of public concern directed at users in a country other than your own, if you misrepresent or conceal your country of origin or other material details about yourself.

Malicious or unwanted software

We do not allow content that:

  • contains malicious software or 'malware' that may harm or gain unauthorised access to a computer, device or network.

    Examples: Computer viruses, ransomware, worms, trojan horses, rootkits, keyloggers, diallers, spyware, rogue security software and other malicious programs or apps

  • violates Google's Unwanted Software policy.

    Examples: Failure to be transparent about the functionality that the software provides or the full implications of installing the software; failing to include Terms of Service or an End User Licence Agreement; bundling software or applications without the user's knowledge; making system changes without the user's consent; making it difficult for users to disable or uninstall the software; failing to properly use publicly available Google APIs when interacting with Google services or products

Sexually explicit content

We do not allow content that:

  • includes graphic sexual text, image, audio, video or games.

    Examples: Sex acts such as genital, anal and/or oral sex; masturbation; cartoon porn or hentai; graphic nudity

  • contains non-consensual sexual themes, whether simulated or real.

    Examples: Rape, incest, bestiality, necrophilia, snuff, lolita or teen-themed pornography, underage dating

  • may be interpreted as promoting a sexual act in exchange for compensation.

    Examples: Prostitution, companionship and escort services, intimate massage, cuddling sites

Mail-order brides

We do not allow content that:

  • facilitates marriage to a foreigner.

    Examples: Mail-order brides, international marriage brokers, romance tours

Adult themes in family content

We do not allow content that:

  • is made to appear appropriate for a family audience, but contains adult themes including sex, violence or other depictions of children or popular children’s characters that are unsuitable for a general audience.

Child sexual abuse material and paedophilia

We do not allow content that:

  • promotes the sexual exploitation of minors.

    Examples: Child sexual abuse imagery or other content that visually depicts, encourages or promotes sexual attraction by adults towards minors

Google absolutely prohibits monetisation of content related to child sexual abuse imagery or paedophilia. Google has always been at the forefront in the fight against online child abuse, and an avid supporter of family safety online. Under United States federal law, child sexual abuse imagery is defined as visual depictions of minors (i.e. under 18) engaged in a sexual act such as intercourse, oral sex or masturbation, as well as lascivious depictions of the genitals (covered or uncovered). This definition extends to photographs, videos, cartoons, drawings, paintings and sculptures. The image can involve a real child, or a computer-generated, morphed, composite or otherwise altered image that appears to be a child (think 'Photoshop'). This also includes soliciting minors for sexual acts, known as 'enticement'. Paedophilia is any content or behaviour (images, texts, videos, etc.) that depicts, encourages or promotes sexual attraction by adults towards minors (i.e. under 18).
