Verification

Content Verification

Content verification monitors websites to ensure your ads aren't displayed on pages with inappropriate content. Verification determines whether a website contains content issues and reports those issues based on your criteria. You can control the content types that Verification reports by editing your account, campaign, or advertiser settings. Learn more about Verification settings.

There are two types of content classifiers available: standard classifiers, which are available by default, and custom classifiers, which you can create to suit your specific business needs.

You can also control where your ads serve by using standard and custom classifiers, and then creating a blacklist to prevent ads from serving on specific domains. Learn more about flagging content issues.

Navigating the Content view

Content issues are shown by placement, domain, and site in a customizable table. You choose which issues to display:

  • Standard classifier issues
  • Custom classifier issues
  • Flagged domains

Sorting the table

The table can be re-sorted depending on the view you have chosen. For example, after displaying standard classifier issues, you can sort by any of these columns:

  • Impressions
  • Standard classifier issues
  • Content score

Click the arrow to sort in ascending or descending order.
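
If it helps to think of the sort in code terms, here is a minimal Python sketch that sorts a few hypothetical table rows by a chosen column in ascending or descending order. The row fields and values are made up for illustration and are not a Campaign Manager export format or API.

```python
# Hypothetical rows resembling the Content view; not real Campaign Manager data.
rows = [
    {"domain": "example-blog.com",  "impressions": 1200,  "issues": 3, "score": "High"},
    {"domain": "example-news.com",  "impressions": 98000, "issues": 1, "score": "Low"},
    {"domain": "example-forum.com", "impressions": 4500,  "issues": 5, "score": "Severe"},
]

def sort_rows(rows, column, descending=True):
    """Sort rows by one column, like clicking that column's arrow in the table."""
    return sorted(rows, key=lambda row: row[column], reverse=descending)

# Highest-impression domains first (descending), then fewest issues first (ascending).
for row in sort_rows(rows, "impressions"):
    print(row["domain"], row["impressions"])
for row in sort_rows(rows, "issues", descending=False):
    print(row["domain"], row["issues"])
```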

Drilling down into a domain

Click on a domain name to show related subdomains. Click numbers in the table columns (Impressions, Standard classifier issues, or Content score) to show a screenshot of the URL's landing page.

Click on Report Classification to report a misclassified URL. Please include an explanation, and click Submit.

Editing domains in bulk

Click on a flag to add the domain to a flagged domains list, or on a checkmark to add it to a whitelist. 

To bulk-edit domains, check the boxes next to the domains, then select one of the following Actions (a sketch of how these actions behave follows the list):

  • Add to flagged domain list
  • Add to whitelist
  • Remove from flagged domain list
  • Remove from whitelist
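
As a rough model of what the four bulk Actions do, the sketch below keeps the flagged domain list and the whitelist as two sets and applies a bulk action to the checked domains. The function and data are hypothetical and only illustrate the behavior; the actual edits are made in the Campaign Manager UI.

```python
# Hypothetical model of the two lists; Campaign Manager manages these for you.
flagged_domains = set()
whitelist = set()

def apply_bulk_action(checked_domains, action):
    """Apply one of the four bulk Actions to every checked domain."""
    for domain in checked_domains:
        if action == "Add to flagged domain list":
            flagged_domains.add(domain)
        elif action == "Add to whitelist":
            whitelist.add(domain)
        elif action == "Remove from flagged domain list":
            flagged_domains.discard(domain)
        elif action == "Remove from whitelist":
            whitelist.discard(domain)

apply_bulk_action(["example-blog.com", "example-forum.com"], "Add to flagged domain list")
apply_bulk_action(["example-blog.com"], "Remove from flagged domain list")
print(flagged_domains)  # {'example-forum.com'}
```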

Types of standard content classifiers

"Weapons" and "sensitive social issues" standard classifiers have merged. Selecting either "weapons" or "sensitive social issues" will result in selecting both.

Verification categorizes websites into the following types of content issues (a brief sketch of selecting these classifiers follows the list):

  • Adult: Adult or pornographic text, image, or video content.

  • Derogatory: Content that may be construed as biased against individuals, groups, or organizations based on criteria such as race, religion, disability, sex, age, veteran status, sexual orientation, gender identity, or political affiliation. May also indicate discussion of such content, for instance, in an academic or journalistic context.

  • Downloads & file sharing: Content related to audio, video, or software downloads.

  • Weapons: Contains content related to personal weapons, including knives, guns, small firearms, and ammunition.

  • Gambling: Contains content related to betting or wagering in a real-world or online setting.

  • Suggestive: Adult content, as well as suggestive content that's not explicitly pornographic. This category includes all pages categorized as adult.

  • Violence: Content which may be considered graphically violent, gory, gruesome, or shocking, such as street fighting videos, accident photos, descriptions of torture, etc.

  • Profanity: Prominent use of words considered indecent, such as curse words and sexual slang. Pages with only very occasional usage, such as news sites that might include such words in a quotation, are not included.

  • Drugs: Contains content related to the recreational use of legal or illegal drugs, as well as to drug paraphernalia or cultivation.

  • Alcohol: Contains content related to alcoholic beverages, alcohol brands, recipes, etc.

  • Tobacco: Contains content related to tobacco and tobacco accessories, including lighters, humidors, ashtrays, etc.

  • Politics: Political news and media, including discussions of social, governmental, and public policy.

  • Religion: Content related to religious thought or beliefs.

  • Tragedy: Content related to death, disasters, accidents, war, etc.

  • Transportation accidents: Content related to motor vehicle, aviation, or other transportation accidents.

  • Sensitive social issues: Issues that evoke strong, opposing views and spark debate. These include issues that are controversial in most countries and markets (such as abortion), as well as those that are controversial in specific countries and markets (such as immigration reform in the United States).
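
The sketch below models the standard classifiers as a simple set of labels and demonstrates the merged "Weapons"/"Sensitive social issues" behavior described in the note above. It is illustrative only; classifier selection actually happens on the Settings page, and the selection logic here is an assumption, not the product's implementation.

```python
# Standard classifier labels from the list above; selection logic is illustrative only.
STANDARD_CLASSIFIERS = {
    "Adult", "Derogatory", "Downloads & file sharing", "Weapons", "Gambling",
    "Suggestive", "Violence", "Profanity", "Drugs", "Alcohol", "Tobacco",
    "Politics", "Religion", "Tragedy", "Transportation accidents",
    "Sensitive social issues",
}

# "Weapons" and "Sensitive social issues" are merged: selecting one selects both.
MERGED = {"Weapons", "Sensitive social issues"}

def normalize_selection(selected):
    """Return the effective set of selected standard classifiers."""
    selected = set(selected) & STANDARD_CLASSIFIERS
    if selected & MERGED:
        selected |= MERGED
    return selected

print(sorted(normalize_selection({"Adult", "Weapons"})))
# ['Adult', 'Sensitive social issues', 'Weapons']
```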

Content scores

Google's content scoring helps identify and resolve the content issues that advertisers typically care about most, such as Adult, Suggestive, and Profanity. Scores are calculated based on the severity of the issue on the page and the overall quality of the site (originality of content, writing quality, number of impressions, and size of the site). Content scores, ranked from greatest to least concern, are: Severe, High, Medium, Low, and None.

A low-readership personal blog with flagged impressions for "Profanity" will have a worse content score than a major online newspaper with flagged "Profanity" impressions. This is because the content of the news site is seen as more reputable and useful.

Content scores are calculated based on the specific content issues you select on the Settings page.
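
Google does not publish the scoring model, but as a toy illustration of the idea, the sketch below combines a made-up issue-severity signal with a made-up site-quality signal and maps the result onto the Severe/High/Medium/Low/None buckets. Both inputs and the formula are assumptions for illustration, not the actual calculation.

```python
# Toy illustration only: both inputs are hypothetical 0-1 signals, and this
# formula is not Google's actual content scoring model.
BUCKETS = ["None", "Low", "Medium", "High", "Severe"]

def content_score(issue_severity, site_quality):
    """Higher severity raises concern; higher site quality (originality,
    writing quality, readership, size) dampens it."""
    if issue_severity == 0:
        return "None"
    concern = issue_severity * (1.0 - 0.5 * site_quality)
    index = min(int(concern * len(BUCKETS)), len(BUCKETS) - 1)
    return BUCKETS[max(index, 1)]  # any flagged issue scores at least "Low"

# Same flagged Profanity severity, different site quality:
print(content_score(0.6, 0.1))  # low-readership blog  -> "Medium"
print(content_score(0.6, 0.9))  # major newspaper      -> "Low"
```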

Troubleshooting: My ad ran on an inappropriate site and the publisher denies any wrongdoing

If Verification says your ads appeared alongside inappropriate content and your publisher partner denies any wrongdoing, please keep the following explanations in mind:

  • Scraping: Anyone can copy an ad tag and place it on an inappropriate site. Ad tags contain unique identifiers for campaigns, placements, and sites, so our tool associates the misplaced ad with the existing Campaign Manager campaign/placement/site. Contact the publishers in question and ask them to remove your ad tag(s).

  • Thumbnail accuracy: A thumbnail is a snapshot of a webpage taken from the URL that generated an ad request for a given tag. Snapshots are not generally taken at the same time that ads are served. For dynamic sites, the content at the time the ad served, and how appropriate it was for your campaign, may have been different from what you see in the snapshot.
