Content verification monitors websites to ensure your ads aren't displayed on pages with inappropriate content. Verification determines whether a website contains content issues, and reports those issues based on your criteria. You can control the content types that Verification reports by editing your account, campaign, or advertiser settings. Learn more about Verification settings.
There are two types of content categories: sensitive categories, which are available by default, and custom sensitive categories, which you can create to suit your specific business needs.
You can also control where ads serve by using standard and custom sensitive categories, and by creating a list of specific flagged domains. Learn more about flagging content issues.

Navigating the Content view
Content issues are shown by placement, domain, and site in a customizable table. You choose which issues to display:
- Sensitive category issues
- Custom sensitive category issues
- Flagged domains
The table can be re-sorted depending on the view you've chosen. For example, after displaying sensitive category issues, you can sort on either of these columns:
- Sensitive category issues
- Content score
Click the arrow to sort in ascending or descending order.

Drilling down into a domain
Click a domain name to show related subdomains. Click the link in the table columns (Impressions, Sensitive category issues, or Content scores) to show a screenshot of the URL's landing page.
Click Report Misclassification to report a misclassified URL. Include an explanation, and then click Submit.
Editing domains in bulk
Click on a flag to add the domain to a flagged domains list, or on a checkmark to add it to a list of allowed domains.
To bulk-edit domains, check the boxes next to the domains, then select one of the following Actions:
- Add to flagged domain list
- Add to allowed domains
- Remove from flagged domain list
- Remove from allowed domains
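The four bulk actions above amount to moving domains between two mutually exclusive lists. As a minimal sketch of that bookkeeping (the class and method names here are illustrative only, not part of any Campaign Manager 360 API):

```python
# Toy model of the flagged/allowed domain lists managed by the bulk Actions menu.
# All names are hypothetical; this is not a Campaign Manager 360 API.

class DomainLists:
    def __init__(self):
        self.flagged = set()
        self.allowed = set()

    def add_to_flagged(self, domain):
        # A domain shouldn't be flagged and allowed at the same time,
        # so adding to one list removes it from the other.
        self.allowed.discard(domain)
        self.flagged.add(domain)

    def add_to_allowed(self, domain):
        self.flagged.discard(domain)
        self.allowed.add(domain)

    def remove_from_flagged(self, domain):
        self.flagged.discard(domain)

    def remove_from_allowed(self, domain):
        self.allowed.discard(domain)

lists = DomainLists()
for d in ["example.com", "news.example.org"]:
    lists.add_to_flagged(d)
lists.add_to_allowed("news.example.org")
print(sorted(lists.flagged))   # ['example.com']
print(sorted(lists.allowed))   # ['news.example.org']
```

The key design point is that flagging and allowing are opposites: moving a domain onto one list silently removes it from the other, which mirrors clicking the flag versus the checkmark in the table.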
"Weapons" and "sensitive social issues" sensitive categories have merged. Selecting either "weapons" or "sensitive social issues" will result in selecting both.
Verification categorizes websites into the following types of content issues:
Not yet labeled: The content hasn't been classified yet.
Sexual: Sexual content including text, images, or videos.
Alcohol: Content related to alcoholic beverages, alcohol brands, recipes, etc.
Derogatory: Content that may be construed as biased against individuals, groups, or organizations based on criteria such as race, religion, disability, sex, age, veteran status, sexual orientation, gender identity, or political affiliation. May also indicate discussion of such content, for instance, in an academic or journalistic context.
Downloads & file sharing: Content related to audio, video, or software downloads.
Drugs: Content related to the recreational use of legal or illegal drugs, as well as to drug paraphernalia or cultivation.
Gambling: Content related to betting or wagering in a real-world or online setting.
Politics: Political news and media, including discussions of social, governmental, and public policy.
Profanity: Prominent use of words considered indecent, such as curse words and sexual slang. Pages with only very occasional usage, such as news sites that might include such words in a quotation, are not included.
Religion: Content related to religious thought or beliefs.
Sensitive social issues: Issues that evoke strong, opposing views and spark debate. These include issues that are controversial in most countries and markets (such as abortion), as well as those that are controversial in specific countries and markets (such as immigration reform in the United States).
Shocking: Content which may be considered shocking or disturbing, such as violent news stories, stunts, or toilet humor.
Suggestive: Adult content, as well as suggestive content that's not explicitly sexual content. This category includes all pages categorized as adult.
Tobacco: Content related to tobacco and tobacco accessories, including lighters, humidors, ashtrays, etc.
Tragedy: Content related to death, disasters, accidents, war, etc.
Transportation accidents: Content related to motor vehicle, aviation, or other transportation accidents.
Violence: Content which may be considered graphically violent, gory, gruesome, or shocking, such as street fighting videos, accident photos, descriptions of torture, etc.
Weapons: Content related to personal weapons, including knives, guns, small firearms, and ammunition.
Google's content scoring helps identify and resolve the content issues that advertisers typically care most about, such as Adult, Suggestive, and Profanity. Scores are calculated based on the severity of the issue on the page and the overall quality of the site (originality of content, writing quality, number of impressions, and size of the site). Content scores, ranked from greatest to least concern, are: Severe, High, Medium, Low, and None.
A low-readership personal blog with impressions flagged for "Profanity" will have a worse content score than a major online newspaper with flagged "Profanity" impressions, because the news site's content is seen as more reputable and useful.
Content scores are calculated based on the specific content issues you select on the Settings page.
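Because the five score levels form a strict severity order, a report can always be sorted from greatest to least concern. A hypothetical sketch of that ranking (the level names mirror the documented scores, but the data structures are illustrative, not a Campaign Manager 360 API):

```python
# Rank domains by Verification content score, most concerning first.
# The five level names come from the documentation; the rest is a toy example.

SCORE_ORDER = {"Severe": 0, "High": 1, "Medium": 2, "Low": 3, "None": 4}

domains = [
    {"domain": "bignews.example", "score": "Low"},
    {"domain": "smallblog.example", "score": "Severe"},
    {"domain": "forum.example", "score": "Medium"},
]

# Sorting on the severity index puts the worst offenders at the top,
# matching the table's descending-severity sort.
ranked = sorted(domains, key=lambda d: SCORE_ORDER[d["score"]])
print([d["domain"] for d in ranked])
# ['smallblog.example', 'forum.example', 'bignews.example']
```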
If Verification says your ads appeared alongside inappropriate content and your publisher partner denies any wrongdoing, please keep the following explanations in mind:
Scraping: Anyone can copy an ad tag and place it on an inappropriate site. Ad tags contain unique identifiers for campaigns, placements, and sites, so our tool associates the misplaced ad with the existing Campaign Manager 360 campaign/placement/site. Contact the publishers in question and ask them to remove your ad tag(s).
Thumbnail accuracy: A thumbnail is a snapshot of a webpage taken from the URL that generated an ad request for a given tag. Snapshots are generally not taken at the same time that ads are served. For dynamic sites, the content (and how appropriate it was for your campaign) at the time the ad served may have differed from what you see in the snapshot.