How automation is used in content moderation

Google provides translated versions of our Help Center, though they are not meant to change the content of our policies. The English version is the official text we use to enforce our policies. To view this article in a different language, use the language dropdown at the bottom of the page.

To keep ads safe and appropriate for everyone, we review ads to make sure they comply with Google Ads policies. Most ads are reviewed within one business day.

We use a combination of Google's AI and human evaluation to detect and remove ads that violate our policies and harm users or the overall Google Ads ecosystem. Our enforcement technologies may use Google's AI, modeled on human reviewers' decisions, to help protect our users and keep our ad platforms safe. Policy-violating content is either removed by Google's AI or, where a more nuanced determination is required, flagged for further review by trained operators and analysts, who conduct content evaluations that can be difficult for algorithms to perform alone, for example because they require an understanding of the ad's context. The results of these manual reviews then become training data that further improves our machine learning models.
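To make this workflow concrete, here is a minimal, hypothetical sketch of a human-in-the-loop moderation pipeline in Python. Everything in it, including the Ad type, score_ad, the thresholds, and the queues, is an illustrative assumption for the sketch, not Google's actual system or API.

```python
from dataclasses import dataclass

# Assumed thresholds for this sketch; a real system would tune these.
REMOVE_THRESHOLD = 0.95   # high confidence: remove automatically
REVIEW_THRESHOLD = 0.50   # ambiguous: route to a trained human reviewer

@dataclass
class Ad:
    ad_id: str
    text: str
    destination_url: str

human_review_queue: list[Ad] = []
training_examples: list[tuple[Ad, bool]] = []

def score_ad(ad: Ad) -> float:
    """Stand-in for a model trained on past human reviewers' decisions."""
    return 0.0  # a real model would return a policy-violation probability

def moderate(ad: Ad) -> str:
    """Remove clear violations automatically; escalate nuanced cases."""
    score = score_ad(ad)
    if score >= REMOVE_THRESHOLD:
        return "removed"                 # clear violation: removed by the model
    if score >= REVIEW_THRESHOLD:
        human_review_queue.append(ad)    # nuanced case: needs human context
        return "pending_human_review"
    return "approved"

def record_human_decision(ad: Ad, violates_policy: bool) -> None:
    # Manual review outcomes become labeled examples used to train
    # the next version of the model, closing the feedback loop.
    training_examples.append((ad, violates_policy))
```

The key design point the sketch illustrates is the feedback loop: automated decisions handle the clear cases at scale, while the harder cases both get a human judgment and generate new training data.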

When reviewing content or accounts to determine whether they violate our policies, we take various information into consideration, including the content of the creative (e.g., ad text, keywords, and any images and video) and the associated ad destination. We also consider account information (e.g., history of policy violations) and other information provided through reporting mechanisms (where applicable) and our own investigation.
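As a hypothetical illustration of how such signals might be bundled for a single review, the sketch below groups the creative, its destination, and account-level information into one structure. The field names and the escalation rule are assumptions made for this sketch, not a documented schema.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewContext:
    # The creative itself
    ad_text: str
    keywords: list[str]
    media_urls: list[str] = field(default_factory=list)  # images and video
    # Where the ad leads
    destination_url: str = ""
    # Account and external signals
    past_violations: int = 0   # account's history of policy violations
    user_reports: int = 0      # reports received through reporting mechanisms

def needs_extra_scrutiny(ctx: ReviewContext) -> bool:
    """Illustrative rule: account history or user reports prompt a closer look."""
    return ctx.past_violations > 0 or ctx.user_reports > 0
```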
