Google’s Efforts to Combat Online Child Sexual Abuse Material FAQs

What is CSAM?

CSAM stands for child sexual abuse material. It consists of any visual depiction, including but not limited to photos, videos, and computer-generated imagery, involving the use of a minor engaging in sexually explicit conduct. Our interpretation of CSAM follows the US federal definition of “child pornography.”  

What is Google’s approach to combating CSAM?

Google is committed to fighting CSAM online and preventing our platforms from being used to spread this material. We devote significant resources—technology, people, and time—to detecting, deterring, removing, and reporting child sexual exploitation content and behavior. For more on our efforts to protect children and families, see the Google Safety Center and YouTube’s Community Guidelines.

How does Google identify CSAM on its platform?

We invest heavily in fighting child sexual exploitation online and use technology to deter, detect, and remove CSAM from our platforms. We combine automated detection with human review, and we also act on reports submitted by our users and by third parties such as NGOs. We deploy hash matching, including YouTube’s CSAI Match, to detect known CSAM. We also deploy machine learning classifiers to discover never-before-seen CSAM, which is then confirmed by our specialist review teams. Building on our classifiers, Google created the Content Safety API, which we provide to others to help them prioritize abusive content for human review.

Both CSAI Match and Content Safety API are available to qualifying entities who wish to fight abuse on their platforms—please see here for more details.
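At a high level, hash matching works by comparing a fingerprint of a file against a list of fingerprints of previously identified abusive content. The sketch below is a minimal, hypothetical illustration of that general idea using a plain cryptographic hash; real systems such as CSAI Match use proprietary, perceptual hashing techniques that remain robust to re-encoding and editing, which are not shown here. The placeholder byte strings and the `KNOWN_HASHES` set are invented for illustration.

```python
import hashlib

# Placeholder set of known-content fingerprints (illustrative values only).
# A cryptographic hash like SHA-256 only matches byte-for-byte identical
# files; production systems use perceptual hashes to survive re-encoding.
KNOWN_HASHES = {
    hashlib.sha256(b"previously-identified-file-1").hexdigest(),
    hashlib.sha256(b"previously-identified-file-2").hexdigest(),
}

def matches_known_content(data: bytes) -> bool:
    """Return True if this file's hash appears in the known-hash set."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES
```

In this simplified model, an exact copy of a listed file matches, while any content not on the list (including trivially modified copies, a key limitation cryptographic hashing has and perceptual hashing addresses) does not; matches would then be routed to human review rather than actioned automatically.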

What does Google do when it detects CSAM on its platform?

When we detect CSAM on our platforms, we remove it, make a “CyberTipline” report to the National Center for Missing & Exploited Children (NCMEC), and may terminate the user’s account. NCMEC serves as a clearinghouse and comprehensive reporting center in the United States for issues related to child exploitation. Once NCMEC receives a report, it may forward the report to law enforcement agencies around the world.

What is a CyberTipline report and what type of information does it include?

Once Google becomes aware of apparent CSAM, we make a report to NCMEC. These reports are commonly referred to as CyberTipline reports, or CyberTips. In addition, we attempt to identify cases involving hands-on abuse of a minor, production of CSAM, or child trafficking. In those instances, we send a supplemental CyberTip to NCMEC to help it prioritize the matter. A report sent to NCMEC may include information identifying the user, the minor victim, and/or other helpful contextual facts.

How can government agencies send legal requests to Google associated with a CyberTip?

Once NCMEC receives a report, it may forward the report to law enforcement agencies around the world. Law enforcement may then send legal process to Google seeking further information. To facilitate such requests, Google provides an online system that allows verified government agencies to securely submit requests for further information. These agencies can then use the same system to view the status of submitted requests and, ultimately, download Google’s response. For more information, see our policies for how Google handles government requests for user information.

How can I report suspected CSAM?

If you find a link, website, or any content that is CSAM, you can report it to the police, NCMEC, or an appropriate organization in your locale. If you see or experience inappropriate content or behavior towards children on Google’s products, there are many ways to report it to us. You can help prevent people from contacting your child on Google products and filter the content your child sees by managing their Google Account settings.

Which teams review CSAM reports?

Human review is a crucial part of our ongoing work to combat CSAM. Reviewer teams have specialized training and receive wellbeing support. To learn more about how Google approaches content moderation, including how we support reviewer wellness, see here.

What time period does this report cover?

Unless otherwise specified, the metrics presented here cover two reporting periods: 12:00am PST on January 1 through 11:59pm PDT on June 30, and 12:00am PDT on July 1 through 11:59pm PST on December 31.