Google’s Efforts to Combat Online Child Sexual Abuse Material FAQs

What is CSAM?

CSAM stands for child sexual abuse material. It consists of any visual depiction, including but not limited to photos, videos, and computer-generated imagery, involving the use of a minor engaging in sexually explicit conduct. Our interpretation of CSAM follows the US federal definition of “child pornography.” We recognize that CSAM is not the same thing as child nudity, and our policies and systems are specifically designed to recognize and distinguish benign imagery like a child playing in the bathtub or backyard, which is not sexual in nature and not CSAM, from imagery that involves the sexual abuse of a child or lascivious exhibition in violation of the law.

What is Google’s approach to combating CSAM?

Google is committed to fighting CSAM online and preventing our platforms from being used to spread this material. We devote significant resources—technology, people, and time—to detecting, deterring, removing, and reporting child sexual exploitation content and behavior. For more on our efforts to protect children and families, see the Google Safety Center, YouTube’s Community Guidelines, our Protecting Children Site, and our blog on how we detect, remove, and report CSAM.

How does Google identify CSAM on its platform?

We invest heavily in fighting child sexual exploitation online and use technology to deter, detect, and remove CSAM from our platforms. This includes automated detection and human review, in addition to reports submitted by our users and third parties such as NGOs. We deploy hash matching, including YouTube’s CSAI Match, to detect known CSAM. We also deploy machine learning classifiers to discover never-before-seen CSAM, which is then confirmed by our specialist review teams. Detection of never-before-seen CSAM helps the child safety ecosystem in a number of ways, including identifying child victims in need of safeguarding and growing the hashset that powers our detection of known CSAM. Using our classifiers, Google created the Content Safety API, which we provide to others to help them prioritize abuse content for human review.

Both CSAI Match and Content Safety API are available to qualifying entities who wish to fight abuse on their platforms—please see here for more details.
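As an illustration only, the hash-matching step described above can be sketched as a set-membership check over content digests. Production systems such as CSAI Match rely on perceptual or video fingerprinting that is robust to re-encoding and cropping, not the exact cryptographic hash used here; all names and hash values below are hypothetical.

```python
import hashlib

# Hypothetical hashset of digests of known, previously confirmed content.
# The value below is simply sha256(b"example-known-content") for illustration.
KNOWN_HASHES = {hashlib.sha256(b"example-known-content").hexdigest()}

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def is_known_content(data: bytes) -> bool:
    """Exact hash matching: flag content whose digest is already in the hashset."""
    return sha256_digest(data) in KNOWN_HASHES

def add_to_hashset(data: bytes) -> None:
    """After newly discovered content is confirmed by human review, its hash is
    added to the hashset so future exact copies are detected automatically."""
    KNOWN_HASHES.add(sha256_digest(data))
```

The key design point this sketch captures is the feedback loop the FAQ describes: classifier-surfaced, human-confirmed discoveries feed back into the hashset, so each confirmed item strengthens automated detection of known material going forward.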

What does Google do when it detects CSAM on its platform?

When we detect CSAM on our platforms, we remove it, make a “CyberTipline” report to NCMEC, and may terminate the user’s account. NCMEC serves as a clearinghouse and comprehensive reporting center in the United States for issues related to child exploitation. Once a report is received by NCMEC, they may forward it to law enforcement agencies around the world.

What is a CyberTipline report and what type of information does it include?

Once Google becomes aware of apparent CSAM, we make a report to NCMEC. These reports are commonly referred to as CyberTipline reports, or CyberTips. In addition, we attempt to identify cases involving hands-on abuse of a minor, production of CSAM, or child trafficking. In those instances, we send a supplemental CyberTip report to NCMEC to help prioritize the matter. A report sent to NCMEC may include information identifying the user, the minor victim, and/or other helpful contextual facts.

Below are some examples of the real-world impact of CyberTip reports Google has submitted. They provide a glimpse of the wide range of reports we make, but they are not comprehensive.

  • A Google CyberTip reported numerous pieces of CSAM involving elementary school children, taken in a classroom setting. Some of the reported CSAM had not previously been identified by Google and appeared to have been produced by the Google Account holder. NCMEC forwarded the report to law enforcement, leading to the identification and safeguarding of the two minor children depicted in the reported imagery.
  • A Google CyberTip reported the solicitation and production of CSAM by an account holder who, in exchange for money, requested numerous videos depicting the hands-on abuse of dozens of minor boys. NCMEC forwarded the report to law enforcement. The account holder was convicted of producing CSAM, and several dozen children were identified and safeguarded from ongoing abuse.
  • A Google CyberTip reported a single piece of known CSAM, which led to the apprehension of the account holder. According to law enforcement, the account holder was found to be in possession of much more CSAM and was directly involved in the hands-on abuse of minors in their care, whom they also made available for others to abuse. Thanks to the efforts of Google, NCMEC, and law enforcement, three children were rescued from sexual abuse.
  • A Google CyberTip reported CSAM that was produced by the Google Account holder and solicited from minors the account holder had access to online. The account holder was later apprehended and determined by law enforcement to hold a position of trust as a medical professional: they used that position to abuse patients in their care and to solicit the production of CSAM from minors they had direct access to online.

What does Google do to deter users from seeking out CSAM on Search?

Google deploys safety-by-design principles to deter users from seeking out CSAM on Search. It is our policy to block search results that lead to child sexual abuse imagery or to material that appears to sexually victimize, endanger, or otherwise exploit children, and we constantly update our algorithms to combat these evolving threats. We apply extra protections to searches that we understand to be seeking CSAM: we filter out explicit sexual results when the query appears to be seeking CSAM, and for queries seeking adult explicit content, Search won’t return imagery that includes children, breaking the association between children and sexual content. In many countries, users who enter queries clearly related to CSAM are shown a prominent warning that child sexual abuse imagery is illegal, along with information on how to report this content to trusted organizations. When these warnings are shown, we have found that users are less likely to continue looking for this material.

How does Google contribute to the child safety ecosystem to combat CSAM?

Google’s child safety team builds technology that accurately detects, reports, and removes CSAM to protect our users and prevent children from being harmed on Google products. To ensure the broader ecosystem also has access to this technology, we developed the Child Safety Toolkit, which shares it with the rest of the industry and helps prevent the online proliferation of child sexual abuse material. Additionally, we provide Google’s Hash Matching API to NCMEC to help them prioritize and review CyberTipline reports more efficiently, allowing them to home in on reports involving children who need immediate help.

We are also an active member of several coalitions, such as the Technology Coalition, the ICT Coalition, the WeProtect Global Alliance, INHOPE, and the Fair Play Alliance, that bring companies and NGOs together to develop solutions that disrupt the exchange of CSAM online and prevent the sexual exploitation of children.

How can government agencies send legal requests to Google associated with a CyberTip?

Once a report is received by NCMEC, they may forward it to law enforcement agencies around the world. Law enforcement may then send legal process to Google seeking further information. To facilitate such requests, Google provides the Law Enforcement Request System (LERS), an online system that allows verified government agencies to securely submit requests for further information, view the status of submitted requests, and ultimately download Google’s response. For more information about LERS, or to set up a LERS account, see our policies for how Google handles government requests for user information.

How can I report suspected CSAM?

If you find a link, website, or any content that is CSAM, you can report it to the police, NCMEC, or an appropriate organization in your locale. If you see or experience inappropriate content or behavior towards children on Google’s products, there are many ways to report it to us. You can help prevent people from contacting your child on Google products and filter the content your child sees by managing their Google Account settings.

Which teams review CSAM reports?

Human review is a crucial part of our ongoing work to combat CSAM. Our team members bring deep expertise to this work, with backgrounds in law, child safety and advocacy, social work, and cyber investigations, among other disciplines. They are specially trained on both the scope of our policies and what legally constitutes child sexual abuse material, and reviewer teams receive wellbeing support. To learn more about how Google approaches content moderation, including how we support reviewer wellness, see here.

What time period does this report cover?

Metrics presented here represent data gathered from 12:00am PST on January 1 to 11:59pm PDT on June 30, and from 12:00am PDT on July 1 to 11:59pm PST on December 31, unless otherwise specified.