Google’s Efforts to Combat Online Child Sexual Abuse Material FAQs

What is CSAM?

CSAM stands for child sexual abuse material. It consists of any visual depiction, including but not limited to photos, videos, and computer-generated imagery, involving the use of a minor engaging in sexually explicit conduct. CSAM is not the same thing as child nudity: our policies and systems are specifically designed to distinguish benign imagery, such as a child playing in the bathtub or backyard, which is not sexual in nature and not CSAM, from imagery that involves the sexual abuse of a child or the lascivious exhibition of genitalia and intimate areas in violation of global laws. In addition, other types of obscene or exploitative imagery that include or represent children, such as cartoon depictions of child sexual abuse or attempts at humorous representations of child sexual exploitation, may also violate global laws.

What is Google’s approach to combating CSAM?

Google is committed to fighting CSAM online and preventing our platforms from being used to create, store or distribute this material. We devote significant resources—technology, people, and time—to detecting, deterring, removing, and reporting child sexual exploitation content and behavior. For more on our efforts to protect children and families, see the Google Safety Center, YouTube’s Community Guidelines, our Protecting Children Site, and our blog on how we detect, remove and report CSAM.

How does Google identify CSAM on its platform?

We invest heavily in fighting child sexual exploitation online and use technology to deter, detect, and remove CSAM from our platforms. This includes automated detection and human review, in addition to reports submitted by our users and third parties such as NGOs. We deploy hash matching, including YouTube’s CSAI Match, to detect known CSAM. We also deploy machine learning classifiers to discover never-before-seen CSAM, which is then confirmed by our specialist review teams. Detecting never-before-seen CSAM helps the child safety ecosystem in a number of ways, including identifying child victims in need of safeguarding and contributing to the hash set that grows our ability to detect known CSAM. Using our classifiers, Google created the Content Safety API, which we provide to others to help them prioritize abusive content for human review.

Both CSAI Match and Content Safety API are available to qualifying entities who wish to fight abuse on their platforms; please see here for more details.
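The hash-matching and classifier triage described above can be sketched in general terms: compute a digest of a file and check it against a set of known hashes, routing non-matching files to a classifier queue for human review. This is an illustrative sketch only, not Google’s actual implementation; the function names and the use of SHA-256 are assumptions made for the example (production systems such as CSAI Match rely on perceptual, proprietary fingerprints rather than exact cryptographic hashes).

```python
import hashlib

# Illustrative known-hash set. Real systems use perceptual media
# fingerprints, not raw SHA-256 digests of file bytes.
KNOWN_HASHES = {
    # SHA-256 of the bytes b"test", used here purely as a placeholder.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def digest(data: bytes) -> str:
    """Compute a hex digest of the file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def triage(data: bytes) -> str:
    """Route a file: an exact hash match means previously identified
    ('known') content; anything else goes to 'needs_review', where a
    classifier would score it and prioritize it for human review."""
    if digest(data) in KNOWN_HASHES:
        return "known"
    return "needs_review"
```

The key design point is that exact matching only catches previously identified material; the classifier stage is what allows never-before-seen content to be surfaced for specialist review.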

What does Google do when it detects CSAM on its platform?

When we detect CSAM on our platforms, we remove it, make a “CyberTipline” report to NCMEC, and, depending on the nature and severity of the violation, we may provide a warning, limit an account’s access to certain Google products or services, or disable the Google Account. We may also conduct further review of the Google Account to identify additional CSAM and to ensure that relevant context is taken into consideration, so that our enforcement action is appropriate and proportionate to the material identified in the Google Account.

NCMEC serves as a clearinghouse and comprehensive reporting center in the United States for issues related to child exploitation. Once NCMEC receives a report, it may forward the report to law enforcement agencies around the world.

What is a CyberTipline report and what type of information does it include?

Once Google becomes aware of apparent CSAM, we make a report to NCMEC. These reports are commonly referred to as CyberTipline reports, or CyberTips. In addition, we attempt to identify cases involving hands-on abuse of a minor, production of CSAM, or child trafficking. In those instances, we send a supplemental CyberTip report to NCMEC to help prioritize the matter. A report sent to NCMEC may include information identifying the user and the minor victim, as well as the violative content and/or other helpful contextual data.

Below are some examples of the real-world impact of CyberTip reports Google has submitted. They provide a glimpse of the wide range of reports we make, but they are not comprehensive.

  • A Google CyberTip reported numerous pieces of CSAM involving elementary school children taken in a classroom setting. Some of the reported CSAM was previously unidentified by Google and appeared to have been produced by the Google Account holder. NCMEC forwarded the report to law enforcement, leading to the identification and safeguarding of two minor children depicted in the reported imagery.
  • A Google CyberTip reported the solicitation and production of CSAM by an account holder, who requested that numerous videos be made depicting the hands-on abuse of dozens of minor boys in exchange for money. NCMEC forwarded the report to law enforcement. The account holder was convicted of producing CSAM, and several dozen children were identified and safeguarded from ongoing abuse.
  • A Google CyberTip reported a single piece of known CSAM that led to the apprehension of the account holder who, according to law enforcement, was in possession of much more CSAM, was directly involved in the hands-on abuse of minors in their care, and provided those minors for others to abuse as well. Due to the efforts of Google, NCMEC, and law enforcement, three children were rescued from sexual abuse.
  • A Google CyberTip reported CSAM that was produced by the Google Account holder and solicited from minors the account holder had online access to. Law enforcement later apprehended the account holder and determined that they held a position of trust as a medical professional, which they used to abuse patients in their care and to gain direct online access to minors from whom they solicited the production of CSAM.

How does Google combat risks of CSAM in the Generative AI (GenAI) space?

AI-generated CSAM or computer-generated imagery depicting child sexual abuse is a threat Google takes very seriously. Our work to detect, remove, and report CSAM has always included violative content involving actual minors, modified imagery of an identifiable minor engaging in sexually explicit conduct, and computer-generated imagery that is indistinguishable from an actual minor engaging in such conduct.

Google places a heavy emphasis on child safety when creating our own GenAI models and products. We follow Google’s Responsible Generative AI principles in protecting all of Google’s publicly available models and the services built on top of these models.

We deploy a variety of child safety protections for our GenAI models and products. These can include protections against the presence of CSAM in the training data underlying our models, against CSAM-seeking and CSAM-producing prompts, and against violative outputs. We also conduct robust child safety testing on our models prior to public launch to understand and mitigate the risk of CSAM being generated.

We also work with others in the child safety ecosystem, including the Technology Coalition and child safety NGOs, to share and understand best practices as this technology continues to evolve.

What does Google do to deter users from seeking out CSAM on Search?

Google deploys safety by design principles to deter users from seeking out CSAM on Search. It's our policy to block search results that lead to child sexual abuse material that appears to sexually victimize, endanger, or otherwise exploit children. We are constantly updating our algorithms to combat these evolving threats. We always remove CSAM when it is identified and we demote all content from sites with a high proportion of CSAM content. We apply extra protections to searches that we understand are seeking CSAM content. We filter out explicit sexual results if the search query seems to be seeking CSAM, and for queries seeking adult explicit content, Search won’t return imagery that includes children, to break the association between children and sexual content. In many countries, users who enter queries clearly related to CSAM are shown a prominent warning that child sexual abuse imagery is illegal, with information on how to report this content to trusted organizations. When these warnings are shown, we have found that users are less likely to continue looking for this material.
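The Search protections described above amount to a decision policy keyed on the apparent intent of a query. The sketch below illustrates that policy as a simple decision table; the function, the intent labels, and the returned flags are all hypothetical names invented for this example, not part of any real Google system.

```python
def search_protections(query_intent: str) -> dict:
    """Illustrative decision table for the query-level protections
    described above. `query_intent` stands in for the output of a
    (hypothetical) query classifier."""
    if query_intent == "csam_seeking":
        # Filter explicit results, exclude child imagery, and show
        # the deterrence warning with reporting information.
        return {"filter_explicit": True,
                "exclude_child_imagery": True,
                "show_warning": True}
    if query_intent == "adult_explicit":
        # Adult queries may return explicit results, but never
        # imagery that includes children.
        return {"filter_explicit": False,
                "exclude_child_imagery": True,
                "show_warning": False}
    # Ordinary queries get standard handling.
    return {"filter_explicit": False,
            "exclude_child_imagery": False,
            "show_warning": False}
```

The notable design choice is the middle branch: even queries seeking lawful adult content are prevented from returning imagery of children, deliberately breaking any association between children and sexual content.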

How does Google contribute to the child safety ecosystem to combat CSAM?

Google’s child safety team builds technology that accurately detects, reports, and removes CSAM to protect our users and prevent children from being harmed on Google products. We developed the Child Safety Toolkit to ensure the broader ecosystem also has access to this powerful technology, and to help prevent the online proliferation of child sexual abuse material. Additionally, we provide Google’s Hash Matching API to NCMEC to help them prioritize and review CyberTipline reports more efficiently, allowing them to home in on those reports involving children who need immediate help.
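The prioritization idea behind tools like the Hash Matching API, namely surfacing the reports most likely to involve a child in immediate danger first, can be sketched as ordering a review queue by a model-assigned urgency score. This is a minimal sketch under assumptions of my own: the score range and the function names are invented, and the real APIs are not public in this form.

```python
import heapq

def build_queue(reports):
    """reports: iterable of (priority_score, report_id) pairs, where a
    higher score means more urgent (e.g. a hypothetical classifier
    output in [0, 1]). Returns a heap ordered most-urgent-first by
    negating scores for Python's min-heap."""
    heap = [(-score, report_id) for score, report_id in reports]
    heapq.heapify(heap)
    return heap

def next_report(heap):
    """Pop and return (report_id, score) for the most urgent report."""
    neg_score, report_id = heapq.heappop(heap)
    return report_id, -neg_score
```

A heap keeps insertion and removal at O(log n), which matters when new reports arrive continuously while reviewers drain the queue.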

We also share child sexual abuse and exploitation signals to enable CSAM removal from the wider ecosystem. We share millions of CSAM hashes with NCMEC’s industry hash database, so that other providers can access and use these hashes as well. We also signed onto Project Lantern, a program that enables technology companies to share relevant signals to combat online sexual abuse and exploitation in a secure and responsible way, understanding that this abuse can cross various platforms and services.

We are also an active member of several coalitions, such as the Technology Coalition, the WeProtect Global Alliance, and INHOPE, that bring companies and NGOs together to develop solutions that disrupt the exchange of CSAM online and prevent the sexual exploitation of children. Google prioritizes participation in these coalitions, and in our work with NGOs like NCMEC and Thorn, we share our expertise, explore best practices, and learn more about the latest threats on key child safety issues.

How can government agencies send legal requests to Google associated with a CyberTip?

Once a report is received by NCMEC, it may be forwarded to law enforcement agencies around the world. Law enforcement may then send legal process to Google seeking further information. To facilitate such requests, Google provides the Law Enforcement Request System (LERS), an online system that allows verified government agencies to securely submit requests for further information, view the status of submitted requests, and ultimately download Google’s response. For more information about LERS or to set up a LERS account, please visit lers.google.com. For more information about how Google handles government requests for user information, see our policies.

How can I report suspected CSAM?

If you find a link, website, or any content that is CSAM, you can report it to the police, NCMEC, or an appropriate organization in your locale. If you see or experience inappropriate content or behavior towards children on Google’s products, there are many ways to report it to us, including by reporting child endangerment offenses (e.g. grooming, sextortion, other forms of sexual exploitation of children) happening on Google products. You can help prevent people from contacting your child on Google products and filter the content your child sees by managing their Google Account settings.

Which teams review CSAM reports?

Human review is a crucial part of our ongoing work to combat CSAM. Our team members bring deep expertise to this work with backgrounds in law, child safety and advocacy, social work, and cyber investigations, among other disciplines. They are specially trained on both our policy scope and what legally constitutes child sexual abuse material. Reviewer teams have specialized training and receive wellbeing support. To learn more about how Google approaches content moderation, including how we support reviewer wellness, see here.

What time period does this report cover?

Metrics presented here represent data gathered from 12:00am PST on January 1st to 11:59pm PDT on June 30th and 12:00am PDT on July 1st to 11:59pm PST on December 31st, unless otherwise specified.