Google’s Efforts to Combat Online Child Sexual Abuse Material FAQs

What is CSAM?

CSAM stands for child sexual abuse material. It consists of any visual depiction, including but not limited to photos, videos and computer-generated imagery, involving the use of a minor engaging in sexually explicit conduct. We recognise that CSAM is not the same thing as child nudity, and our policies and systems are specifically designed to distinguish benign imagery, such as a child playing in the bathtub or back garden, which is not sexual in nature and not CSAM, from imagery that involves the sexual abuse of a child or the lascivious exhibition of genitalia and intimate areas in violation of global laws. In addition, there are other types of obscene or exploitative imagery that include or represent children, such as cartoon depictions of child sexual abuse or attempts at humorous representations of child sexual exploitation. This imagery may also be a violation of global laws.

What is Google’s approach to combating CSAM?

Google is committed to fighting CSAM online and preventing our platforms from being used to create, store or distribute this material. We devote significant resources – technology, people and time – to detecting, deterring, removing and reporting child sexual exploitation content and behaviour. For more on our efforts to protect children and families, see the Google Safety Centre, YouTube’s Community Guidelines, our Protecting children site and our blog on how we detect, remove and report CSAM.

How does Google identify CSAM on its platform?

We invest heavily in fighting child sexual exploitation online and use technology to deter, detect and remove CSAM from our platforms. This includes automated detection and human review, in addition to reports submitted by our users and third parties such as NGOs. We deploy hash matching, including YouTube’s CSAI Match, to detect known CSAM. We also deploy machine learning classifiers to discover never-before-seen CSAM, which is then confirmed by our specialist review teams. Detecting never-before-seen CSAM helps the child safety ecosystem in a number of ways, including by identifying child victims in need of safeguarding and by contributing to the hash set, which grows our ability to detect known CSAM. Using our classifiers, Google created the Content Safety API, which we provide to others to help them prioritise abusive content for human review.

Both CSAI Match and the Content Safety API are available to qualifying entities that wish to fight abuse on their platforms. Please see here for more details.
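
Neither CSAI Match nor the Content Safety API is described here at the implementation level, but the general idea behind matching known material by hash can be sketched simply: compute a fingerprint of a file and look it up in a curated list of hashes of previously verified abusive content. The snippet below is a minimal, hypothetical illustration of that lookup step only; the empty hash list, the function names and the use of a cryptographic rather than a perceptual hash are all assumptions made for illustration and are not Google's implementation.

```python
import hashlib
from pathlib import Path

# Hypothetical, empty hash list standing in for a curated database of hashes
# of previously verified content (e.g. an industry hash database). Real
# systems typically also use perceptual hashes that tolerate re-encoding and
# resizing; a cryptographic hash is used here only to keep the sketch simple.
KNOWN_HASHES: set[str] = set()


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    sha256 = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            sha256.update(chunk)
    return sha256.hexdigest()


def is_known_match(path: Path) -> bool:
    """True if the file's digest appears in the known hash list."""
    return file_digest(path) in KNOWN_HASHES
```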

What does Google do when it detects CSAM on its platform?

When we detect CSAM on our platforms, we remove it and make a 'CyberTipline' report to NCMEC. Depending on the nature and severity of the violation, we may also issue a warning, limit the account’s access to certain Google products or services, or disable the Google Account. We may additionally review the Google Account further to identify any other CSAM and to make sure that relevant context is taken into consideration, so that the enforcement action we take is appropriate and proportionate to the material identified in the account.

NCMEC serves as a clearing house and comprehensive reporting centre in the United States for issues related to child exploitation. Once NCMEC receives a report, it may forward the report to law enforcement agencies around the world.

What is a CyberTipline report and what type of information does it include?

Once Google becomes aware of apparent CSAM, we make a report to NCMEC. These reports are commonly referred to as CyberTipline reports or CyberTips. In addition, we attempt to identify cases involving hands-on abuse of a minor, the production of CSAM or child trafficking. In those instances, we send a supplemental CyberTip report to NCMEC to help it prioritise the matter. A report sent to NCMEC may include information identifying the user and the minor victim, and may include the violative content and/or other helpful contextual data.

Below are some examples of the real-world impact of CyberTip reports that Google has submitted. They provide a glimpse of the wide range of reports we make, but they are not comprehensive.

  • A Google CyberTip reported numerous pieces of CSAM involving elementary school children that had been taken in a classroom setting. Some of the reported CSAM had not previously been identified by Google and appeared to have been produced by the Google Account holder. NCMEC forwarded the report to law enforcement, which led to the identification and safeguarding of two minor children depicted in the reported imagery.
  • A Google CyberTip reported the solicitation and production of CSAM by an account holder who, in exchange for money, requested the production of numerous videos depicting the hands-on abuse of dozens of minor boys. NCMEC forwarded the report to law enforcement. The account holder was convicted of producing CSAM, and many children were identified and safeguarded from ongoing abuse.
  • A Google CyberTip reported a single piece of known CSAM, which led to the apprehension of the account holder who, according to law enforcement, was found to be in possession of much more CSAM and was directly involved in the hands-on abuse of minors in their care, as well as in providing those minors for others to abuse. Due to the efforts of Google, NCMEC and law enforcement, three children were rescued from sexual abuse.
  • A Google CyberTip reported CSAM that was produced by the Google Account holder, who had solicited it from minors to whom they had online access. The account holder was later apprehended and determined by law enforcement to hold a position of trust as a medical professional. They used that position to abuse patients in their care and to gain direct online access to minors, from whom they solicited the production of CSAM.

How does Google combat risks of CSAM in the generative AI (GenAI) space?

AI-generated CSAM or computer-generated imagery depicting child sexual abuse is a threat Google takes very seriously. Our work to detect, remove and report CSAM has always included violative content involving actual minors, modified imagery of an identifiable minor engaging in sexually explicit conduct and computer-generated imagery that is indistinguishable from an actual minor engaging in such conduct.

Google places a heavy emphasis on child safety when creating our own GenAI models and products. We follow Google’s responsible generative AI principles in protecting all of Google’s publicly available models and the services built on top of these models.

We deploy a variety of child safety protections for our GenAI models and products. This can include protections against the presence of CSAM in the training data underlying our models, against CSAM-seeking and -producing prompts and against violative outputs. We also conduct robust child safety testing on our models prior to public launch to understand and mitigate the possibility of CSAM being generated.
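
As a rough sketch of what prompt-level and output-level protections of this kind can look like in general (training-data protections happen upstream and are not shown), the snippet below wraps a generation call in two policy checks. The function names and the constant-returning classifier stand-ins are assumptions made for illustration, not Google's actual safeguards.

```python
from typing import Callable

# Stand-ins for trained child-safety classifiers; a production system would
# call real safety models here rather than returning a constant.
def prompt_violates_child_safety_policy(prompt: str) -> bool:
    return False


def output_violates_child_safety_policy(output: str) -> bool:
    return False


def generate_with_safety_checks(prompt: str,
                                generate: Callable[[str], str]) -> str:
    """Run generation only when both the prompt and the output pass checks."""
    if prompt_violates_child_safety_policy(prompt):
        return "Request refused: it violates our child safety policies."
    output = generate(prompt)
    if output_violates_child_safety_policy(output):
        return "Response withheld: the generated content violated policy."
    return output


# Example usage with a trivial echo 'model':
if __name__ == "__main__":
    print(generate_with_safety_checks("Write a haiku about rain",
                                      lambda p: f"Model output for: {p}"))
```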

We also work with others in the child safety ecosystem – including the Technology Coalition and child safety NGOs – to share and understand best practices as this technology continues to evolve.

What does Google do to deter users from seeking out CSAM on Search?

Google deploys safety-by-design principles to deter users from seeking out CSAM on Search. It's our policy to block search results that lead to child sexual abuse imagery or material that appears to sexually victimise, endanger or otherwise exploit children, and we are constantly updating our algorithms to combat these evolving threats. We apply extra protections to searches that we understand are seeking CSAM content: we filter out explicit sexual results if the search query seems to be seeking CSAM, and for queries seeking adult explicit content, Search won't return imagery that includes children, to break the association between children and sexual content. In addition to removing CSAM from Search's index when it is identified, we also demote all content from sites with a high proportion of CSAM. In many countries, users who enter queries clearly related to CSAM are shown a prominent warning that child sexual abuse imagery is illegal, with information on how to report this content to trusted organisations. When these warnings are shown, we have found that users are less likely to continue looking for this material.
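
The classifiers and ranking systems involved are proprietary, but the layered behaviour described above can be sketched in simplified form. Everything in the snippet below (the result fields, the query-classifier stubs and the function names) is a hypothetical illustration of that behaviour, not Google's implementation.

```python
from dataclasses import dataclass


@dataclass
class SearchResult:
    url: str
    sexually_explicit: bool = False
    depicts_children: bool = False


@dataclass
class SearchResponse:
    results: list
    warning: str = ""


# Hypothetical stand-ins for proprietary query classifiers.
def query_seeks_csam(query: str) -> bool:
    return False


def query_seeks_adult_explicit_content(query: str) -> bool:
    return False


def apply_child_safety_protections(query: str,
                                   results: list) -> SearchResponse:
    """Apply the layered protections described above to a result list."""
    if query_seeks_csam(query):
        # Filter out explicit sexual results and show a deterrence warning
        # pointing to trusted reporting organisations.
        safe = [r for r in results if not r.sexually_explicit]
        return SearchResponse(
            safe,
            warning="Child sexual abuse imagery is illegal. "
                    "Report it to a trusted organisation in your area.")
    if query_seeks_adult_explicit_content(query):
        # Break the association between children and sexual content:
        # return no imagery that includes children for these queries.
        safe = [r for r in results if not r.depicts_children]
        return SearchResponse(safe)
    return SearchResponse(list(results))
```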

How does Google contribute to the child safety ecosystem to combat CSAM?

Google’s child safety team builds technology that accurately detects, reports and removes CSAM to protect our users and prevent children from being harmed on Google products. We developed the Child Safety Toolkit to ensure that the broader ecosystem also has access to this powerful technology and to help prevent the online proliferation of child sexual abuse material. Additionally, we provide Google’s Hash Matching API to NCMEC to help it prioritise and review CyberTipline reports more efficiently, allowing it to home in on the reports involving children who need immediate help.

We also share child sexual abuse and exploitation signals to enable CSAM removal across the wider ecosystem. We share millions of CSAM hashes with NCMEC’s industry hash database so that other providers can access and use these hashes as well. We have also signed on to Project Lantern, a programme that enables technology companies to share relevant signals to combat online sexual abuse and exploitation in a secure and responsible way, recognising that this abuse can cross platforms and services.

We are also an active member of several coalitions, such as the Technology Coalition, the WeProtect Global Alliance and INHOPE, that bring companies and NGOs together to develop solutions that disrupt the exchange of CSAM online and prevent the sexual exploitation of children. Google prioritises participation in these coalitions, and in our work with NGOs like NCMEC and Thorn, we share our expertise, explore best practices and learn more about the latest threats on key child safety issues.

How can government agencies send legal requests to Google associated with a CyberTip?

Once NCMEC receives a report, it may forward the report to law enforcement agencies around the world. Law enforcement may then send legal process to Google seeking further information. To facilitate such requests, Google provides the Law Enforcement Request System (LERS), an online platform that allows verified government agencies to securely submit requests for further information. These agencies can then view the status of submitted requests and, ultimately, download Google’s response. For more information about LERS or to set up a LERS account, please visit lers.google.com. To learn more, see our policies on how Google handles government requests for user information.

How can I report suspected CSAM?

If you find a link, website or any content that is CSAM, you can report it to the police, NCMEC or an appropriate organisation in your area. If you see or experience inappropriate content or behaviour towards children on Google’s products, there are many ways to report it to us, including by reporting child endangerment offences (e.g. grooming, sextortion and other forms of child sexual exploitation) happening on Google products. You can also help prevent people from contacting your child on Google products, and filter the content that your child sees, by managing their Google Account settings.

Which teams review CSAM reports?

Human review is a crucial part of our ongoing work to combat CSAM. Our team members bring deep expertise to this work with backgrounds in law, child safety and advocacy, social work and cyber investigations, among other disciplines. They are specially trained on both our policy scope and what legally constitutes child sexual abuse material. Reviewer teams have specialised training and receive well-being support. To learn more about how Google approaches content moderation, including how we support reviewer wellness, see here.

What time period does this report cover?

Metrics presented here represent data gathered from 12.00 a.m. PST on 1 January to 11.59 p.m. PDT on 30 June and 12.00 a.m. PDT on 1 July to 11.59 p.m. PST on 31 December, unless otherwise specified.