What is a content removal request?
Is this data comprehensive?
Do your statistics cover all categories of content removals?
How many of these requests resulted in the removal of content?
How is removal different from blocking services?
Do you ever remove content that violates local law without a court order or government request?
Why haven't you complied with all of the content removal requests?
There are many reasons we may not have removed content in response to a request. Some requests may not be specific enough for us to know what the government wanted us to remove (for example, no URL is listed in the request), and others involve allegations of defamation through informal letters from government agencies, rather than court orders. We generally rely on courts to decide if a statement is defamatory according to local law.
From time to time, we receive falsified court orders. We do examine the legitimacy of the documents that we receive, and if we determine that a court order is false, we will not comply with it. Here are some examples of fake court orders that we have received:
- We received a fake Canadian court order that demanded the removal of search results that link to three pages of the site forums.somethingawful.com. The fake order claimed that the site contained defamatory statements, but did not cite the law that was supposedly broken.
- We received a fake American court order that demanded the removal of a blog because it supposedly violated the copyrights of an individual by using her name in various blog posts.
- We received four fake Indian court orders [1, 2, 3, 4] that demanded the removal of blog posts and entire blogs for alleged defamation. The orders threatened to punish Google for failure to comply.
- We received four fake Peruvian court orders [1, 2, 3, 4] that demanded the removal of blog posts and entire blogs for alleged defamation. Two of the orders claimed to be issued from New York.
- We received five fake German court orders [1, 2, 3, 4, 5] that demanded removal of search results that were allegedly defamatory. These orders were created by private individuals pretending they were from different courts in Germany.
Where can I learn more about government requests for content removal?
Are the observations that you make about the data comprehensive and do they all relate to the same topics?
Why do there appear to be significantly more requests being made for reasons categorized as "Other" during the July–December 2010 reporting period?
Why is the compliance rate for government agency and law enforcement requests so much higher than those for court orders in many countries/regions?
Why do numbers of items requested to be removed from AdWords appear high until the beginning of 2012?
When we receive removal requests for AdWords, the requests typically only cite the URLs that allegedly violate the law or our policies. One URL can pertain to hundreds or thousands of ads. If we decide to remove ads in response to a request, we will look into the total number of ads that the request may affect.
Until the beginning of 2012, we counted the total number of ads removed (rather than the number of URLs or ads cited in the removal request). When we did not perform any removals in response to the request, we counted the number of URLs requested to be removed, so the number of items was lower.
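The pre-2012 counting rule above can be sketched as follows. This is a minimal illustration of the described methodology; the function and parameter names (`urls_cited`, `ads_removed`) are assumptions, not Google's actual fields.

```python
# Hypothetical sketch of the pre-2012 AdWords counting rule described above.
# Parameter names are illustrative, not taken from any real schema.
def items_counted_pre_2012(urls_cited: int, ads_removed: int) -> int:
    """If any ads were removed, count the total ads removed;
    otherwise count the URLs cited in the request."""
    if ads_removed > 0:
        return ads_removed
    return urls_cited

# One URL can pertain to thousands of ads, so a request that led to
# removals produced a much larger item count than one that did not.
print(items_counted_pre_2012(urls_cited=2, ads_removed=3500))  # 3500
print(items_counted_pre_2012(urls_cited=2, ads_removed=0))     # 2
```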
When you remove content as the result of a legal request, do you limit the scope of removal to a particular geography or do you remove globally?
What is Google's position on the Santa Clara Principles?
What are the different types of requester categories?
There are 10 requester categories.
Data Protection Authority: Requests from government agencies that have jurisdiction over electronic/online service providers and regulate practices related to the personal information and privacy of citizens. Sometimes these agencies inquire about the status or outcome of a request from a data subject without taking a position on how the request should be handled; this kind of inquiry is not included in this category.
Consumer Protection Authority: Requests from government agencies that have legal authority to enforce competition and consumer protection law.
Police: Requests from government agencies that are responsible for enforcing laws, addressing crime, and maintaining public safety.
Information and Communications Authority: Requests from government agencies tasked with regulating information, media and/or telecommunication sectors. In some countries, these agencies are tasked with identifying and reporting illegal content.
Military: Requests from the armed forces, excluding police and intelligence agencies.
Government Officials: Requests of a personal nature from government officials acting on their own behalf. This includes elected officials (past or present) and candidates for political office.
Court Order Directed at Google: Court orders that list Google as the defendant.
Court Order Directed at 3rd Party: Court orders that do not list Google as the defendant, but which declare that certain content is unlawful.
Suppression Orders: Court orders that prohibit any discussion of the order, and may even prohibit disclosing the order's existence.
Other: Government agencies and court orders that do not fall under any of the other categories.
We did not begin providing detailed data on government requester type until 2019. Before 2019, we provided data only on the branch of government the requester belonged to.
Under the previous categorization, the Executive Branch reflects requests from Data Protection Authorities, Consumer Protection Authorities, Police, Military, Information and Communications Authorities, Government Officials and other types of government agencies.
Under the previous categorization, the Judicial Branch reflects requests relating to court orders directed at Google and/or 3rd parties, suppression orders, and other types of court orders.
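The roll-up from the newer requester categories to the older branch-of-government categories can be sketched as a simple lookup. This is an illustrative mapping derived from the descriptions above, not a published schema; the dictionary name and category strings are assumptions for the sketch.

```python
# Illustrative mapping of post-2019 requester categories onto the
# pre-2019 branch-of-government categories, as described in the text.
# Category strings and the dict name are illustrative, not an official schema.
BRANCH_BY_REQUESTER = {
    "Data Protection Authority": "Executive Branch",
    "Consumer Protection Authority": "Executive Branch",
    "Police": "Executive Branch",
    "Information and Communications Authority": "Executive Branch",
    "Military": "Executive Branch",
    "Government Officials": "Executive Branch",
    "Court Order Directed at Google": "Judicial Branch",
    "Court Order Directed at 3rd Party": "Judicial Branch",
    "Suppression Orders": "Judicial Branch",
}
# "Other" covers both other government agencies (Executive Branch) and
# other court orders (Judicial Branch), so it cannot be mapped without
# knowing which kind of requester the item came from.

print(BRANCH_BY_REQUESTER["Police"])          # Executive Branch
print(BRANCH_BY_REQUESTER["Suppression Orders"])  # Judicial Branch
```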
What are the ways YouTube may restrict an item rather than remove it?
Age-restricted. Some videos don't violate our Community Guidelines, but may not be appropriate for all audiences. In these cases, the video might be placed under an age restriction when we’re notified of the content. Age-restricted videos are not visible to users who are logged out, are under 18 years of age, or have Restricted Mode enabled. When we make this decision, we notify the uploader by email that their video has been age-restricted and they can appeal this decision. Learn more.
Limited features. If our Community Guidelines review teams determine that a video is borderline under our policies, it may have some features disabled. These videos will remain available on YouTube but will be placed behind a warning message, and some features will be disabled, including sharing, commenting, liking, and placement in suggested videos. These videos are also not eligible for monetization. When we make this decision, we notify the uploader by email that their video will only have limited features and they can appeal this decision. Learn more.
Locked as private. If a video is identified as violating our policy on misleading metadata, it may be locked as private. When a video is locked as private, it will not be visible to the public. If a viewer has a link to the video, it will appear as unavailable. When we make this decision, we notify the uploader by email that their video is no longer public and they can appeal this decision. Learn more.
The above actions to restrict videos are not included in the report at this time.
How do you define reasons for removal requests?
Requests categorized as “Defamation” are requests that relate to harm to reputation. This may include, but is not limited to, claims of libel, slander, and corporate defamation.
Requests categorized as “Copyright” are requests related to alleged copyright infringement, received under notice and takedown laws such as the U.S. Digital Millennium Copyright Act.
Requests categorized as “Regulated goods and services” are related to claimed violations of a country's local laws. This may include, but is not limited to: illegal sale/trade/advertising of pharmaceuticals, alcohol, tobacco, fireworks, weapons, gambling, prostitution and/or health and medical devices or services.
Requests categorized as “Privacy and security” are related to claims of violations of an individual user's privacy or personal information. This may include, but is not limited to: identity theft, hacking, unwanted disclosure of personal information, non-consensual explicit imagery, or requests based on privacy laws.
Requests categorized as “Bullying/harassment” are related to claims of intentional behavior that is deemed threatening or disturbing by the victim.
Requests categorized as “Business complaints” are related to claims regarding content that is allegedly illegal because it promotes unfair competition or criticizes a business competitor for the sake of obtaining market share.
Requests categorized as “Electoral law” are related to claims of violation of local law about how elections work and/or what can be said about candidates.
Requests categorized as “Fraud” are related to claims of financial fraud. This may include, but is not limited to, claims of employment scam and fraudulent financial activity.
Requests categorized as “Geographical dispute” are related to content that is allegedly illegal because of claims that a border is being displayed a certain way. This may include, but is not limited to, complaints about names of islands, seas, and other geographical features.
Requests categorized as “Government criticism” are related to claims of criticism of government policy or politicians in their official capacity.
Requests categorized as “Impersonation” are related to claims that involve the malicious usurpation of identity to harm the reputation of a victim. This may include, but is not limited to, claims of hacked accounts and stolen identity.
Requests categorized as “Trademark” are related to claims involving trade dress and/or distinctive marks. This includes, but is not limited to, claims of counterfeiting and trademark infringement.
Requests categorized as “Religious offense” are related to laws designed to protect the reputation of religious figures. This may include, but is not limited to, claims of blasphemy, “unholy” depictions, and disputes between religious groups.
Requests categorized as “Drug Abuse” are related to claims of content that is alleged to be illegal because it depicts drugs or how to use them. This may include, but is not limited to, claims of drug cultivation, drug use techniques, and content glorifying drug use.
Requests categorized as “Nudity/Obscenity” are related to claims of content that is not pornographic but may violate laws concerning nudity. This may include, but is not limited to, claims of lewd depictions, naked or topless photographs, and indecency.
Requests categorized as “Pornography” are related to claims of sexually explicit content.
Requests categorized as “Suicide” are related to claims of content that either depicts or promotes suicide.
Requests categorized as “Violence” are related to claims of intentional use of physical force or power to do harm to living beings. This may include, but is not limited to, claims of animal abuse.
Requests categorized as “Hate speech” are related to claims of incitement to violence against protected groups or racial slurs. This may include, but is not limited to, claims of Nazi propaganda and anti-Semitic or other racist content.
What are the different types of removal percentages categories?
Removed - Legal: Items removed for legal reasons.
Removed - Policy: Items removed for violating Google or YouTube Terms of Service and/or Community Guidelines.
Content Not Found: The allegedly infringing content cannot be found in the specified item or location.
Not Enough Information: A decision could not be made because Google or YouTube required more information to process the request. For example, the requester supplied an incomplete item or did not provide a reason for requesting an item be removed.
No Action Taken - Other: Item was not removed. This category also includes duplicate items. Prior to 2020, due to data tracking limitations, we were in certain instances unable to capture granularity for some non-removal actions; those actions are reported under this category.
Content Already Removed: Item was previously removed in another request.
Prior to 2019, we published “Removal percentages” based on action taken on requests and not items. From 2019 onwards, we publish removal percentages based on action taken per item.
Under the previous categorization, “Action Taken” reflects the “Removed - Legal” and “Removed - Policy” categories. Under the previous categorization, “No Action Taken” reflects the “Content Not Found”, “Not Enough Information”, “No Action Taken - Other”, and “Content Already Removed” categories.
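The roll-up from the newer outcome categories to the older “Action Taken” / “No Action Taken” labels can be sketched as below. This is a reading of the text above, not an official schema; the dictionary and function names are assumptions for illustration.

```python
# Sketch of how the newer per-item outcome categories roll up to the
# pre-2019 "Action Taken" / "No Action Taken" labels, per the text above.
# Names are illustrative, not an official schema.
LEGACY_LABEL = {
    "Removed - Legal": "Action Taken",
    "Removed - Policy": "Action Taken",
    "Content Not Found": "No Action Taken",
    "Not Enough Information": "No Action Taken",
    "No Action Taken - Other": "No Action Taken",
    "Content Already Removed": "No Action Taken",
}

def legacy_removal_rate(outcomes: list) -> float:
    """Fraction of items whose outcome rolls up to "Action Taken"."""
    taken = sum(1 for o in outcomes if LEGACY_LABEL[o] == "Action Taken")
    return taken / len(outcomes)

print(legacy_removal_rate([
    "Removed - Legal",
    "Content Not Found",
    "Removed - Policy",
    "Content Already Removed",
]))  # 0.5
```

Note that since 2019 these percentages are computed per item rather than per request, so a single request covering many items can move the rate more than it would have under the older methodology.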