Government requests to remove content FAQs


What is a content removal request?

Governments send content removal requests asking us to remove information from Google products, such as blog posts or YouTube videos. The data includes court orders sent to us to remove content, regardless of whether the court order is directed at Google. For the purposes of this report, we also count government requests that we review to determine whether particular content should be removed for violating a product's community guidelines or content policies.
Please note that we count any follow-up asking us to review additional items pertaining to the same matter as a new request. In practice, this means that when a government authority or a court asks us to review a new item on a different day, it is reflected in our data as an additional request.
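
As a rough sketch of this counting rule, consider the following; the record layout and field names are invented for illustration and are not our actual systems:

```python
from datetime import date

# Hypothetical review events: (matter_id, date_received).
# Reviews of the same matter requested on different days
# count as separate requests.
events = [
    ("matter-1", date(2011, 3, 1)),  # original request
    ("matter-1", date(2011, 3, 5)),  # follow-up item, later day -> new request
    ("matter-2", date(2011, 3, 1)),
]

# Each distinct (matter, day) pair is one request.
request_count = len({(matter, day) for matter, day in events})
print(request_count)  # 3
```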

Is this data comprehensive?

There are limits to what this data can tell us. There may be multiple requests that ask for the removal of the same piece of content. In addition, in the first two reporting periods we did not release specific numbers for countries/regions that issued fewer than 10 requests and requested the removal of fewer than 10 items, due to technical constraints specific to those reporting periods. Similarly, if a government agency uses a web form through which we can't identify the requesting party, we generally have no way of including those requests in our statistics.

Do your statistics cover all categories of content removals?

No. Our policies and systems are set up to identify and remove child sexual abuse imagery whenever we become aware of it, regardless of whether a removal request comes from a government. As a result, it's difficult to accurately track which of those removals were requested by governments, and we haven't included those statistics here. We do count requests for removal of all other types of content (e.g., alleged defamation, hate speech, impersonation).

How many of these requests resulted in the removal of content?

The "removal request" numbers represent the number of requests we have received per country/region; the percentage of requests in response to which we removed content; and the number of individual items of content requested to be removed.
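
The three figures can be derived from per-request records along these lines; this is a minimal sketch with invented data, not our actual reporting pipeline:

```python
# Hypothetical request records: country/region, items cited,
# and whether any content was removed in response.
requests = [
    {"country": "A", "items": 4, "removed": True},
    {"country": "A", "items": 1, "removed": False},
    {"country": "B", "items": 2, "removed": True},
]

by_country = {}
for r in requests:
    stats = by_country.setdefault(r["country"], {"requests": 0, "items": 0, "removals": 0})
    stats["requests"] += 1
    stats["items"] += r["items"]
    stats["removals"] += r["removed"]  # True counts as 1

for country, s in sorted(by_country.items()):
    pct = 100 * s["removals"] / s["requests"]
    print(f"{country}: {s['requests']} requests, {pct:.0f}% led to removals, {s['items']} items")
```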

How is removal different from blocking services?

Some governments and government agencies choose to block specific services as a means of controlling access to content in their jurisdiction. The content removal numbers we've reported do not include any data on government-mandated service blockages. Our Traffic graphs show you when Google services have been inaccessible.

Do you ever remove content that violates local law without a court order or government request?

Yes. The statistics we report here do not include the content removals we process every day across our products in response to non-governmental user complaints about violations of our content policies or community guidelines (for example, we do not permit hate speech on Blogger and other similar products). In many cases these complaints result in the removal of material that violates local law, independent of any government request or court order seeking such removal.

Why haven't you complied with all of the content removal requests?

There are many reasons we may not have removed content in response to a request. Some requests may not be specific enough for us to know what the government wanted us to remove (for example, no URL is listed in the request), and others involve allegations of defamation through informal letters from government agencies, rather than court orders. We generally rely on courts to decide if a statement is defamatory according to local law.

From time to time, we receive falsified court orders. We do examine the legitimacy of the documents that we receive, and if we determine that a court order is false, we will not comply with it. Here are some examples of fake court orders that we have received:

  • We received a fake Canadian court order that demanded the removal of search results that link to three pages of the site forums.somethingawful.com. The fake order claimed that the site contained defamatory statements, but did not cite the law that was supposedly broken.
  • We received a fake American court order that demanded the removal of a blog because it supposedly violated the copyrights of an individual by using her name in various blog posts.
  • We received four fake Indian court orders that demanded the removal of blog posts and entire blogs for alleged defamation. The orders threatened to punish Google for failure to comply.
  • We received four fake Peruvian court orders that demanded the removal of blog posts and entire blogs for alleged defamation. Two of the orders claimed to be issued from New York.
  • We received five fake German court orders that demanded the removal of search results that were allegedly defamatory. These orders were created by private individuals pretending the orders came from various German courts.

Where can I learn more about government requests for content removal?

Several independent organizations release regular reports about government requests for information and content removal, including Lumen and the Open Net Initiative.

Are the observations that you make about the data comprehensive and do they all relate to the same topics?

These observations are meant to highlight certain requests that we have received during each reporting period, along with some trends that we've noticed in the data, and are by no means exhaustive.

Why do there appear to be significantly more requests being made for reasons categorized as "Other" during the July–December 2010 reporting period?

Prior to the January–June 2011 reporting period, we were not tracking the reasons for removal requests at a granular level. As a result, many requests were classified as "Other" rather than something more specific.

Why is the compliance rate for government agency and law enforcement requests so much higher than those for court orders in many countries/regions?

Google carefully evaluates each and every request we receive, including those accompanied by court orders. Individuals who request the removal of content often submit court orders with their requests as supporting evidence for their claims. In many cases, these court orders do not compel Google to take any action; rather, they are the result of a dispute with a third party in which a court has determined that specific content is illegal. We also often receive claims with court orders that are forgeries or are not sufficiently specific.

Why do numbers of items requested to be removed from AdWords appear high until the beginning of 2012?

When we receive removal requests for AdWords, the requests typically only cite the URLs that allegedly violate the law or our policies. One URL can pertain to hundreds or thousands of ads. If we decide to remove ads in response to a request, we will look into the total number of ads that the request may affect.

Until the beginning of 2012, we counted the total number of ads removed (rather than the number of URLs or ads cited in the removal request). When we did not perform any removals in response to the request, we counted the number of URLs requested to be removed, so the number of items was lower.
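
A worked example of this counting difference, with invented numbers:

```python
# Hypothetical pre-2012 AdWords request: two URLs cited,
# each URL tied to many individual ads.
ads_per_url = {"example.com/offer": 300, "example.com/promo": 150}
urls_cited = len(ads_per_url)

# If we removed ads, we reported the total number of ads removed.
items_if_removed = sum(ads_per_url.values())  # 450

# If we took no action, we reported only the URLs cited.
items_if_not_removed = urls_cited             # 2

print(items_if_removed, items_if_not_removed)
```

This asymmetry is why item counts appear inflated for periods in which we removed ads.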

When you remove content as the result of a legal request, do you limit the scope of removal to a particular geography or do you remove globally?

Legal standards vary greatly by country/region. Content that violates a specific law in one country/region may be legal in others. Typically, we remove or restrict access to the content only in the country/region where it is deemed to be illegal. Sometimes a court’s decision can be useful evidence for an assessment in a different country; for instance, if a court finds content to be defamatory or harassing after giving the person who authored the content an opportunity to defend their speech, we may remove it in other countries to which the requester can demonstrate a meaningful connection. When content is found to violate our Community Guidelines or Terms and Conditions, however, we remove or restrict access globally.

What is Google's position on the Santa Clara Principles?

Google supports the spirit of the Santa Clara Principles as an effort to help shape how companies across our industry can think about transparency for action taken on content.  

What are the different types of requester categories?

There are 10 requester categories.

Data Protection Authority: Requests from government agencies that have jurisdiction over electronic/online service providers and regulate practices related to the personal information and privacy of citizens. Sometimes these agencies inquire about the status or outcome of a request from a data subject without taking a position on how the request should be handled; this kind of inquiry is not included in this category.

Consumer Protection Authority: Requests from government agencies that have legal authority  to enforce competition and consumer protection law.  

Police: Requests from government agencies that are responsible for enforcing laws, addressing crime, and maintaining public safety.

Information and Communications Authority: Requests from government agencies tasked with regulating information, media and/or telecommunication sectors. In some countries, these agencies are tasked with identifying and reporting illegal content.

Military: Requests from the armed forces, excluding police and intelligence agencies.

Government Officials: Requests of a personal nature from government officials acting on their own behalf. This includes elected officials (past or present) and candidates for political office.

Court Order Directed at Google: Court orders that list Google as the defendant. 

Court Order Directed at 3rd Party: Court orders that do not list Google as the defendant, but which declare that certain content is unlawful. 

Suppression Orders: Court orders that prohibit any discussion of the order and may even prohibit disclosing that the order exists.

Other: Government agencies and court orders that do not fall under any of the other categories. 

We did not begin providing detailed data on government requester type until 2019. Before 2019, we provided data only on the branch of government the requester belonged to. 

Under the previous categorization, the Executive Branch reflects requests from Data Protection Authorities, Consumer Protection Authorities, Police, Military, Information and Communications Authorities, Government Officials and other types of government agencies. 

Under the previous categorization, the Judicial Branch reflects requests relating to court orders directed at Google and/or 3rd parties, suppression orders, and other types of court orders.
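
Expressed as a simple lookup, the recategorization amounts to a mapping like the sketch below; the dictionary is illustrative, not a published schema:

```python
# Hypothetical mapping from current requester categories to the
# pre-2019 branch-of-government categories described above.
BRANCH = {
    "Data Protection Authority": "Executive Branch",
    "Consumer Protection Authority": "Executive Branch",
    "Police": "Executive Branch",
    "Information and Communications Authority": "Executive Branch",
    "Military": "Executive Branch",
    "Government Officials": "Executive Branch",
    "Court Order Directed at Google": "Judicial Branch",
    "Court Order Directed at 3rd Party": "Judicial Branch",
    "Suppression Orders": "Judicial Branch",
    # "Other" covers both agencies and court orders, so it maps to
    # either branch depending on the requester.
}
print(BRANCH["Police"])  # Executive Branch
```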

What are the ways YouTube may restrict an item rather than remove it?

Age-restricted. Some videos don't violate our Community Guidelines, but may not be appropriate for all audiences. In these cases, the video might be placed under an age restriction when we’re notified of the content. Age-restricted videos are not visible to users who are logged out, are under 18 years of age, or have Restricted Mode enabled. When we make this decision, we notify the uploader by email that their video has been age-restricted and they can appeal this decision.

Limited features. If our Community Guidelines review teams determine that a video is borderline under our policies, it may have some features disabled. These videos will remain available on YouTube but will be placed behind a warning message, and some features will be disabled, including sharing, commenting, liking, and placement in suggested videos. These videos are also not eligible for monetization. When we make this decision, we notify the uploader by email that their video will only have limited features and they can appeal this decision.

Locked as private. If a video is identified as violating our policy on misleading metadata, it may be locked as private. When a video is locked as private, it will not be visible to the public. If a viewer has a link to the video, it will appear as unavailable. When we make this decision, we notify the uploader by email that their video is no longer public and they can appeal this decision.

Demonetized. If a video does not meet our advertiser-friendly content guidelines, it can be put under a “Limited or no ads” restriction.

The above actions to restrict videos are not included in the report at this time.

How do you define reasons for removal requests?

Requests categorized as “National security” relate to claims of threats to security on a larger-than-individual scale. This may include, but is not limited to, claims of terrorism, extremism, threats to nation-states, and breaches of federal/state security.

Requests categorized as “Defamation” are requests that relate to harm to reputation. This may include, but is not limited to, claims of libel, slander, and corporate defamation.

Requests categorized as “Copyright” are requests related to alleged copyright infringement, received under notice and takedown laws such as the U.S. Digital Millennium Copyright Act.

Requests categorized as “Regulated goods and services” are related to claims that content violates local laws regulating particular goods and services. This may include, but is not limited to: illegal sale/trade/advertising of pharmaceuticals, alcohol, tobacco, fireworks, weapons, gambling, prostitution, and/or health and medical devices or services.

Requests categorized as “Privacy and security” are related to claims of violations of an individual user's privacy or personal information. This may include, but is not limited to: identity theft, hacking, unwanted disclosure of personal information, non-consensual explicit imagery, or requests based on privacy laws.

Requests categorized as “Bullying/harassment” are related to claims of intentional behavior that is deemed threatening or disturbing by the victim.

Requests categorized as “Business complaints” are related to claims regarding content that is allegedly illegal because it promotes unfair competition or criticizes a business competitor for the sake of obtaining market share.

Requests categorized as “Electoral law” are related to claims of violation of local law about how elections work and/or what can be said about candidates.

Requests categorized as “Fraud” are related to claims of financial fraud. This may include, but is not limited to, claims of employment scams and fraudulent financial activity.

Requests categorized as “Geographical dispute” are related to content that is allegedly illegal because of claims that a border is being displayed a certain way. This may include, but is not limited to, complaints about names of islands, seas, and other geographical features.

Requests categorized as “Government criticism” are related to claims of criticism of government policy or politicians in their official capacity.

Requests categorized as “Impersonation” are related to claims that involve the malicious usurpation of identity to harm the reputation of a victim. This may include, but is not limited to, claims of hacked accounts and stolen identity.

Requests categorized as “Trademark” are related to claims involving trade dress and/or distinctive marks. This includes, but is not limited to, claims of counterfeiting and trademark infringement.

Requests categorized as “Religious offense” are related to laws designed to protect the reputation of religious figures. This may include, but is not limited to, claims of blasphemy, “unholy” depictions, and disputes between religious groups.

Requests categorized as “Drug Abuse” are related to claims of content that is alleged to be illegal because it depicts drugs or how to use them. This may include, but is not limited to, claims of drug cultivation, drug use techniques, and content glorifying drug use.

Requests categorized as “Nudity/Obscenity” are related to claims about content that is not pornographic but may violate laws on nudity. This may include, but is not limited to, claims of lewd depictions, naked or topless photographs, and indecency.

Requests categorized as “Pornography” are related to claims of sexually explicit content.

Requests categorized as “Suicide” are related to claims of content that either depicts or promotes suicide.

Requests categorized as “Violence” are related to claims of intentional use of physical force or power to do harm to living beings. This may include, but is not limited to, claims of animal abuse.

Requests categorized as “Hate speech” are related to claims of incitement to violence against protected groups or of racial slurs. This may include, but is not limited to, claims of Nazi propaganda and antisemitic or other racist content.

What are the different removal percentage categories?

Removed - Legal: Items removed for legal reasons.  

Removed - Policy: Items removed for violating Google or YouTube Terms of Service and/or Community Guidelines. 

Content Not Found: The allegedly infringing content cannot be found in the specified item or location.  

Not Enough Information: A decision could not be made because Google or YouTube required more information to process the request. For example, the requester supplied an incomplete item or did not provide a reason for requesting an item be removed.

No Action Taken - Other: Item was not removed. This category also includes duplicate items. Prior to 2020, due to data-tracking limitations, we were in certain instances unable to capture the specific non-removal action taken, so those non-removal actions are reported under this category.

Content Already Removed: Item was previously removed in another request. 

Prior to 2019, we published “Removal percentages” based on action taken on requests and not items. From 2019 onwards, we publish removal percentages based on action taken per item. 

Under previous categorization, “Action Taken” reflects “Removed - Legal” and “Removed - Policy” categories. 

Under previous categorization, “No Action Taken” reflects “Content Not Found”, “Not Enough Information”, “No Action Taken - Other”, and “Content Already Removed” categories.
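
Put concretely, the old request-level percentages can be reconstructed from per-item outcomes with a mapping like this sketch; the dictionary and sample data are invented for illustration:

```python
# Hypothetical mapping from per-item outcome categories to the
# pre-2019 "Action Taken" / "No Action Taken" buckets.
OLD_CATEGORY = {
    "Removed - Legal": "Action Taken",
    "Removed - Policy": "Action Taken",
    "Content Not Found": "No Action Taken",
    "Not Enough Information": "No Action Taken",
    "No Action Taken - Other": "No Action Taken",
    "Content Already Removed": "No Action Taken",
}

items = ["Removed - Legal", "Content Not Found", "Removed - Policy"]
action_taken = sum(OLD_CATEGORY[i] == "Action Taken" for i in items)
print(f"Removal percentage: {100 * action_taken / len(items):.0f}%")  # 67%
```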