YouTube Community Guidelines enforcement FAQs

What are YouTube's Community Guidelines?

When a user joins YouTube, they join a community of people from all over the world. We have detailed Community Guidelines that outline what is not allowed on YouTube. For example, we do not allow pornography, incitement to violence, harassment or hate speech. Following these guidelines helps to keep YouTube a place where our community can thrive.

Who can flag content for potential violations of YouTube's Community Guidelines?

Any logged-in user can flag a video by clicking on the three dots near the video player and selecting 'Report'. Our human reviewers carefully review flagged content 24 hours a day, 7 days a week to determine whether there’s a violation of YouTube's Community Guidelines. If a video is deemed to violate our policies, it is removed or restricted, and the creator is notified by email and given an opportunity to appeal the decision (except if the removal was on privacy grounds).

What are the reasons users can select when flagging videos for possible YouTube Community Guidelines violations?

In order to flag a video or livestream, an individual must choose from a list of reasons that the video may violate our Community Guidelines – e.g. sexual content, promoting terrorism or hateful or abusive content. Our human reviewers then evaluate the flagged video for violation of any of YouTube's Community Guidelines, not just the reason identified by the flagger. Videos flagged by users are only removed or restricted if we review the content and find that it violates our Community Guidelines. Learn more about how users flag content on YouTube.

What happens to a video that violates YouTube’s Community Guidelines?

Content that violates our Community Guidelines is removed from the site. A channel will receive a warning the first time that it posts content that violates our guidelines. The next violation will result in a strike. The creator has a right to appeal our decision (except if the video was removed on privacy grounds after a thorough internal review), which generates a re-review and a final decision.

What happens if a channel receives a strike?

If a creator’s channel receives a Community Guidelines strike, the creator will get an email and see an alert in the Channel Settings with information about why the content was removed. Creators will also see a notification about the removal the next time that they access their YouTube channel on a desktop computer or mobile app. While the creator has a single strike on the channel, they won’t be able to post new content for a week – this includes videos, livestreams, stories, customised thumbnails and posts. After a second strike, the creator won’t be able to post any content for two weeks. Strikes remain on the channel for 90 days. If the creator’s channel receives three Community Guidelines strikes within 90 days, the channel will be terminated. The creator can appeal individual strikes or channel termination. Learn more.
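
The strike lifecycle above (a one-week posting freeze at one strike, two weeks at two, termination at three within 90 days) can be sketched as a small piece of logic. The following Python snippet is purely illustrative; the function names and the data they operate on are hypothetical, and this is not YouTube's enforcement code.

```python
from datetime import datetime, timedelta

STRIKE_LIFETIME = timedelta(days=90)  # strikes expire 90 days after they are issued

def active_strikes(strike_dates, now):
    """Return only the strikes issued within the last 90 days."""
    return [d for d in strike_dates if now - d < STRIKE_LIFETIME]

def posting_status(strike_dates, now):
    """Hypothetical summary of the posting restrictions described above."""
    count = len(active_strikes(strike_dates, now))
    if count == 0:
        return "no restrictions"
    if count == 1:
        return "no new videos, livestreams, stories, custom thumbnails or posts for 1 week"
    if count == 2:
        return "no new content for 2 weeks"
    return "channel terminated (three strikes within 90 days)"

# Example: one strike has already expired, two are still active
strikes = [datetime(2024, 1, 5), datetime(2024, 3, 1), datetime(2024, 3, 20)]
print(posting_status(strikes, now=datetime(2024, 4, 10)))  # no new content for 2 weeks
```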

How does YouTube assign a video’s removal reason? 

When our trained team of reviewers determines that a flagged video violates our Community Guidelines, they assign a reason for removing the video based on our policies and enforcement criteria. In cases where a video violates more than one of our guidelines, the reviewer assigns the removal reason based on which policy violation is most severe. When multiple severe violations are present, our reviewers assign a removal reason depending on which policy violation is most obvious or indicative of the uploader's intent. If a video is removed by our technology for violating our policies against spam, it is assigned to the spam category. When a video is removed by our technology for being a re-upload of content that we’ve already reviewed and determined violates our policies, we assign it to the same removal reason as the original video, where available. In other cases when our automated systems flag content, a reviewer assesses the content and assigns a removal reason. 
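
As a rough illustration of the selection logic described above, here is a hedged Python sketch. The severity ranking, the policy names and the intent score are hypothetical placeholders rather than YouTube's internal criteria.

```python
# Hypothetical severity ranking (higher = more severe); the real ordering and
# policy names are internal to YouTube and not published in this report.
SEVERITY = {"spam": 1, "harassment": 2, "hate_speech": 3, "violent_extremism": 4, "child_safety": 5}

def assign_removal_reason(violations, removed_by_automation=False, original_reason=None):
    """Pick one removal reason for a video, loosely mirroring the prose above.

    violations: list of (policy, intent_score) pairs found by a reviewer, where
    intent_score is a made-up measure of how clearly the violation reflects the
    uploader's intent (used only to break ties between equally severe policies).
    """
    if removed_by_automation:
        # Re-uploads inherit the original video's reason where available;
        # automated spam removals fall back to the spam category.
        return original_reason or "spam"
    policy, _ = max(violations, key=lambda v: (SEVERITY[v[0]], v[1]))
    return policy

print(assign_removal_reason([("harassment", 0.4), ("hate_speech", 0.9)]))  # hate_speech
```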

Why do some videos not have an assigned removal reason?

Earlier in YouTube’s enforcement history, reviewers only logged the necessary enforcement action (i.e. strike or age-gate) when reviewing a video, not the reason for removal. Now, when our technology prevents re-uploading an exact match of one of those violative videos, or when reviewers use an older enforcement tool to remove a video, there is no policy reason associated with it. Accordingly, we’ve classified these videos as 'Other'. We are working to reduce the amount of content in this category, including by filling in the policy reasons for removing older content.

How do you determine a video’s country/region of upload?

This data is based on the uploader’s IP address at the time of upload. The IP address usually corresponds with where an uploader is geolocated, unless they are using a virtual private network (VPN) or proxy server. Thus, this data does not distinguish between whether an uploader is geolocated in a given country/region, or using a VPN or proxy server based in that country/region. Also note that the uploader’s IP address does not necessarily correspond with the location where the video was viewed or the location from which a video was flagged (if flagged by a user or Priority Flagger). 

Our Community Guidelines are designed to be enforced at a global level. The uploader’s IP address does not factor into decisions to remove content for policy violations. For information about content removals based on local laws, see Google’s Government requests to remove content transparency report.
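
For illustration only, the snippet below shows how a country-of-upload lookup based on IP address might work in principle. The prefix table (built from documentation-only IP ranges) and the function are hypothetical and do not reflect YouTube's systems; as noted above, a VPN or proxy makes the lookup reflect the server's location rather than the uploader's.

```python
# Hypothetical prefix-to-country table built from documentation IP ranges;
# a real lookup would use a full IP geolocation database.
PREFIX_TO_COUNTRY = {
    "203.0.113.": "AU",
    "198.51.100.": "DE",
    "192.0.2.": "BR",
}

def country_of_upload(ip_address):
    """Best-guess country/region for the uploader's IP address at upload time."""
    for prefix, country in PREFIX_TO_COUNTRY.items():
        if ip_address.startswith(prefix):
            return country
    return "UNKNOWN"

# If the uploader routes traffic through a VPN or proxy exiting in Germany,
# the lookup reports Germany regardless of where the uploader actually is.
print(country_of_upload("198.51.100.42"))  # DE
```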

What leads to a channel-level takedown?

Channel-level takedowns are the consequence of violating our Community Guidelines three-strikes policy, a single case of severe abuse (such as predatory behaviour), or accounts dedicated to a policy violation (such as impersonation). When a channel is terminated, all of its videos are removed. When an account is terminated, the account owner receives an email detailing the reason for the termination. If a user believes their account has been terminated in error, they can appeal the decision.
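
A minimal sketch of the three takedown triggers listed above, in Python; the function and its parameters are hypothetical and only summarise the prose.

```python
def should_terminate_channel(active_strikes, severe_abuse=False, dedicated_to_violation=False):
    """Hypothetical summary of the three channel-takedown triggers described above."""
    return (
        active_strikes >= 3          # three Community Guidelines strikes within 90 days
        or severe_abuse              # a single case of severe abuse (e.g. predatory behaviour)
        or dedicated_to_violation    # an account dedicated to a violation (e.g. impersonation)
    )

print(should_terminate_channel(1, severe_abuse=True))  # True
```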

Is flagged content automatically removed?

YouTube only takes action on videos flagged by users after review by our trained human reviewers to ensure that the content does indeed violate our policies and to protect content that has an educational, documentary, scientific or artistic purpose. However, we do use technology to identify and remove spam automatically, as well as re-uploads of content that we’ve already reviewed and determined violates our policies. In addition, there are certain cases where we may not remove the video altogether, but may disable certain features like commenting or limit the audience to signed-in users over the age of 18. Creators will be notified of enforcement on their videos and can submit an appeal if they believe that we made a decision in error. 

What are the ways that YouTube may restrict a video rather than remove it?

  • Age-restricted. Some videos don't violate our policies, but may not be appropriate for all audiences. In these cases, our review team will place an age restriction on the video when we're notified of the content. Age-restricted videos are not visible to users who are logged out, are under 18 years of age, or have Restricted mode enabled. When we make this decision, we notify the uploader by email that their video has been age-restricted and that they can appeal this decision. Learn more.

  • Limited features. If our review teams determine that a video is borderline under our policies, it may have some features disabled. These videos will remain available on YouTube but will be placed behind a warning message, and some features will be disabled, including sharing, commenting, liking and placement in suggested videos. These videos are also not eligible for monetisation. When we make this decision, we notify the uploader by email that their video will only have limited features and they can appeal this decision. Learn more.

  • Locked as private. If a video is identified as violating our policy on misleading metadata, it may be locked as private. When a video is locked as private, it will not be visible to the public. If a viewer has a link to the video, it will appear as unavailable. When we make this decision, we notify the uploader by email that their video is no longer public and they can appeal this decision. Learn more.

The above actions to restrict videos are not included in the report at this time.
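
For readers who prefer a compact summary, the sketch below models the three restriction states above as plain data. The structure and field names are hypothetical, not a YouTube data model.

```python
# Illustrative summary of the restriction states described above; the structure
# and field names are hypothetical, not a YouTube data model.
RESTRICTIONS = {
    "age_restricted": {
        "visible_to": "signed-in viewers aged 18+ with Restricted mode off",
        "uploader_notified_by_email": True,
        "appealable": True,
    },
    "limited_features": {
        "shown_behind_warning": True,
        "disabled": ["sharing", "commenting", "liking", "suggested videos", "monetisation"],
        "uploader_notified_by_email": True,
        "appealable": True,
    },
    "locked_as_private": {
        "visible_to": "no one (the link appears as unavailable)",
        "typical_trigger": "misleading metadata policy",
        "uploader_notified_by_email": True,
        "appealable": True,
    },
}

print(RESTRICTIONS["limited_features"]["disabled"])
```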

What are actions YouTube or creators may take on comments, short of removing them?

YouTube also provides creators with several options for controlling comments on their videos and channels, including:

  • Disable comments. Creators can decide if they want to turn comments on or off on specific videos. Learn more.

  • Set up comment filters. Creators can set up filters to help them manage new comments and messages, including selecting users whose comments will always be approved or blocked, and adding a list of ‘blocked words’ that will filter any new comments that include those words into the creator’s 'Held for Review' queue. A filter can also hold comments that include links for creator review (a minimal sketch of this kind of filter appears after this section). Learn more.

  • Moderate comments. Creators can choose from several options to moderate their videos and channels, either by individual video or across their channel. Options include holding and reviewing all new comments before they're posted to their video or channel, or holding only potentially inappropriate comments for review. If creators choose to opt in, comments will appear in their 'Held for Review' queue. Creators have the final decision whether to approve, hide or report these comments. Learn more.

The above actions to moderate comments are not included in the report at this time, which only includes data on comments that YouTube removed for violating our policies or filtered as ‘likely spam’. The report also does not include comments removed when YouTube disables the comment section on a video, when a video itself is removed individually or through a channel-level suspension, or when a comment poster’s account is terminated.
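
Here is a minimal sketch of the kind of comment filter described in 'Set up comment filters', assuming a creator-chosen blocked-word list and an option to hold comments containing links. The names and thresholds are hypothetical and this is not YouTube's implementation.

```python
import re

BLOCKED_WORDS = {"spoiler", "giveaway"}   # creator-chosen 'blocked words'
HOLD_LINKS = True                          # hold comments containing links for review

LINK_PATTERN = re.compile(r"https?://|www\.", re.IGNORECASE)

def route_comment(text):
    """Return 'held_for_review' or 'published' for a new comment."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BLOCKED_WORDS:
        return "held_for_review"        # matched a blocked word
    if HOLD_LINKS and LINK_PATTERN.search(text):
        return "held_for_review"        # contains a link
    return "published"

print(route_comment("Huge GIVEAWAY at www.example.com"))  # held_for_review
print(route_comment("Great video, thanks!"))              # published
```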

Who reviews flags to make decisions about removing or restricting content?

Trained human reviewers evaluate flagged videos in order to ensure that they actually violate our policies and to protect content that has an educational, documentary, scientific or artistic purpose. These teams are located in countries around the world, are fluent in multiple languages and carefully evaluate flags 24 hours a day, seven days a week. Reviewers have extensive training in YouTube’s Community Guidelines and often specialise in specific policy areas such as child safety or hate speech. They remove content that violates our terms, restrict content that may not be appropriate for all audiences, and are careful to leave content up when it doesn’t violate our guidelines.

How are the human reviewers trained?

YouTube’s human reviewers go through a comprehensive training programme to ensure that they have a full understanding of YouTube's Community Guidelines – including how to spot and protect content that has an educational, documentary, scientific or artistic purpose. The training is a mix of classroom and online curriculum. To ensure a rich learning experience and knowledge retention, we use frequent tests as part of the training process. A continuous quality assurance programme assesses decisions made by the reviewers and identifies improvement opportunities. When mistakes happen – either via human error or glitches in our technological systems – we work to rectify the error, carefully analyse what happened and put measures in place to prevent similar errors in the future.

Do human reviewers ever make mistakes?

Yes. YouTube’s human reviewers are highly trained but, as with any system, they can sometimes make mistakes, which is why we allow appeals.

What happens from a creator's point of view when a Community Guidelines policy removal or restriction is applied?

Creators are notified of the removal or restriction by email and in their Channel Settings, and we provide a link to appeal (except if the video was removed on privacy grounds). Creators will also see a notification about the removal the next time that they access their YouTube channel on a desktop computer or mobile app. If a creator chooses to submit an appeal, it goes to human review, and the decision is either upheld or reversed. The creator receives a follow-up email with the result.

How else is content identified as violating YouTube's Community Guidelines?

YouTube has long used a mix of humans and technology to enforce its policies. In addition to user flags, we use technology to flag content that may violate our Community Guidelines. This content is generally sent through to trained reviewers to evaluate for potential violations, unless we have a high degree of certainty that the content violates our policies. For example, we have tools to automatically detect and remove spam at scale, based on specific signals that we can confidently associate with abusive practices. We also automatically remove re-uploads of content that we’ve already reviewed and determined violates our policies.

How does automated flagging work?

YouTube’s automated flagging systems start working as soon as a user attempts to publish a video or post a comment. The content is scanned by machines to assess whether it may violate YouTube’s Community Guidelines. YouTube also utilises automated systems to prevent re-uploads of known violative content, including through the use of hashes (or 'digital fingerprints').
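
As a simplified illustration of matching against hashes ('digital fingerprints'), the sketch below compares a new upload against fingerprints of content that has already been reviewed and removed. Real systems rely on perceptual fingerprints that survive re-encoding; the exact-match hashing and names here are assumptions made to keep the example short.

```python
import hashlib

# Fingerprints of content already reviewed and determined to violate our policies.
KNOWN_VIOLATIVE_HASHES = {
    hashlib.sha256(b"previously removed video bytes").hexdigest(),
}

def check_upload(video_bytes):
    """Block known re-uploads automatically; send everything else onward."""
    fingerprint = hashlib.sha256(video_bytes).hexdigest()
    if fingerprint in KNOWN_VIOLATIVE_HASHES:
        return "removed_automatically"       # re-upload of known violative content
    return "published_pending_other_checks"  # may still be flagged by users or systems

print(check_upload(b"previously removed video bytes"))  # removed_automatically
print(check_upload(b"brand new video bytes"))           # published_pending_other_checks
```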

What data is and is not included in the YouTube Community Guidelines enforcement report?

This report includes data on human and automated flags recorded on video content for possible violations of YouTube’s Community Guidelines. In December 2018 we added data on channel terminations, as well as comment removals. Data on channel and video removal reasons are included from September 2018 forward. Other content on YouTube can also be flagged, including comments and playlists, but we are not including that in this report at this time. This report also excludes legal removals, which we share in the Government requests to remove content report, and removals for violations of privacy and copyright.

Is every flag included in this report?

From October to December 2017 (the time period covered by the first report), we received over 30 million human flags on videos. To prevent abuse of our flagging tools, YouTube has systems in place that identify suspicious or exceptionally high flagging volumes. Flags that fall outside these bounds are excluded from the data presented in this report. These systems are adjusted from time to time to ensure that they remain effective against abuse. This is one of many factors that drive changes in quarter-by-quarter flagging volumes. In addition, legal, copyright and privacy violations are excluded from this report, as are flags on content other than videos.
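
Purely as an illustration of excluding out-of-bounds flags, the sketch below drops flags from accounts whose daily volume exceeds a made-up threshold. The threshold and the signal shown are hypothetical; the report does not disclose how YouTube's actual systems work.

```python
from collections import Counter

# Hypothetical daily threshold; the actual signals used to detect suspicious
# flagging activity are not described in the report.
MAX_FLAGS_PER_USER_PER_DAY = 200

def filter_flags(flags):
    """Drop flags from users whose daily flagging volume looks suspicious.

    flags: list of (user_id, video_id) pairs submitted on a given day.
    """
    per_user = Counter(user_id for user_id, _ in flags)
    return [(u, v) for u, v in flags if per_user[u] <= MAX_FLAGS_PER_USER_PER_DAY]

flags = [("user_a", f"video_{i}") for i in range(500)] + [("user_b", "video_1")]
print(len(filter_flags(flags)))  # 1 (user_a's 500 flags are excluded)
```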

What is Priority Flagger?

Priority Flagger is a programme that we developed to enable highly effective flaggers to alert us to content that violates our Community Guidelines via a bulk reporting tool. NGOs and government agencies are eligible to participate in this programme. As part of the onboarding process, all members are provided with training on YouTube’s Community Guidelines. Because participants’ flags have a higher action rate than the average user's, we prioritise them for review. Content flagged by Priority Flaggers is not automatically removed, nor is it evaluated against different policies from content flagged by any other user. Videos flagged by Priority Flaggers are manually reviewed by our human reviewers, who are trained to decide whether content violates our Community Guidelines and should be removed. Learn more.

Are there other ways to report inappropriate content?

If the flagging process does not accurately capture a user’s issue, we have a number of additional reporting mechanisms available for use.

  • Safety and Abuse Reporting tool. Sometimes an individual may need to report more than one piece of content or may wish to submit a more detailed report for review. The Safety and Abuse Reporting tool can be used to highlight a user’s comments and videos and to provide additional information about a concern. If a user feels that he or she has been targeted for abuse, this tool is the best option to report content.

  • Privacy complaints. An individual can use the privacy complaint process to report content that they believe violates their privacy.

  • Legal issues (including copyright). An individual may report a legal issue on his or her own behalf or on behalf of a client via our legal web forms.

  • Critical injury or death. We do our best to respect the wishes of families regarding footage of their loved ones being critically injured. Requests to remove such content may be submitted through our web form.

What causes an update to historical report data?

We are committed to providing transparency through this report. In some cases, we may discover an error in the data after we have posted it. When it materially affects the data that we have released in a previous quarter, we will correct the report’s historical data and include a note about the change. 

How is Violative View Rate (VVR) calculated?

We first take a sample of all videos that have been viewed on YouTube. The videos in that sample are then sent for review, and our teams determine whether each video does or does not violate our Community Guidelines. We then use the aggregate results to estimate the proportion of views on YouTube that are of videos which violate our Community Guidelines. The VVR metric is reported with a 95% confidence interval. This means that if we performed the measurement many times for the same time period, we would expect the true metric to lie within the interval 95% of the time. The confidence intervals do not take into account rater quality, which may impact our measurements. We evaluate the quality of our raters on a regular basis to ensure a high accuracy of review decisions.
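
As a worked example of this kind of estimate, the sketch below computes a sampled proportion with a 95% confidence interval using a normal approximation. The sample numbers and the interval method are illustrative assumptions, not the report's methodology.

```python
import math

def violative_view_rate(violative_views, sampled_views, z=1.96):
    """Estimate VVR from a random sample of views, with an approximate 95%
    confidence interval (normal approximation; illustrative only)."""
    p = violative_views / sampled_views
    margin = z * math.sqrt(p * (1 - p) / sampled_views)
    return p, (max(0.0, p - margin), p + margin)

# Made-up example: 33 of 20,000 sampled views were on violative videos
rate, (low, high) = violative_view_rate(33, 20_000)
print(f"VVR estimate: {rate:.2%}, 95% CI: ({low:.2%}, {high:.2%})")
```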

Prior to Q1 2019, VVR was based on an aggregate of reviews for only the final 28 days of a given quarter (e.g. Q4 2018 is represented by the aggregate of views from 3 December 2018 to 31 December 2018) and should be considered an end-of-quarter measurement. From Q2 2019 forward, we aggregate reviews across the entire quarter. During the last two weeks of Q1 2020 and the first two weeks of Q2 2020, our rater capacity for VVR reviews was constrained due to COVID-19 and we were not able to review all samples, so the data from a portion of these periods may not be representative.

Which policies are included in VVR?

Violative View Rate measures the proportion of views on videos that should have been removed by YouTube for violating our Community Guidelines. The metric does not include non-violative videos that were removed as a result of a channel-level removal, and we omit spam from the metric altogether because spam channel removals make up the majority of spam removals. VVR also excludes livestreams, but includes livestreams that have been converted to on-demand videos.