Content is removed or restricted on YouTube when it’s found to violate one of our policies, such as our Community Guidelines, or when it violates a law. To determine whether content is violative, we use a combination of automated systems and human reviews.
Our automated systems use machine learning, which allows them to use data from previous human reviews to identify potentially violative content.
Most of our systems are continuously supplied with millions of data points from human reviews, which allows them to detect violations with a high level of accuracy. Automated systems also let us respond to users quickly despite the high volume of content that YouTube receives.
When our systems have a high degree of confidence that content is violative, they may make an automated decision. However, in the majority of cases, our automated systems will simply flag content to a trained human reviewer for evaluation before any action is taken.
When a human reviewer checks potentially violative content, it means a trained human evaluates the content and makes a decision based on the relevant policy or law.
If content is found to be violative, our human reviewers may remove it, or age-restrict it if it isn't appropriate for all audiences. If the content has an educational, documentary, scientific, or artistic purpose, we may allow it to remain on YouTube.
After a content decision is made, if the decision is appealed, a human will review the appeal and evaluate it on a case-by-case basis.
If you think our automated systems or human reviewers made a mistake or if you disagree with a content decision, learn about your resolution options.
Frequently asked questions (FAQs)
Why does YouTube use automated systems to review content?
Every minute, hundreds of hours of new content are uploaded to YouTube. Because of this, automation is necessary to manage this huge amount of content efficiently while providing timely decisions to our users.
Keep in mind that automated decisions are only made in cases where our systems have a high degree of confidence that content is violative. Otherwise, potentially violative content is flagged for one of our trained human reviewers to evaluate.
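In simplified form, the routing described above works like a confidence threshold: a high-confidence result can lead to an automated decision, while anything else is flagged for human review. The sketch below is purely illustrative; the threshold value, names, and logic are hypothetical and do not describe YouTube's actual systems.

```python
# Illustrative sketch only: a simplified confidence-threshold routing,
# not YouTube's actual systems. The threshold and labels are hypothetical.

AUTOMATION_THRESHOLD = 0.98  # stand-in for "high degree of confidence"

def route_content(violation_confidence: float) -> str:
    """Decide how potentially violative content is handled."""
    if violation_confidence >= AUTOMATION_THRESHOLD:
        # System is highly confident: an automated decision may be made.
        return "automated decision"
    # Lower confidence: flag for a trained human reviewer instead.
    return "human review"

print(route_content(0.99))  # high confidence
print(route_content(0.60))  # lower confidence
```

The key design point is that automation only acts on its own above the confidence threshold; everything else defers to a person.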
What is machine learning?
Machine learning is a type of artificial intelligence (AI) that allows computers to perform complex tasks in a way that’s similar to how humans perform tasks. To do this, machine learning uses large sets of data to train computers to recognize patterns and learn the actions to take in different situations.
What happens after content is reviewed?
After your content is reviewed and a decision is made, you'll get an email from YouTube that explains the decision and the relevant policy or law that the content violated.
If you think our automated systems or human reviewers made a mistake or if you disagree with a content decision, you can learn about your resolution options and proceed with the best one for your situation.
How YouTube identifies Community Guidelines violations
How YouTube evaluates Educational, Documentary, Scientific, and Artistic (EDSA) content
How YouTube reviews copyright removal requests
YouTube Partner Program (YPP)
How YouTube enforces YouTube monetization policies
How YouTube reviews YPP applications
How YouTube determines if content should be removed for a privacy violation