With hundreds of hours of new content uploaded to YouTube every minute, we use a combination of people and machine learning to detect problematic content at scale. Machine learning is well-suited to detect patterns, which helps us to find content similar to other content we've already removed, even before it's viewed.
We also recognize that the best way to quickly remove content is to anticipate problems before they emerge. Our Intelligence Desk monitors the news, social media, and user reports to detect new trends surrounding inappropriate content, and works to make sure our teams are prepared to address them before they can become a larger issue.
The YouTube community also plays an important role in flagging content they think is inappropriate.
If you see content that you think violates Community Guidelines, you can use our flagging feature to submit content for review.
We developed the YouTube Trusted Flagger program to provide robust content reporting processes to government agencies and non-governmental organizations (NGOs) with expertise in a policy area. Participants receive training on YouTube policies, including occasional online training, and have a direct path of communication with our Trust & Safety specialists. Videos flagged by Trusted Flaggers are not automatically removed. They are subject to the same human review as videos flagged by any other user, but we may expedite review by our teams.
Sometimes videos that might otherwise violate our Community Guidelines may be allowed to stay on YouTube if the content offers a compelling reason with visible context for viewers. We often refer to this exception as "EDSA," which stands for "Educational, Documentary, Scientific or Artistic." To help determine whether a video might qualify for an EDSA exception, we look at multiple factors, including the video's title, description, and the context provided.
EDSA exceptions are a critical way we make sure that important speech stays on YouTube, while protecting the wider YouTube ecosystem from harmful content.