
YouTube updates policies to remove hateful, violent content

  • The company will remove content denying that well-documented violent events took place
  • YouTube wants to reduce the spread of content that comes right up to the line

New Delhi: YouTube on Thursday announced several updates to its policies aimed at tackling hate and protecting the YouTube community from harmful content.

In 2017, the company introduced a tougher stance towards videos with supremacist content, including limiting recommendations and restricting features like comments and the ability to share the video. This step dramatically reduced views of these videos (by 80% on average). From today, YouTube will also prohibit videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status. This would include, for example, videos that promote or glorify Nazi ideology, which is inherently discriminatory. Finally, the company will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.

The company said that while some of this content has value to researchers and NGOs looking to understand hate in order to combat it, it is exploring options to make such material available to them in the future. And as always, context matters, so some videos could remain up because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events.

In addition to removing videos that violate its policies, YouTube also wants to reduce the spread of content that comes right up to the line. In January, the company piloted an update to its systems in the US to limit recommendations of borderline content and harmful misinformation, such as videos promoting a phony miracle cure for a serious illness or claiming the earth is flat. Thanks to this change, the number of views this type of content gets from recommendations has dropped by over 50% in the US.

Finally, the company said it is critical that YouTube's monetization systems reward trusted creators who add value to the platform. YouTube has longstanding advertiser-friendly guidelines that prohibit ads from running on videos that include hateful content, and in order to protect the ecosystem of creators, advertisers and viewers, the company tightened its advertising criteria in 2017. In the case of hate speech, it is strengthening enforcement of the existing YouTube Partner Program policies: channels that repeatedly brush up against its hate speech policies will be suspended from the YouTube Partner Program, meaning they will not be able to run ads on their channel or use other monetization features like Super Chat.

YouTube will begin enforcing this updated policy today, and will gradually expand coverage over the next several months.
