Meta, YouTube use AI to remove ‘war crime’ videos but results continue to be subpar
2 min read · 01 Jun 2023, 06:57 PM IST
Meta and YouTube are using AI to remove disturbing content from their websites, but experts are sceptical of its effectiveness in distinguishing between violent content and evidence of human rights abuses.

Meta and YouTube are using artificial intelligence (AI) to remove disturbing content from their websites, BBC reported on Thursday.
According to the report, the social media giant Meta and the world's largest video platform, YouTube, are deploying AI to take down 'disturbing' content. However, experts are sceptical of the move: they believe the AI technology cannot distinguish between videos of violence and videos that could serve as evidence of human rights abuses.
For example, when Ihor Zakharenko, a former journalist, tried to post videos of human rights abuses by the Russian Army in Ukraine on Facebook and Instagram, they were swiftly taken down. When he re-uploaded the videos from dummy accounts, three of the four were removed within just 10 minutes, the BBC report said.
Content moderation policies
The social media companies' community guidelines permit the removal of videos of graphic violence, but they also state that such videos may be allowed if shared in relation to important and newsworthy events. Experts say, however, that the AI technology cannot make these kinds of nuanced judgments.
“Graphic violence is not allowed and we may remove videos or images of intense, graphic violence to make sure that Instagram stays appropriate for everyone. If shared in relation to important and newsworthy events, and this imagery is shared to condemn or raise awareness and educate, it may be allowed," Instagram's community guidelines said.
“Violent or gory content intended to shock or disgust viewers, or content encouraging others to commit violent acts, is not allowed on YouTube," the platform's guidelines said.
YouTube's guidelines mention specific scenarios like road accidents, natural disasters, war aftermath, terrorist attack aftermath, street fights, physical attacks, immolation, torture, corpses, protests or riots, robberies, medical procedures, or other such scenarios with the intent to shock or disgust viewers.
Balancing act
Meta and YouTube promise to strike a balance between their duty to bear witness and their duty to protect users from harmful content. In practice, however, using a general-purpose AI tool to remove every type of violent content falls far short of maintaining that balance.
Experts quoted by the BBC suggest that social media companies should either take AI out of the content moderation process and rely on human reviewers to decide what content to remove, or develop AI that is better at distinguishing between videos of violence and videos that could be evidence of human rights abuses.