
Opinion | When horror goes live on the web and fools fail the world

AI tools exist to scan objectionable content, but the human decisions on intervention often result in little or no action being taken

Last Friday’s horrific shootings at two New Zealand mosques yet again highlighted Big Tech’s inability to manage its spawn. Mint reported on Saturday that social media platforms, including Facebook Inc., are facing harsher scrutiny after the shooter appeared to have live-streamed his murders on the internet for 17 nauseating minutes.

The shooter had given advance warning on online channels such as 8chan, but this triggered no alarms. Big Tech’s algorithms and content-policing systems tried to contain the damage after the fact. Despite this, various versions of the video were readily available on YouTube, accessible via simple keywords such as the shooter’s name. To no one’s surprise, TV and media channels started airing some of the footage while reporting the event.

The repulsive video even made its way to my school classmates’ WhatsApp group. I deleted it and asked the person who posted it to take it down. These are men who are all in their 50s. Do we not think, for at least a second, before we share such abhorrent content?

It looks as though an imbecilic tendency latent in human nature wins out. Umair Haque, in a blog post titled The Age of the Imbecile that appeared on Medium.com last March, defines our current age as “catastrophically stupid”. Haque’s piece is wide-ranging: it touches on several socio-economic and political issues and scolds the reader, arguing that it is we individuals, the world over, who have actually chosen all this and made our present world one of “futility, emptiness and hollowness”.

Facebook’s and Google’s spokespersons put out the usual statements, such as “Our hearts go out to the victims of this terrible tragedy. Shocking, violent, and graphic content has no place on our platforms, and we are working actively to remove it” and “as with any major tragedy, we will work cooperatively with the authorities”.

However, it is possible that Big Tech has grown too large to manage and moderate the content posted on its platforms. These companies all have Artificial Intelligence (AI) tools trained to scan for objectionable content such as child pornography and violence, but those tools are clearly not sufficient by themselves; human moderators still have to act on what the tools flag.

The media outfit Motherboard, which covered Facebook’s handling of the problem at Motherboard.vice.com when it surfaced, had this to say: “Like any content on Facebook, be those posts, photos or pre-recorded videos, users can report live broadcasts that they believe contain violence, hate speech, harassment or other terms of service violating behaviour. After this, content moderators will review the report and make a decision on what to do with the live stream.”

According to an internal training document for Facebook content moderators obtained by Motherboard, moderators can “snooze” a Facebook Live stream, meaning it will resurface every 5 minutes for moderators to check whether anything has developed. Moderators can also ignore it (essentially closing the report), delete the stream, or escalate it to a specialized team that scrutinizes it for a particular type of objectionable material. In the case of terrorism, escalation flags the stream for Facebook’s law enforcement response team, which works directly with the police. Here, Facebook told Motherboard it had been in contact with New Zealand law enforcement since the start of the incident.
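Stripped to its essentials, the reported workflow looks something like the sketch below. To be clear, this is my own illustration of the options Motherboard describes; the names and the decision logic are assumptions, not Facebook’s actual internal tooling.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List


class Action(Enum):
    SNOOZE = auto()    # resurface the report in 5 minutes for another look
    IGNORE = auto()    # close the report without touching the stream
    DELETE = auto()    # take the live stream down
    ESCALATE = auto()  # hand off to a specialized team; a terrorism escalation
                       # reaches the law enforcement response team


@dataclass
class LiveStreamReport:
    stream_id: str
    reason: str               # what the user reported: "violence", "hate speech", ...
    warning_signs: List[str]  # e.g. "crying, begging, pleading", "sound of guns"
    still_live: bool


def review(report: LiveStreamReport, violation_confirmed: bool) -> Action:
    """A hypothetical moderator decision over the four reported options."""
    if report.warning_signs:
        # Warning signs such as pleading or the display of weapons go straight
        # to the specialized team.
        return Action.ESCALATE
    if violation_confirmed:
        return Action.DELETE
    if report.still_live:
        # Nothing conclusive yet: snooze, so the stream resurfaces in 5 minutes.
        return Action.SNOOZE
    return Action.IGNORE


# Example: a reported stream showing weapons gets escalated immediately.
report = LiveStreamReport("stream-123", "violence", ["display of weapons"], still_live=True)
print(review(report, violation_confirmed=False))  # Action.ESCALATE
```

What the sketch makes plain is that every branch still rests on a human judgement call, made in real time, while the stream runs.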

These documents evidently also tell human moderators to look for warning signs in live videos, such as “crying, begging, pleading” and the “display or sound of guns or other weapons in any context”.

On Sunday, Facebook said it had removed 1.5 million videos of the attack that had been posted worldwide, including 1.2 million that were blocked before they could be uploaded. Google, meanwhile, clarified that videos of the shooting that have “news value” will remain up. This brings up a whole host of moral and legal/regulatory questions. Facebook, YouTube and others have carved out specific exceptions for news organizations, which means that the same video clip that is taken down from an individual poster’s account as hate speech is allowed to run in a news report by a TV channel that uses social media to extend its reach.

Google’s YouTube has vetting tools broadly similar to Facebook’s, but only about 70% of the videos it removes are taken down by its algorithms. According to Google’s transparency site for YouTube, approximately 6.2 million videos were taken down by automated programmes, out of a total of 8.7 million deleted last quarter. A further 1.9 million were taken down through human “Individual Trusted Flaggers”, and 600,000 because of user flagging. Non-governmental organizations accounted for about 29,000.
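To put those figures in perspective, a quick back-of-the-envelope calculation using only the numbers quoted above (the breakdown is as reported; the script is mine) gives the shares by flagging source:

```python
# Shares of YouTube take-downs by flagging source, using the figures quoted above.
removals = {
    "automated programmes": 6_200_000,
    "individual trusted flaggers": 1_900_000,
    "user flagging": 600_000,
    "non-governmental organizations": 29_000,
    "government agencies": 52,
}
total = 8_700_000  # total videos deleted last quarter, as reported

for source, count in removals.items():
    print(f"{source}: {count / total:.2%}")

# Automated programmes work out to roughly 71% (the "about 70%" above),
# trusted flaggers to about 22%, user flagging to about 7%, NGOs to about 0.3%,
# and government agencies to well under a thousandth of one percent.
```

The categories roughly add up to the 8.7 million total, and the government share is, for all practical purposes, zero.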

A paltry 52 of the 8.7 million take-downs were traced to government agencies. To my mind, that can only mean one of two things: (a) that the company is doing such a phenomenal job that governments have only had to request take-downs in rare cases, or (b) that governments are simply not doing enough, and are instead relying on social media platforms to resolve these issues themselves. In my opinion, it is the second.

I am at a loss to understand why governments do not get tough with these platforms. Why is there no punitive action? Europe has floated a proposal to fine platforms that allow such hateful content to remain online for more than an hour. Maybe it is time that proposal was passed, not just in Europe but across the world.

Siddharth Pai is founder of Siana Capital, a venture fund management company focussed on deep science and tech in India.

