
Opinion | Will Facebook’s oversight board sanitize the internet?

We must solve our problem of content moderation if such a body is to have any effect at all

Last week, Mint reported that Facebook had expanded on its announcement of the oversight board it formed some months ago. The report described the board's aims and structure, and analysed Facebook's record on free speech. Separately funded, the board is not meant to be under Facebook's control. It will independently review content decisions and act as a “supreme court” for users appealing against the company's internal processes for taking down objectionable content. It can override decisions taken by Facebook's own internal watchdogs and committees. Its remit will not cover sovereign government requests, for which Facebook has a separate process.

The board will now have over 20 members from across the globe, including Sudhir Krishnaswamy, vice-chancellor of National Law School of India University in Bangalore, human rights advocates, a former US judge, a former prime minister of Denmark and a Yemeni Nobel laureate.

For now, the board will focus only on Facebook and its subsidiary Instagram. Its remit may expand to include the company's other platforms, such as WhatsApp, but the real story lies in the impact it is likely to have on the internet per se. For some time now, a debate has raged over whether online social networks are mere conduits for free speech or publishers of news, which would call for stricter governmental control and make them liable for defamation and libel. Such platforms have also been acting as conduits for actual news, often without acknowledging or paying news organizations for content. This has put Facebook, Google's YouTube and others in the crosshairs of regulators in many countries. In Europe, regulators recently ruled that Google must pay news organizations for displaying their content in its search results.

Facebook's move is smart. Firms with immense market power are better off self-regulating before official bodies impose fines or clamp down. At the extreme, large companies could even be broken up, as happened to AT&T and as was once attempted with IBM. But it seems likely to me that other governments will see Facebook's model as a template for legislation requiring internet companies to form similar oversight mechanisms. The danger lies in the creation of yet another layer of red tape and reporting requirements before the internet has a chance to truly discover how such mechanisms could self-correct.

If internet platforms with contentious content, such as YouTube, Snapchat and TikTok, see that governmental regulation could take a course that their lobbying cannot stop, they will look for another way to head off regulation. Investing in Facebook’s oversight board and submitting to its authority could be just the opportunity they are looking for.

This would leave the policing of internet content in private hands, much as the internet's domain name system was privatized in 1998. That system was originally run by the US Department of Commerce, which eventually concluded that control of website names was a global issue best handled outside government.

All this noise about a global oversight organization should not draw our attention away from the front line in the war against reprehensible content. People with sick minds are everywhere, including on the internet, and routinely post hate speech, child pornography, crime videos, and live streams of the torture, maiming and killing of animals and humans. Facebook, YouTube and others have argued in representations to regulators that artificial intelligence (AI) will be able to police such content and keep it off their platforms.

The sad truth is that AI is nowhere near being able to vet and keep out objectionable content the way it needs to be. And this is where the true travesty of Big Tech’s Janus-like approach lies. Last year, a disturbing report on theverge.com pointed to the underbelly of policing content in the world of social networks online.

Facebook and other social network firms allegedly do not pay much attention to their workers—or those of contracted firms—who monitor their gigantic sites for objectionable content.

The aforementioned report presented a chilling and disturbing view of the operations of an information technology service provider retained by Facebook to monitor content on its platform. At least one employee reportedly died of a heart attack suffered while at work. These guardians of our mental health are poorly paid and must frequently endure audio-visual content that graphically depicts people's inhumanity towards fellow humans and other sentient beings. The job of these censors is unenviable: it is they who must expunge the horrors that twisted human beings put up online, and it is their decisions that Facebook's new “supreme court” will now sit in judgment of.

It may be true that advances in AI will relieve workers of the trauma that these jobs could inflict, and the sooner such technology can help the process of filtering out stuff that should not be available to anyone in the world, the better.

But we cannot let vague promises about the future of AI, and the politically strategic but still sanctimonious setting up of independent oversight boards, give social media platforms a pretext to turn a blind eye to the mental torture faced by real human beings who are toiling on the front line of internet content control.

Siddharth Pai is founder of Siana Capital, a venture fund management company focused on deep science and tech in India
