Social media platforms have an out-of-Africa problem


Summary

The violent content that AI cannot filter out gets outsourced as traumatic moderation work to workers in Africa

Many people are watching snippets of the war in Ukraine on their phones. Social media is deluged with photos, video clips and satellite images. Satellites have photographed Russian bases and shown the destruction caused by Russian attacks on Ukrainian cities. However, a satellite can photograph a particular place only about once a day, and only when the weather is clear with no cloud cover; after that, it speeds away along its orbit.

As we know, the picture painted by online posts, whether about war or anything else, is not always accurate. Social media is no different in wartime than it is in peacetime. Thousands of videos and photos are being posted daily from Ukraine, but most people see only the handful that get the most ‘likes’ and ‘shares’. In some cases, such as TikTok’s, this is because the platform’s algorithms are extraordinarily well tuned to present what its users want to see.

Interestingly, it appears that TikTok may be the social media platform best designed for war, in that it is a video platform that is also instant. YouTube needs an elaborate video set-up before one can post a clip on the platform, while other social media sites such as Twitter and Meta’s Facebook and Instagram carry a whole host of non-video content as well. All a person needs to post a video grab on TikTok is the camera that every smartphone comes equipped with today. The platform is easy to use. Short clips can be posted on TikTok instantly, creating an almost endless stream of war footage ‘crowdsourced’ from anyone on the ground with a smartphone.

On the other hand, all social media platforms have been doing too little to moderate the violent content that gets spread. They claim that the artificial intelligence (AI) they use is up to the task of filtering out the truly despicable content that some people choose to post online. Despite these claims, sweatshops have existed in the US for outsourced work that involves having low-paid employees view gruesome videos posted online by the dark underbelly of humanity. I have written in this space before that these sweatshops have been documented to drive their workers to psychological breaking points while doing their traumatic jobs. That such sweatshops exist is testament that an ‘AI can do everything’ approach does not work.

And this is where the travesty behind Big Tech’s Janus-like approach lies. In 2019, a disturbing report in The Verge (bit.ly/472zNaL) pointed to the dark and despicable work of policing content on online social networks. Facebook (now Meta) and other social network firms allegedly pay little attention to the workers, whether their own or those of contracted firms, who monitor their gigantic sites for objectionable content.

That report presented a chilling view of the operations of an information technology service provider retained by Meta to monitor content on its platform. Evidently, at least one employee died of a heart attack suffered while at work. These guardians of our mental health are poorly paid and must frequently endure audio-visual content that graphically depicts people’s inhumanity towards fellow humans and animals. The job of the censors who must see all this is unenviable, for it is they who must expunge the horrors that twisted human beings put up online. According to The Verge, this work was outsourced to the IT firm Cognizant and carried out at a centre in Tampa, Florida, in the US.

I am told that these centres in the US have since been shut down, but it appears that Big Tech and its outsourcing providers have simply offshored this trauma-tinged work to a different part of the world. A Wired magazine report (bit.ly/3pSKwUJ) makes note of this by citing a recent ruling from a Kenyan court. It states: “A court in Kenya issued a landmark ruling against Meta, owner of Facebook and Instagram. The US tech giant was, the court ruled, the ‘true employer’ of the hundreds of people employed in Nairobi as moderators on its platforms, trawling through posts and images to filter out violence, hate speech and other shocking content. That means Meta can be sued in Kenya for labor rights violations, even though moderators are technically employed by a third-party contractor.” The report also says that the magazine has seen internal TikTok documents leaked to Foxglove Legal, a non-governmental organization.

According to these documents, TikTok has been watching the court proceedings in Kenya closely, since it too has outsourcing arrangements in Kenya and other African countries such as Morocco, through the Luxembourg-based contractor Majorel. These engagements centre on having human moderators watch and reject objectionable videos that get posted online.

It is true that some such videos are of value, especially to law enforcement agencies: we have recently seen this in India, where perpetrators of horrific mob violence in Manipur can now be brought to book, and in France, where videos served as evidence against excessive use of force by the police. Nonetheless, having to sift, day in and day out, through videos that show humankind’s inhumanity to fellow humans is patently horrendous for anyone’s mental wellness.

War is abhorrent, whether it happens in Ukraine or elsewhere. And a human being’s mental wellness is just as fragile, whether inhumane videos are watched in America or in Africa.

Siddharth Pai is co-founder of Siana Capital, a venture fund manager.
