The news this past week has been all about Elon Musk’s takeover of Twitter and how things are changing in the organization since he assumed the designation of ‘Chief Twit’. While much of the discussion has been around staff layoffs, in the circles I inhabit, equal attention is focused on how his changes will affect the quality of speech on the platform.
Twitter has always seen itself as the world’s public square—the platform everyone goes to for news on events as they unfold. It is one of the few mainstream spaces where people can still speak their mind without fear of being censored; where all the most powerful protest movements of our time—from the Arab Spring to the ongoing protests in Iran—began. It is also the place where some of the thorniest questions around content moderation are being addressed: What speech should be permitted, and on what grounds must posts be taken down?
As with all difficult problems, there are no easy solutions. Twitter has had its share of controversy, from the suspension of a prominent Indian lawyer for posting an anti-fascist profile picture to the de-platforming of former US President Donald Trump. It has been taken to task by governments for failing to heed their instructions and by civil society for giving in too easily to their demands.
Musk has said he wants to fix this and make Twitter “the most accurate source of information about the world”. So far, his ideas have ranged from setting up a content moderation council to making blue-tick ID verification available for a price. But I am not sure a problem of this magnitude can be solved by just putting a new hand at the wheel.
Content moderation is a wicked problem. Posts that some see as banal, others view as objectionable. Because these conversations are accessible from everywhere, content that might be acceptable in one part of the world could well be blasphemous elsewhere. No platform can ever hope to moderate content to everyone’s satisfaction, particularly where the issues involved need resolution at global scale.
The core challenge is context. Content that is innocuous in one context can be deeply offensive in another. Moderation decisions need to account for nuances across region, language and social group. For a global platform like Twitter, arriving at even the semblance of a balanced outcome in every instance is near impossible.
Platforms have attempted to deal with this by giving content moderators step-by-step instructions on how to take these decisions. They’ve reduced the moderation process to a set of carefully scripted workflows that describe in as much detail as possible what can stay up and what must be taken down. But no amount of detail can eliminate the subjective biases that human moderators bring with them. As a result, decisions on functionally similar pieces of content can have very different outcomes.
The problem lies not with the mechanics of the content moderation process, but with the fact that we are attempting to carry out this moderation on a single global platform. By having all our conversations in the same noisy space, we’ve increased the odds of conflict between different groups with opposing viewpoints. At the scale at which this is taking place today, it is impossible for anyone to hope to effectively moderate it.
The answer lies not in improving the quality of moderation, but in federating our conversations. We need to make it possible for groups of like-minded individuals to organize themselves into dedicated online spaces where they can say what they want without it being taken out of context and causing offence.
These spaces could connect with one another through interfaces that users can adjust, so that each person hears and sees only the content they want, based on their personal tolerance for different types of speech. If we do this, we will move decision-making away from the centre and towards the edges of the network. In doing so, we can ensure that power is no longer concentrated in the hands of a few decision-makers at large tech companies.
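To make that idea concrete, here is a minimal sketch, in Python, of what edge-side moderation could look like: the decision about what to display is taken on the user’s own client, against preferences the user controls, rather than on a central server. Everything in it is hypothetical; the labels, the preference fields and the function names are illustrative, not part of any real protocol.

```python
# A hypothetical sketch of edge-side filtering: the moderation decision
# runs on the user's own client, against preferences the user controls.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: set = field(default_factory=set)  # e.g. tags a server might attach

@dataclass
class UserPreferences:
    # Each user sets their own tolerance; no central moderator is involved.
    blocked_labels: set = field(default_factory=set)
    muted_authors: set = field(default_factory=set)

def visible(post: Post, prefs: UserPreferences) -> bool:
    """Decide locally whether this user's feed should show a post."""
    if post.author in prefs.muted_authors:
        return False
    return not (post.labels & prefs.blocked_labels)

# Two users see different feeds from the same underlying stream.
feed = [
    Post("alice@one.example", "A calm take on the news"),
    Post("bob@two.example", "A heated political rant", {"politics"}),
]
tolerant = UserPreferences()
cautious = UserPreferences(blocked_labels={"politics"})
print([p.text for p in feed if visible(p, tolerant)])   # sees both posts
print([p.text for p in feed if visible(p, cautious)])   # sees only the first
```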
To implement this, we will need to make a shift from platforms to protocols. We will have to renounce our dependence on platforms for our social networking needs and embrace federated social networking based on protocols that offer the freedom to create content and interact with others in a way that we can more fully control.
This is not as daunting as it might sound. For a while now, federated networks like Mastodon have offered their small but growing user base a calmer, less frenetic alternative to existing social networking platforms. Others, like Bluesky, are in the works. These new environments take some getting used to, particularly for those of us accustomed to the dopamine-inducing algorithms that power popular platforms. But once we understand how they work, the freedom and flexibility they offer are refreshingly different.
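Mastodon makes a useful illustration of the protocol-first model because its API is openly documented. The short sketch below reads a few posts from an instance’s public timeline over plain HTTPS, with no platform account and no proprietary SDK; the instance name is only an example, and the sketch assumes the instance permits unauthenticated access to its public timeline, as most do.

```python
# Read a Mastodon instance's public timeline via its documented REST API.
# Uses only the standard library; assumes network access and that the
# instance allows unauthenticated reads of its public timeline.
import json
import urllib.request

INSTANCE = "mastodon.social"  # any Mastodon instance would do
url = f"https://{INSTANCE}/api/v1/timelines/public?limit=5"

with urllib.request.urlopen(url, timeout=10) as resp:
    statuses = json.load(resp)

for status in statuses:
    author = status["account"]["acct"]
    # `content` is HTML; a real client would render or strip the markup.
    print(f"{author}: {status['content'][:80]}")
```

Because the interface is a published protocol rather than a private platform API, any client built this way keeps working across thousands of independently run servers.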
None of this should feel strange to us in India. For over a decade now, we have eschewed platforms in favour of protocols. All our digital public infrastructure—from UPI to DEPA and Beckn—has been built in this manner, demonstrating how diverse digital systems can link up to share data while still enabling private, customer-focused innovation.
It’s time to take the lessons we’ve learnt and apply them to social networks.
Rahul Matthan is a partner at Trilegal and also has a podcast by the name Ex Machina. His Twitter handle is @matthan