The realization that the President of the United States might have been directly responsible for the assault on the US Capitol prompted all major social media platforms to terminate his accounts, for fear that a milder course of action might result in further incitement to violence. Never before have private companies acted to muzzle the ability of the elected leader of one of the most powerful countries in the world to communicate with the public. Then again, at no previous time in history has it been possible for private entities to do so.
The internet is the most efficient data communication network we have ever built. This is largely because its transport layer is designed to be data blind—responsible for moving data packets without knowing what they contain. The closer you get to the edges of the internet, the more this blindness dissipates; because internet platforms are often aware of the content they host, they can be held liable for any offensive or illegal user-generated content found on their platforms.
It was to protect the fledgling internet industry from this liability that the US government enacted Section 230 of the Communications Decency Act, granting internet intermediaries immunity from content liability. In doing so, the US sought, by operation of law, to create a presumption of blindness for businesses at the edge, replicating through legal fiction the design of the pipes at the core. This legal presumption of blindness was conditional on companies at the edge behaving just like the pipes at the core of the internet—refraining from interacting with the content shared on their platforms and serving it up as is, without any moderation whatsoever. This is why internet companies have historically been reluctant to take down content, preferring to be directed to do so by a court rather than making such decisions on their own.
But as the internet evolved, a few large companies at the edge became gateways to our online interactions, functioning as funnels through which most user activity flowed. With the rise in user numbers, it was no longer feasible for these platforms to operate as dumb pipes, serving up content without heed to what it contained. The problem was only exacerbated as their international footprint grew and regional variations in law and convention forced them to contend with different requirements in every new country they expanded into.
It soon became apparent that they had no option but to function as gatekeepers of user behaviour. They were forced, as a result, to develop ever more detailed codes of conduct setting out how users were expected to behave on their platforms, even though, by doing so, they risked losing their immunity from prosecution.
These codes of conduct are the basis on which online platforms function today. A failure to comply with them can result in suspension, and given how important these platforms are to our day-to-day interactions, the threat of being cut off has a powerful coercive effect. Every now and then, these provisions have been invoked to suspend (or even expel) users whose behaviour has been egregious. Unlike in the early days of the internet, when online companies did all they could to avoid playing this role, today it is the platforms at the edge that determine what is right or wrong—and they do so on the basis of the values and principles enshrined in their codes of conduct.
Until last week, it was not clear just how far this ability to control user behaviour would be taken. Despite the power they wield, social media companies have always exercised restraint, particularly when it came to censoring the speech of persons with political influence. But following the assault on the US Capitol, every major social media company independently came to the conclusion that a line had been crossed, and that the access of the President of the United States to online audiences needed to be curtailed.
While much of the commentary since then has been about how social media companies have too much power, I believe the question we should really be asking is why this came to pass. When we gave companies at the edge of the internet immunity from liability, we did so because we believed that communication infrastructure should have no opinion on the content it carried. But merely granting immunity from prosecution does not solve the problem of offensive content. At best, it passes the buck. What we needed was a framework for determining acceptable speech—a framework that we should have rolled out at the same time as we extended immunity from prosecution to internet intermediaries. Such a framework could have been designed for the internet, giving online companies the ability to bake these restrictions directly into the tools and filters they use to automatically regulate content. Had we done this, it would have given us an appropriate counterpoint to intermediary liability protection, and would have saved us from the situation we find ourselves in.
It is not too late to remedy this. Even now, governments can take it upon themselves to develop prohibited-content dashboards that give internet companies clear directions on what sort of content is allowed and what is not. Not only will that provide clarity as to what is permitted, it will also vest responsibility where it properly lies.
Rahul Matthan is a partner at Trilegal and hosts a podcast called Ex Machina. His Twitter handle is @matthan