
Opinion | Shield online platforms for content moderation to work

Immunity for Good Samaritan moderation will motivate internet firms to take down illegal content

Cars that drive themselves and robots that identify and eradicate weeds are some of the products where artificial intelligence is being used. Photo: iStock


Last week, US President Donald Trump publicly reacted to the protests that followed the killing of George Floyd with a tweet that ended with the words “... when the looting starts, the shooting starts.” Shortly after, Twitter, for the first time in its history, decided to hide a presidential tweet behind a warning label stating that his message glorified violence. This decision did not go down well with the Oval Office. Twitter had already fact-checked the President’s allegations of voter fraud through mail-in ballots, and it seemed as if Twitter was purposely denying the President of the United States his right to free speech.

The White House swiftly issued an executive order stating that social media companies had to be passive bulletin boards and could not actively restrict speech. If they were going to censor content, they would be treated like content creators and made subject to the liabilities that content creators face. The order went on to refer to Section 230(c) of the Communications Decency Act, 1996, from which intermediaries derive their immunity from prosecution, stating that the provision was not intended to give platforms the freedom to silence viewpoints that they disliked.

Let me state upfront that I don’t believe this interpretation is entirely correct. While sub-section (1) of Section 230(c) does say intermediaries will not be liable for content posted by users, sub-section (2) was specifically designed to allow “Good Samaritan” moderation of online content. Even in the early days of the internet, it was clear that regulators would not be able to moderate content without the assistance of private platforms. Sub-section (2) was supposed to make this possible by giving intermediaries immunity from liability for actions they took in good faith to restrict access to unlawful material. It was believed that with this immunity, internet platforms would have the assurance they needed to moderate the content that flowed through their pipes.

As a matter of fact, things did not exactly work out as intended. Despite the broad protection from liability that sub-section (2) gave them, most internet companies chose to rely on sub-section (1) of that section, setting themselves up to operate as passive publishers of content. In several instances, websites have used this publishers’ immunity to establish businesses which, for all intents and purposes, actively encourage the posting of unlawful content. As a result, instances of hate speech, cyber-bullying, defamation and abuse have proliferated online.

Around the world, intermediary liability regimes have largely ignored the Good Samaritan approach that the original US law pointed to. In India, Section 79 of the Information Technology Act, 2000, offers intermediaries immunity from liability if they have neither initiated nor interfered with the transmission of a message. Not only does the section make no mention of good-faith moderation, it implies that an intermediary that tampers with the transmission of content forfeits that immunity.

Little wonder, therefore, that intermediary liability jurisprudence in India has moved in an entirely different direction. Rather than encouraging intermediaries to moderate content in good faith, the judgment in Shreya Singhal v. Union of India made it clear that internet companies had no obligation to take down content unless expressly instructed to do so by a court order. While this meant internet companies could no longer be arm-twisted into taking down content, it offered no protection for good-faith take-downs of unlawful content.

The events of the past week make it clear that the notion of intermediary liability is likely to undergo a rethink. The executive order by the Trump administration called on the Federal Communications Commission in the United States to review the interaction between the various sub-sections of Section 230(c) with a view to ensuring that those engaging in censorship were not able to avail of protections granted to publishers.

In the meantime, the Indian government is about to push through new intermediary guidelines that would require internet companies to deploy artificial intelligence tools to identify and filter illegal content. In both instances, Good Samaritan protections for good-faith moderation seem to have been given a miss.

While a review of intermediary liability was perhaps unavoidable, I don’t believe the experience of the last two-and-a-half decades is grounds enough to discard the concept of Good Samaritan protections entirely. In a recent paper on Section 230 reform, Danielle Citron and Mary Anne Franks suggest that if we draft these provisions more explicitly, we might achieve better results. For instance, rather than merely offering protection for Good Samaritan actions, the law should target Bad Samaritans for punishment, going after those who permit the publication of unlawful content. They also suggest imposing a reasonable standard of care, so that we can reduce instances of abuse while still allowing the internet to flourish.

The Indian government would do well to consider these suggestions in the new intermediary guidelines that are currently being planned. After all, forcing intermediaries to use artificial intelligence tools for the moderation of internet content without giving them any good faith protections is unlikely to end well.

Rahul Matthan is a partner at Trilegal and also has a podcast by the name Ex Machina. His Twitter handle is @matthan

Published: 02 Jun 2020, 10:49 PM IST