An effective way to squash foul content on social media

A couple of months ago, Spotify chief executive officer Daniel Ek found himself in a spot. No sooner had he signed on Joe Rogan, a wildly popular podcast host (allegedly paying over $100 million), than his employees were up in arms over transphobic comments made on his show. There was no question that the episode in question was offensive to the LGBTQ+ community, but Ek was worried about the free-speech implications of censoring it.

In the pre-digital world, content was only distributed by companies that reviewed every last word before it was made available to the public. As they were liable for everything they put out, they employed large editorial teams to balance their need to report news with considerations of accuracy, decency and the law.

Digital platforms on the other hand never had to worry about oversight. From the early days of their existence, they were shielded from prosecution by intermediary liability protections. As a result, they focussed on making it as easy as possible for content to flow from producers to consumers—only taking it down later if someone complained.

As digital outlets became mainstream, the unfiltered anarchy that these platforms spawned began to reveal its dark side. The content we were getting exposed to was more offensive and deeply divisive than anything we had experienced before. It soon became clear that the lack of moderation in the digital environment was bringing out the worst in people, and giving the worst sorts of people a stage they would otherwise never have had.

Facebook’s response to this has been to establish an independent oversight board, to which appeals against the decisions of Facebook’s moderation teams can be referred. This approach keeps in place the tiered (algorithmic + human) content moderation systems that Facebook currently employs, but adds on a layer of redressal to deal with edge cases.
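To make that structure concrete, the sketch below shows how a tiered pipeline with an appeals layer might be wired up. It is a minimal illustration only: the class names, thresholds and scoring function are assumptions for exposition, not a description of Facebook's actual system.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    text: str

@dataclass
class Decision:
    post_id: str
    action: str       # "keep", "remove" or "escalate"
    decided_by: str   # "algorithm", "human" or "oversight_board"

@dataclass
class ModerationPipeline:
    appeals_queue: List[Decision] = field(default_factory=list)

    def algorithmic_score(self, post: Post) -> float:
        # Stand-in for a trained classifier's probability that the post
        # violates community standards.
        return 0.0

    def moderate(self, post: Post) -> Decision:
        score = self.algorithmic_score(post)
        if score > 0.95:   # confident violation: act automatically
            return Decision(post.post_id, "remove", "algorithm")
        if score > 0.60:   # borderline: hand over to human reviewers
            return Decision(post.post_id, "escalate", "human")
        return Decision(post.post_id, "keep", "algorithm")

    def appeal(self, decision: Decision) -> None:
        # The oversight layer hears only a small fraction of these, and
        # only after the original decision has already taken effect.
        self.appeals_queue.append(decision)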

As interesting as this is as a governance solution for the internet age, it is not without its shortcomings. In the first place, Facebook’s oversight board only comes into play after the fact. By the time a decision is referred to it, various parties will already have suffered as a consequence of Facebook’s moderation decision. Secondly, the board is limited by the number of cases it can actually review. No matter how many appeals it manages to hear, they will still represent only a tiny fraction of all the moderation decisions that parties are dissatisfied with. Finally, as much as the board’s members have been selected with a view to making it regionally representative, it is impossible for an essentially international body interpreting a single set of community standards to properly address regional concerns.

This is at the heart of the problem. Traditional media companies have always respected the inherent diversity of global values and legal norms by developing regional strategies for distribution. Books were simply not shipped to countries in which they were banned. Films abided by the decisions of censor boards in each of the countries in which they were distributed, making the specific cuts required by local regulators before they were shown in local theatres or aired on television.

Digital platforms, emboldened by internet exceptionalism, have simply ignored these variances, attempting to uniformly apply community standards to all their moderation decisions. Granted, these standards are based on the liberal values to which all modern democracies aspire, but even so, platforms have struggled to strike a balance between taking down offensive content and protecting their users’ rights to free speech.

This is the quandary that Ek found himself in when faced with an internal revolt over Rogan’s unabashed transphobia. It is what all users face whenever they try to get social media firms to take down offensive material.

Most of us believe that it should be easy for digital platforms to implement more effective technical solutions. After all, they have demonstrated that their algorithms can deliver content that is narrowly targeted at the users who would most appreciate it. If they can do this with such fine-grained precision, surely they can infer what content will be offensive to whom and take it down before it does any damage.

When we look for solutions, we tend to think in absolute terms. We want offensive content to be completely expunged from digital platforms so that it has no chance of infecting our minds with its bile. But it is impossible to implement an absolute solution like this without curtailing some of the rights of those who posted the content. And while it is true that all speech is subject to reasonable restriction, digital platforms have neither the ability nor the legal authority to determine where that line should be drawn.

What if we took a less binary approach? Social media companies have powerful amplification tools that they use to promote popular content. What if we insisted that they use these tools in reverse, so that instead of amplifying provocative content, they dampened its virality? Instead of promoting this content, they would use their tools to aggressively prevent offensive content from trending online.
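One way to picture such "reverse amplification" is sketched below: rather than removing a post, its ranking score is dampened in proportion to how offensive a classifier believes it to be, so it stays up but struggles to trend. The formula, threshold and parameter names are illustrative assumptions, not any platform's actual ranking logic.

def ranked_score(engagement_score: float, offense_probability: float,
                 dampening_strength: float = 5.0,
                 threshold: float = 0.7) -> float:
    """Return the score used to order a post in a feed."""
    if offense_probability < threshold:
        return engagement_score                 # ordinary ranking
    # Above the threshold, divide the score by a growing penalty,
    # suppressing virality without taking the content down.
    penalty = 1.0 + dampening_strength * (offense_probability - threshold)
    return engagement_score / penalty

# Example: a highly engaging but likely offensive post gets pushed well
# down the feed instead of being deleted.
print(ranked_score(engagement_score=1000.0, offense_probability=0.9))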

As long as they don’t take down content, they can’t be accused of violating freedom of speech. And, as we all know, any content that is hard to find might as well not exist.
