Following the New Zealand shooting, countries around the world scrambled to put in place regulations to address this latest challenge of the internet age (iStock)

Opinion | Of content filters and personal privacy

It seems that governments around the world have come to the conclusion that internet platforms must be made liable for the content that flows through their pipes, and that unless strict liability is imposed, it will be impossible to control the rapid proliferation of viral content

As abhorrent as the Christchurch massacre was, what made it even more horrific was the fact that it was live-streamed on social media. Thanks to smartphones and high-speed cellular internet, it has become trivial for anyone to broadcast live video from anywhere. And while the vast majority of us use this to share live sporting or entertainment events with our followers, the shooting in New Zealand was a grim reminder of how technology can be misused.

Immediately following the shooting, countries around the world scrambled to put in place regulations to address this latest challenge of the internet age. First out of the blocks was Australia which, with its enactment of the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act, imposed fines of up to 10% of turnover on any internet service that fails to expeditiously remove abhorrent violent material from its service. A White Paper published by the UK government last month recommended the enactment of a new statutory duty of care that would force internet companies to take more responsibility for the safety of their users. And Germany’s hate speech law, the Network Enforcement Act (NetzDG) passed in 2017, requires all “obviously illegal” content to be deleted within 24 hours.

It seems that, all of a sudden, governments around the world have independently but simultaneously come to the conclusion that internet platforms must be made liable for the content that flows through their pipes, and that unless strict liability is imposed on those who control the gateways, it will be impossible to control the rapid proliferation of viral content. As reasonable as this conclusion might seem, legislative amendments such as these will take us down a path diametrically opposed to the one we have been on so far. They will put in place a value system fundamentally different from the one on which much of the modern internet was built, a system under which internet service providers enjoyed immunity from prosecution for the content flowing through their networks. It is important that we understand the consequences this could have on the way the internet currently works.

The immediate fallout of imposing strict penalties and turnover-based fines for inappropriate content is that the large internet platforms will re-tune their filters to remove content far more aggressively than they currently do, erring on the side of caution so that anything with even the slightest chance of getting them into trouble is taken down before it results in liability. Filters that previously tolerated false negatives (letting some violating content slip through) will be recalibrated to generate false positives instead, blocking otherwise allowable content rather than run the risk of letting marginal content through.
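To make that trade-off concrete, here is a minimal sketch in Python, using made-up scores and labels rather than any platform’s actual moderation system, of how lowering a takedown threshold converts missed violations (false negatives) into wrongful removals (false positives).

```python
# Hypothetical (score, is_actually_violating) pairs from an imaginary classifier.
items = [
    (0.95, True),   # clear-cut abhorrent content
    (0.80, True),
    (0.65, False),  # borderline: news footage, satire, documentary material
    (0.55, True),
    (0.40, False),
    (0.10, False),  # plainly innocuous
]

def takedown_stats(threshold):
    """Count wrongful removals and missed violations at a given threshold."""
    false_positives = sum(1 for score, bad in items if score >= threshold and not bad)
    false_negatives = sum(1 for score, bad in items if score < threshold and bad)
    return false_positives, false_negatives

# Without liability, a platform can afford to act only on near-certain cases:
print(takedown_stats(0.90))  # (0, 2): no lawful content removed, two violations missed
# Facing turnover-based fines, the rational threshold drops sharply:
print(takedown_stats(0.50))  # (1, 0): every violation caught, lawful content censored too
```

The numbers are invented, but the direction of the incentive is the point: once the cost of a miss dwarfs the cost of an over-removal, the threshold only moves one way.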

We’ve already seen the chilling effect that overly sensitive automated content filters can have on speech. In the early days of the internet—before internet service providers were shielded from liability—crude filters were used to automatically take down content that might have got ISPs into trouble. These filters erred on the side of caution and the early internet was replete with instances where artistic masterpieces and scholarly texts from medical journals were censored because filters classified their content as pornographic.
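That failure mode is easy to reproduce even today. The sketch below uses a purely illustrative word list, not any real ISP’s blocklist, to show how naive substring matching flags scholarly and everyday text as pornographic.

```python
# An illustrative word list; real early-internet blocklists were longer but just as crude.
BLOCKED_WORDS = ["breast", "sex"]

def is_blocked(text):
    """Crude filter: block any text containing a listed word as a substring."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_WORDS)

print(is_blocked("Advances in breast cancer screening"))     # True: medical text censored
print(is_blocked("Planning notice from Middlesex council"))  # True: substring accident
print(is_blocked("A report on local weather patterns"))      # False
```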

Even if we account for improvements in contextual Artificial Intelligence (AI), the aggressive punishments these new regulations impose leave internet companies with no option but to calibrate their systems to be safe rather than sorry. This will inevitably move us into a world where it is the internet companies that get to decide whether content is offensive, and since these platforms operate globally, their filters will be designed to meet the standards of the least permissive jurisdiction they operate in. Whether we like it or not, the entire planet will have to adhere to the standards set by its strictest country.

All this might have been worth it if we could be sure that imposing stringent obligations on the guardians of our internet gateways will get us what we want. On the contrary, history has shown that illegal content finds a way to thrive despite the best legal and programmatic attempts to eradicate it. When the Chinese government banned the #MeToo hashtag to keep the movement out of the country, activists began instead to use #Mitu (a Chinese homophone for MeToo that means “Rice Bunny”) to bypass the filters. Even today, criminals trading in drugs have their own special hashtags and innocuous search phrases that customers can use to access their products on popular social media services.
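A toy example shows why. Assuming a platform enforces an exact-match hashtag blocklist (the tags below are illustrative, not any service’s real list), the filter catches the banned tag verbatim but is blind to homophones and coded substitutes.

```python
# Hypothetical exact-match blocklist of banned hashtags.
BANNED_HASHTAGS = {"#metoo"}

def allowed(post):
    """Reject a post only if it carries a banned hashtag verbatim."""
    tags = {word.lower() for word in post.split() if word.startswith("#")}
    return tags.isdisjoint(BANNED_HASHTAGS)

print(allowed("speaking out today #MeToo"))      # False: exact match is caught
print(allowed("speaking out today #RiceBunny"))  # True: the coded phrase passes
print(allowed("speaking out today #Mitu"))       # True: so does the homophone
```

Each time the blocklist is widened to cover a new substitute, the community simply coins another, leaving the censor perpetually one step behind.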

By far the most worrisome implication of this new regulatory trend is its impact on personal privacy. If there is one thing we have learnt over the last couple of years, it is that the large internet platforms know more about us than we are comfortable with. Now that these new content regulations have come into force, tech companies are legally obliged to scrutinize the content we publish on their networks even more closely, as that is the only way they can identify illegal content fast enough to take it down. This gives them the legal authority to intrude into our personal space more thoroughly than ever before.

As much as I can appreciate the pressure that governments are under to deal with abhorrent content, I wonder whether the solution they are proposing is worth the consequential impact on our privacy.

Rahul Matthan is a partner at Trilegal and author of ‘Privacy 3.0: Unlocking Our Data Driven Future’
