Systems thinking can fix content moderation problems

Summary

Keeping what goes online in check requires a systemic rather than case-by-case approach

Content moderation is one of the more vexatious problems of modern data governance. Given the sheer volume of content generated online, it is virtually impossible to monitor everything that is said. And since users come from across the spectrum of humanity, they have a wide diversity of cultural, social and political beliefs. All this makes it extremely difficult to assess what might cause offence and to whom. Since content moderators have to strike a balance between removing illegal content and upholding users’ rights to freedom of speech, figuring out what to do is often a tightrope walk fraught with tension.

Internet platforms have tried to address the problem by building a range of systems and processes. They use automated tools to filter out obviously offensive content. Where the algorithm cannot conclusively determine whether a given piece of content is offensive, it escalates the decision to a team of human moderators. These teams, in turn, rely on highly detailed moderation manuals and precedents to decide whether a given item of content should stay up or come down.
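To make the workflow concrete, here is a minimal sketch of such a two-tier pipeline. The classifier, the confidence thresholds and the review queue are hypothetical stand-ins for illustration, not any platform's actual system.

```python
# A minimal sketch of a two-tier moderation pipeline: confident calls are
# handled automatically, ambiguous ones are escalated to human moderators.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def classify(post: Post) -> float:
    """Hypothetical model returning the probability that a post violates policy."""
    return 0.5  # placeholder; in practice this would be a trained classifier

REMOVE_THRESHOLD = 0.95   # confident enough to remove automatically
ALLOW_THRESHOLD = 0.05    # confident enough to leave the post up

human_review_queue: list[Post] = []

def moderate(post: Post) -> str:
    score = classify(post)
    if score >= REMOVE_THRESHOLD:
        return "removed"             # obviously offensive content filtered out
    if score <= ALLOW_THRESHOLD:
        return "allowed"             # clearly acceptable content stays up
    human_review_queue.append(post)  # ambiguous cases go to human moderators
    return "escalated"
```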

But even this is not without flaws. Automated tools are nowhere near as accurate as one might expect them to be by now. As a result, perfectly acceptable content gets taken down more often than it should, while offensive material stays online longer than it should, simply because machines are unable to fully grasp the context and complexity of human expression.

As for human moderators, each of them brings their own biases to the table. As a result, no matter how detailed the content moderation manual might be, many of their decisions lack the consistency and impartiality needed to be fair.

Regulators around the world are struggling to find a solution to the problem. Their approach so far has been to hold companies liable for individual errors, blaming them for taking down content they should have left up, or for failing to remove offensive material as quickly as they should have. This approach, according to Evelyn Douek, is misguided, since most failures in content moderation have less to do with the incident in question and more to do with flaws in the way the moderation system itself is built. Instead of trying to fix the consequences of individual moderation decisions, she argues, we need to focus on the upstream design choices that caused those mistakes in the first place.

To do this, we will need to change the way we currently do things. Instead of trying to scale traditional moderation workflows, she suggests we adopt a "systems thinking" approach. This will necessarily call for structural changes to the organisations responsible for content moderation. They will need to put in place robust "Chinese walls" between those responsible for enforcing moderation rules and other functions such as product development, customer growth and political lobbying. More often than not, content moderation decisions in these companies are hijacked by the commercial considerations of other departments.

Douek also suggests that we shift the focus of complaints away from demanding redress for specific moderation decisions and towards broader systemic solutions that address the underlying flaws in moderation systems. This, she points out, will generate far more effective changes that benefit the wider community of users in a more substantial way.

All this needs to be accompanied by a brand new approach to regulatory supervision. Instead of sitting in judgment over individual moderation decisions, regulators need to take a more systemic view of the problem. They need to first put in place audit mechanisms that can assess whether a platform's overall content management strategy is capable of adequately addressing the moderation challenges it will face. Europe's Digital Services Act, for instance, requires very large platforms to prepare annual risk assessments that identify the systemic risks their services are likely to give rise to and explain how these risks will be addressed, including through their moderation systems.

Instead of focusing on individual instances of questionable moderation, regulators should require platforms to analyse their decisions in the aggregate: for instance, all adverse decisions in a given category of rule violation over a one-year period, rather than each one individually. This sort of approach will have more long-lasting, system-wide benefits that improve future outcomes.
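As an illustration of what aggregate review might look like, here is a minimal sketch that computes reversal rates per rule category from a hypothetical log of moderation decisions, rather than passing judgment on any single post. The log format and field names are assumptions made for the example.

```python
# A minimal sketch of aggregate review over a hypothetical decision log,
# where each entry records a rule category, a date and whether the
# decision was later overturned on appeal.
from collections import defaultdict
from datetime import datetime

decisions = [
    {"category": "hate_speech", "date": datetime(2023, 3, 1), "overturned": True},
    {"category": "hate_speech", "date": datetime(2023, 6, 9), "overturned": False},
    {"category": "spam", "date": datetime(2023, 8, 17), "overturned": False},
]

def reversal_rates_by_category(decisions, year):
    totals = defaultdict(int)
    overturned = defaultdict(int)
    for d in decisions:
        if d["date"].year != year:
            continue
        totals[d["category"]] += 1
        if d["overturned"]:
            overturned[d["category"]] += 1
    # A rate per rule category, rather than a verdict on any individual post
    return {c: overturned[c] / totals[c] for c in totals}

print(reversal_rates_by_category(decisions, 2023))
```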

To be clear, Douek’s approach does have its share of sceptics. Mariela Olivares, in her paper, ‘Of Systems Thinking and Straw Men’, argues that Douek has oversimplified the complexities involved in moderation. Focusing exclusively on systems thinking could, she argues, end up reinforcing existing power structures and biases, instead of offering practical solutions.

That said, this is the sort of thinking we need to adopt if we are to have any hope of finding a viable solution. There is little argument that the current system is broken, and unless we take a different approach, we will struggle to come up with a better one.

As India begins work on the draft Digital India Act, I hope that we will remain open to novel alternative approaches. Rather than focusing our regulatory efforts on how to best adjudicate individual moderation decisions, we need to put in place substantive, long-term solutions that address the systemic problems at the heart of the issue.

Rahul Matthan is a partner at Trilegal and also has a podcast by the name Ex Machina. His Twitter handle is @matthan
