
Policing the unruly mob on the Internet

Big social networks should stop making excuses about being hotbeds of abuse and learn the art of control from smaller communities

In my early 20s, I was booted off an online community that had become an integral part of my social life. PaGaLGuY had started off as a network for Indian B-school students and aspirants, but the discussions had expanded beyond education, to topics such as cricketer Sachin Tendulkar being overrated and the death penalty. There was a core group of a few hundred users who weighed in on almost every discussion, and after a month of regular use, I knew each of them; I knew what their general world views were and could predict what their stance would be on a topic.

Creating multiple accounts was what I was eventually banned for. It was an inaccurate charge, but it was also most likely not the real reason for my banishment. See, I was an arrogant young adult, who had taken it upon myself to rid PaGaLGuY of its ingratiating piety. The moderators spotted this puerile streak early on, but I did not take kindly to being told I should use the word posterior instead of ass, or that I should not have gone on to the thread for inspirational quotes and posted, “Sometimes when there’s too much to do, it’s best to do nothing at all."

The mods were a bunch of smug goody-goodies who were turning a community of vibrant ideas into a marsh of conformism. I had to save PaGaLGuY, and so, backed by a few members who either believed, like me, that they were on a crusade, or were simply habitual troublemakers, I began to fight the mods, argue with them, even insult them, refusing to be cowed by their wagging fingers. And so they took their fingers, pressed a button and I was out on my… posterior.

I was angry, but I was also hurt and embarrassed. PaGaLGuY had felt like a real community. I knew who had been “liking" my posts and who had been “groaning" at them. People had told me what they thought of my writing and chatted with me on the “shout box". And now this community had made clear that it didn’t want me.

Twitter does not feel like a real social setting. Nor do Facebook or Instagram, or any of the other massive social networks that now dominate traffic on the Internet. These feel more like abstract worlds where nobody knows anyone. There are no moderators watching you, no core of regulars whose opinions you care about, no senior users you can look up to. So, if you’re not an unwavering optimist about human behaviour, you should not be surprised at how dire the statistics surrounding online abuse have become.

A 2014 survey by the US-based Pew Research Center showed that more than 25% of Internet users have been the recipients of some form of online harassment. Among young women, the number went up to 50%. Death threats, sexual abuse, stalking, shaming—these are all becoming ubiquitous in online communities.

Maneka Gandhi, Union minister for women and child development, has announced the creation of a special cell to deal with online abuse. But when she asked the National Commission for Women to monitor cases of cyber bullying, its chief, Lalitha Kumaramangalam, said, “You can’t police the Net. It is an open space...like a galaxy almost. There are billions of Twitter accounts and no organization can keep an eye on Twitter."

There seems to be an acceptance that online abuse is a problem too large to fix, and the big social networks hide behind that excuse. Smaller communities on the Web, however, have been applying modes of moderation that have worked effectively for years.

Josh Millard is the head moderator on MetaFilter, a community weblog with around 12,000 active members that started in 1999. MetaFilter has a hands-on approach to moderation. It has a group of paid moderators, and one is watching the site at all times, interacting with users regularly and responding to queries and complaints within minutes.

“Moderation is not something we do as a last resort when something goes really bad. If there’s something a little weird with a conversation, people can let us know. Then we can go and actively watch that discussion and try to gently nudge things back towards civility," Millard says. On many of the bigger social networking sites, a report of abuse often goes off into the ether, and it is days before you get a response, if ever. And the response is always in impersonal language, usually containing the words “terms of service".

MetaFilter’s mods are all active members of the community. They talk to users and discuss why certain types of comments are disallowed rather than simply banning members without an explanation.

Just letting people know specifically what is undesirable about their behaviour can temper a lot of the vitriol that laces online communities. In 2012, Riot Games, developers of the famous online game League Of Legends, decided to try to curb some of the abuse players were hurling at each other. Just by telling users why they were being banned, Riot Games managed to improve player behaviour significantly.

MetaFilter’s moderators go to the extent of telling users how they can frame their thoughts in a way that is not offensive. “People get angry when they feel they’re being stopped from saying what they want to say how they want to say it, but in most cases you can work with a moderator to say the thing you want to say in a way that’s less problematic," Millard says.

Like MetaFilter, edited social networking news site Fark.com has a staff of moderators, and its founder, Drew Curtis, says it’s a mystery to him why bigger social networks haven’t adopted a similar model. It’s not that hard to get rid of online abuse, Curtis insists. “No one’s willing to step up and make hard choices," he says. “When we banned misogyny, we were warned it would be a huge task. Turns out, not only was it easy, but the quality of Fark’s community improved dramatically and immediately."

Facebook says it receives about one million reports of abusive content every week. That is a scale unlike anything MetaFilter or Fark have had to deal with, and it is not easy for these behemoths to instil anything remotely like a feeling of community. Facebook and several other social networks deal with the problem by outsourcing most of the job to moderation companies, which are often based in countries where labour is cheap, such as India and the Philippines. Employees of these moderation companies are often poorly paid and have to view streams of graphic content every day. Many burn out quickly, and some even develop post-traumatic stress disorder.

A Facebook spokesperson told us, “Nothing is more important to us than the safety of the people on Facebook. We have zero tolerance for impersonation, hate speech, bullying and harassment, and encourage people to use the reporting tools available on every page so we can investigate and take action." At the moment, however, the idea that the Internet is a space where anything goes has become so accepted that it will take a proactive attempt to even try and change the culture of communities such as Facebook.

Reddit, which gets more than 200 million unique users every month, asks members to volunteer for moderation. Volunteer mods take responsibility for specific subreddits and make decisions on what is allowed in those sections. The 19 mods of the India subreddit refused to speak on this subject. They did not provide a reason, but you only have to look at the abuse Reddit mods have got in the past to guess what their apprehensions might have been. Back in 2012, when Hindustan Times ran an article that simply explained how Reddit was moderated, users hurled insults at the mods for having publicized what they saw as their domain.

A lot of Reddit users seem to believe that moderating abuse is an impediment to free speech. When five subreddits were banned last year, users declared war against the mods. Some of the banned subreddits had been removed as they encouraged fat-shaming, and angry users reacted by launching into tirades about obesity.

The problem with the distributed form of moderation, Millard says, is that you can’t guarantee you are getting people with the right temperament for moderation. Also, people are less likely to respect or trust someone who is volunteering to moderate a site than someone whose full-time job it is to do so.

MetaFilter has a $5 (around ₹335) fee for signing up. This is an immediate deterrent to anyone who wants to come to the site just to wreak havoc. Plus it stops users from creating multiple accounts when they are booted off the site. It’s a simple device, but it does something that the likes of Facebook, Twitter and Reddit would be unlikely to accept: It slows growth. MetaFilter is fine with that. In fact, Millard says, slow growth helps the mods. “You have a sort of self-policing environment," he says. “Often, users will nudge new users and advise them on how things are done. That’s a challenge if a community grows fast."

In the race to be the biggest, the major social networks have created an unnatural situation where people are not only granted the freedom and lack of inhibition that online interaction affords, but also feel entitled not to be censored at all. It has resulted in a situation where monitoring abuse would require a massive investment in thousands of skilled moderators who can not only delete abusive posts but communicate with users about why certain behaviour is unacceptable and try to create a culture of civility.
