The lawyers had investigated Facebook’s policies, how it moderates content, what it allows to be advertised, the algorithms it has developed and uses, and how it operates during elections. They wrote, “Many in the civil rights community have become disheartened, frustrated and angry after years of engagement where they implored the company to do more to advance equality and fight discrimination, while also safeguarding free expression.”
As the US heads for an exceptionally contentious election this November, amid a pandemic and protests over police brutality and racism, the report has gained urgency.
Given Facebook’s global footprint, it also has wider resonance. Many governments across the globe have recently attempted to curb Facebook and other social media platforms. In India, too, rules to regulate social media are in the works. India is the largest market for Facebook’s WhatsApp, which has been accused of spreading fake news and rumours.
More recently, hundreds of alarmed advertisers have heeded a campaign group’s call to stay away from contentious platforms, suspending advertising on Facebook for July; a few, for the rest of the year. Facebook believes they will return.
Facebook has been accused of allowing hate speech to flourish, but the social media giant says it believes in free speech and acts responsibly. In a speech at Georgetown University last year, Facebook CEO Mark Zuckerberg said he would protect free speech at all costs.
Murphy and Cacace acknowledged that commitment but wrote, “Elevating free expression is a good thing, but it should apply to everyone,” pointing out that politicians like US President Donald Trump get considerably more leeway in what they can say than everyone else, which privileges the voices of the powerful over those without power.
Leaders like Trump pose a particular problem because their remarks often vilify specific groups. Figuring out when speech turns from being critical to hate speech is not an exact science and depends on context, and even seasoned lawyers can disagree about whether specific remarks constitute hate speech.
Balancing the rights of the speaker and the rights of those who say they are affected by the speech is hard enough for governments; expecting a publicly-traded company run by technologists and driven by profit to do it may be unrealistic.
In response to the audit report, Sheryl Sandberg, Facebook’s chief operating officer, said the report showed the “beginning of the journey, not the end,” and admitted that the company has a long way to go. Its critics remain unconvinced.
Free speech vs hate speech
Global concern over Facebook has sharpened since the Cambridge Analytica scandal, in which the UK-based company was accused of manipulating electoral outcomes by gaming Facebook’s algorithms; allegations that Facebook facilitated Russian manipulation of the 2016 US presidential elections; and UN criticism of Facebook for enabling hate speech that led to crimes against humanity targeting the Rohingyas in Myanmar.
To be fair, in recent months, Facebook has hired leading human rights practitioners to guide the company. It has also constituted a 20-member “supreme court” of advisers to arbitrate over disputed content. Facebook has banned advertisements that include hate speech and taken aggressive steps to weed out disinformation. It supports fact-checking groups and flags content it considers inappropriate. A week ago, it took down more than 100 fake accounts linked to Roger Stone, the Republican adviser convicted last year of witness tampering, who will not serve jail time because Trump commuted his sentence this month.
Meanwhile, the power of tech companies has grown enormously and antitrust concerns are mounting. On 27 July, four of the industry’s senior-most executives (Jeff Bezos of Amazon, Tim Cook of Apple, Mark Zuckerberg of Facebook, and Sundar Pichai of Alphabet, which owns Google and YouTube) will appear before the House Judiciary Committee of the US Congress for an antitrust investigation into the companies’ collective market power. On Friday, the Federal Trade Commission said it would seek to depose Zuckerberg and Sandberg over antitrust charges.
Beneath the free speech argument put forth by social media firms is a practical business motive.
When users have unrestrained free speech, they reveal more about themselves, and Facebook’s sophisticated technology can analyse the data to understand their interests, fears, hopes, and desires better. It can then develop algorithms that help other businesses to tailor their advertising and marketing with incredible precision to target consumers, ensuring that the ad dollars are spent more efficiently.
But when people have the right to say anything, they do say anything. As anti-racism demonstrations spread across the US, Trump said on 29 May, “when the looting starts, the shooting starts,” invoking the words of Miami’s police chief in 1967, when the city experienced an outbreak of violent crime. Social media companies let the remarks remain on their platforms.
Zuckerberg later said it was not his job to monitor what politicians say. While Facebook insists it is not a news organisation, it does use its discretion about what can or cannot appear on its platform (as do other platforms). Of course, a news organisation would be failing in its responsibility if it did not report a prominent politician saying something so controversial.
The social media companies say they are mere carriers, relying on the protection of Section 230 of the Communications Decency Act (1996), or the CDA, which absolves them of liability for what is said on their platforms. Described as “the twenty-six words that created the Internet,” the clause says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The Electronic Frontier Foundation, a non-profit organization defending civil liberties in the digital world, considers that provision to be the most valuable tool protecting freedom of expression. It gives companies a legal shield, protecting them from liability over user-generated content. In October 2019, Google’s global head of intellectual property policy, Katherine Oyama, told the US Congress that without the law, tech companies would face lawsuits.
Companies know they are immune from liability if an irate user writes adverse reviews of a restaurant or an Airbnb apartment. Like highways, platforms let traffic flow; like news-stands, they say they only display and sell publications and are not responsible for what is published.
But platforms do act like publishers: they enforce terms of service designed to keep content addictive without driving away other users. They do remove content and suspend or ban individuals, and their terms of service grant the companies wide leeway, though few users bother to read them.
Using those powers, Twitter temporarily banned the conservative activist Candace Owens, who asked her followers to defy Michigan’s stay-at-home directives during the pandemic. It has permanently removed Katie Hopkins, a right-wing British reality TV personality turned columnist known for shrill, fact-free diatribes against refugees and Muslims. And it has acted against liberal voices: in India, it recently blocked some tweets of Aakar Patel, former head of Amnesty International’s India office.
While most users can do little except fret, appeal, or vent against unilateral decisions taken by social media platforms, some users have greater powers. One such user is Trump. After Twitter began posting warnings about the accuracy of some of his tweets, and after Facebook removed, in late June, a Trump ad that used Nazi imagery, Trump threw a tantrum and signed an executive order, ostensibly to uphold free speech, claiming that the companies were restricting Americans’ right to speak freely.
Trump said platforms were biased against Republicans and that he would “strongly regulate, or close them down.” His executive order complained about “selective censorship” by online platforms on “inconsistent, irrational and groundless justifications.” If the order were to become law, it would limit the CDA’s legal protections in order to stop companies from acting in ways that are “unfair and deceptive.”
The order will be challenged in the courts. After all, the First Amendment applies to governments, not to private individuals or companies, since individuals and companies cannot pass laws binding on all Americans. But social media platforms create the illusion of being public spaces while remaining private domains.
Advertisers have lost patience because they think the platforms are not doing enough. Disney, Colgate-Palmolive, Adidas, Ben & Jerry’s, Boeing, Patagonia, Pfizer, Reebok, and Unilever are among the hundreds of companies that support a campaign called Stop Hate for Profit and have suspended advertising on social media. Some companies say they may continue their boycott till the election cycle is over.
That may simply be a prudent economic decision at a time of reduced demand and weakening consumer expenditure. Besides, as David Carroll, who teaches media design at the Parsons School of Design, has argued, the boycott may not amount to much because the vast majority of Facebook’s advertisers are small businesses (ad spend accounts for 98% of the social media giant’s $70-billion revenue).
Of its 8 million advertisers, the top 100 account for only 6% of ad revenue, and nearly three-quarters of Facebook’s revenue comes from small advertisers, who are neither susceptible to boycott calls nor likely to respond to them.
Regulatory change may be on the anvil. A new bill introduced in Congress, called the EARN IT Act, is ostensibly meant to prevent online exploitation of children, and it would require platforms to monitor content. But some human rights groups worry that it would give the government sweeping powers to disregard user privacy and turn the platforms into agents of the state.
If Congress passes any law that removes platform immunity, platforms will impose far more restrictive rules, narrowing freedom of expression even further and making the internet even more arbitrary.
In a report last year, David A. Kaye, who recently completed his term as the UN special rapporteur for freedom of expression, urged the adoption of standards so that governments and companies bring their policies and practices in line with international human rights law. Kaye said that online hate challenges everyone, marginalized individuals in particular.
International law is closer to the Indian Constitution than the US First Amendment—it is more restrictive of free speech in order to protect the rights and reputation of others, national security, public order, health, or morality.
Kaye says the new rules should meet the criteria of legality (i.e. the speech should be unlawful and not merely harmful or offensive), proportionality (i.e. decisions to take down material should be driven by due process and not through automated filters), and legitimacy (i.e. it should not curb dissent).
Companies should conduct periodic due diligence assessments and reviews of human rights impacts, improve remediation in cases where rights are infringed, create transparent and easily accessible mechanisms for people to appeal platform decisions, and make decisions compliant with international standards.
Experience of the last decade has shown that hate speech and lies proliferate on the internet, and the speed with which information spreads is astonishing. In an essay in 1710, Jonathan Swift wrote, “Falsehood flies, and truth comes limping after it.” Clearly, not much has changed in the three centuries since.
Salil Tripathi is a writer based in New York