IT IS ILLEGAL for Americans to export weapons without a licence. You may not FedEx a ballistic missile to Europe or post a frigate to Asia. But in the 1990s the country’s labyrinthine arms-export controls covered something more unusual: cryptographic software that could make messages unreadable to anyone other than the intended recipients. When American programmers built tools that could encrypt a newfangled form of message, the email, their government investigated them as illegal arms dealers. The result was Kafkaesque. In 1996 a court ruled that “Applied Cryptography”, a popular textbook, could be exported—but deemed an accompanying disk to be an export-controlled munition.
All that would later change. These “crypto wars” were won by the proponents of privacy and civil rights. End-to-end encryption has conquered the world, despite dogged efforts to ban or limit its use. Today civilians enjoy access to powerful encryption tools that would rival military cipher machines of the cold war. Secure messaging apps are used by soldiers in Ukraine—on both sides—and by teenagers swapping photos. Law-enforcement agencies argue that ubiquitous encryption has made it more difficult to detect and counter criminal activity, and that privacy should be weighed against public harm. Pro-encryption advocates retort that people have a fundamental right to private communication, and that secret backdoors in their apps and devices could be exploited by malefactors. The result is an intensifying battle involving governments, tech giants and civil-rights groups.
Although these tussles are not new—they began in earnest when a new form of cryptography appeared in the 1970s—they have entered a new stage. A decade ago more than half of email traffic and web browsing was unencrypted, which meant that anyone hoovering up that data—intelligence agencies or criminals—could read it. Many phone messages were sent via SMS, an insecure protocol. Now the vast majority of traffic is encrypted. In 2012 the number of daily messages sent on WhatsApp, an app now owned by Meta, overtook those sent by SMS. Today about 2.5bn people, nearly a third of the world’s population, use the service (see chart). Apple’s secure iMessage system has more than 1bn active users. A milestone was passed in December 2023 when Facebook Messenger, also run by Meta, with another 1bn users, introduced encryption by default.
The question is whether this is an unassailable trend or the high-water mark of encryption. On August 24th France arrested Pavel Durov, the CEO of Telegram, a Russian messaging app, on charges that included failing to provide intercepted messages on demand and supplying “cryptographic services” without approval. But Telegram, which denies wrongdoing, is more a social network than a secure communication app—messages are not encrypted by default and experts are scornful of its standard of security. Mr Durov would have been able to hand over plenty of data to the authorities if he had been so inclined. In most cases WhatsApp, iMessage and Signal, widely regarded as the gold standard among cryptographers, cannot hand over content even if ordered to do so.
Governments have been particularly exercised by Facebook’s move. The site was the last major repository of unencrypted and readable messages. As such it was long responsible for a large proportion of the child-sex-abuse images referred to authorities by tech companies. Once messages containing those images were encrypted, they became largely invisible to both Facebook and the authorities. In April a coalition of 15 law-enforcement agencies including America’s FBI and Interpol, an inter-governmental organisation, said that tech firms like Meta were “blindfolding themselves” to child-sex-abuse images. “Where the child-user base and risk is high,” they argued, “a proportionate investment and implementation of technically feasible safety solutions is paramount.”
The debate is largely over whether such solutions exist. Many authoritarian countries either ban or heavily restrict encryption. In most democracies the question is whether it can be tempered. In 2018 and again in 2022, Ian Levy and Crispin Robinson, both then senior members of GCHQ, Britain’s signals-intelligence service, published a pair of articles making the case for two approaches. The first was a “ghost protocol” in which, they suggested, messenger apps could insert a government wiretapper as a secret participant in particular chats or calls, while suppressing the notification to users that someone had joined. This would be “no more intrusive than the virtual crocodile clips” long used in traditional wiretaps, they argued.
The second proposal was a form of “client-side scanning”, whose purpose is to skirt around encryption rather than attack it directly. If a user is to view their data, it has to be decrypted at some point. In this window it can be automatically checked against a stored library of illegal material while still on the device. Both the content and the library would be compared as “hashes”, or unique digital fingerprints, rather than comparing image with image. “We’ve found no reason as to why client-side scanning techniques cannot be implemented safely in many of the situations society will encounter,” argued Mr Levy and Mr Robinson. In 2021 Apple said it would implement such a system on iPhones, but then quietly backtracked.
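The hash-comparison idea at the heart of client-side scanning can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's actual system: the digest values and function names are invented, and real deployments (such as Apple's proposed scheme) use perceptual hashes that survive resizing and re-encoding, whereas the plain SHA-256 used here matches only byte-identical files.

```python
import hashlib

# Hypothetical library of fingerprints of known illegal material.
# (This digest is simply the SHA-256 of the bytes b"test", for illustration.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return a unique digital fingerprint of the content."""
    return hashlib.sha256(data).hexdigest()

def scan_on_device(data: bytes) -> bool:
    """Check content against the library on the device, in the window
    before it is encrypted and sent (or after it is decrypted to view)."""
    return fingerprint(data) in KNOWN_HASHES

print(scan_on_device(b"test"))   # matches the library entry above
print(scan_on_device(b"hello"))  # not in the library
```

The point of comparing fingerprints rather than images is that neither the device nor the authorities need hold or transmit the pictures themselves—only the opaque digests.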
Many governments want technology companies to do more to explore such options. “A lot of these companies have dug themselves into a black and white, binary position,” says Rick Jones of Britain’s National Crime Agency. He acknowledges that privacy is important and that people need to communicate securely, but insists that solutions could be developed that would both preserve trust and protect children. “I’m not certain that we need to go all the way to having every platform that children use in their homes and bedrooms having a similar level of weapons-grade encryption. Why does a 13-year-old need that level of encryption?”
The Online Safety Act passed in Britain last year requires messaging platforms to use “accredited technology” to identify illegal content if it is deemed “necessary and proportionate” by Ofcom, a regulator. But this is largely symbolic: no such technology has been accredited. Others have gone much further. The European Union has proposed Chat Control 2.0, a client-side scheme that would compel email and messaging platforms not only to scan against a library of known child-sex-abuse material but also to use artificial intelligence to flag other potentially illegal content for human review. And in August Sweden’s justice minister mooted blocking encrypted messaging apps to curb a surge in violent crime by gangs that use them to organise.
In India the government has demanded that messaging apps implement “traceability” by identifying the “originator” of messages—for instance, someone who starts a rumour—by including a “hash” of the message and author that can be tracked over time. The result has been a stand-off with WhatsApp, which says that the scheme would put encryption at risk by forcing the service to maintain large databases of personal messages, the content of which would be easier to decipher later. In April WhatsApp said that it would leave India if the courts insisted on traceability.
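A rough sketch shows how such a traceability tag might work, and why WhatsApp objects. Everything here is a hypothetical construction for illustration—India's proposal does not specify this exact design—but the essential property is that the same message from the same author always produces the same tag.

```python
import hashlib

def trace_tag(message: str, author_id: str) -> str:
    """A hypothetical traceability tag: a hash binding a message to its
    originator, which the platform would be obliged to log for every message."""
    return hashlib.sha256(f"{author_id}:{message}".encode()).hexdigest()

# A rumour forwarded verbatim carries the originator's tag with it...
database = {trace_tag("the rumour", "user-42"): "user-42"}

# ...so a later copy of the same text can be mapped back to its origin.
originator = database[trace_tag("the rumour", "user-42")]
print(originator)  # the account that first sent the message
```

The objection follows directly from the determinism: because the tag is a pure function of the message text, anyone holding the database can test guesses against short or predictable messages, gradually recovering content the encryption was supposed to protect.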
Mr Jones argues that tech companies, with a few exceptions he declines to name, have shied away from even considering the trade-offs. “What we’ve got is companies refusing to come to the table and even discuss it…I don’t think that is an acceptable position for them to adopt.”
The most prominent experts in the field, however, maintain that any tinkering with end-to-end encryption is unworkable at best and dangerous at worst. In “Bugs in Our Pockets”, a paper published in 2021, a group of 14 experts, including Whitfield Diffie and Ronald Rivest, a pair of cryptographers who in the 1970s laid the ground for the methods of encryption in widespread use today, set out a detailed case against client-side scanning.
One issue is how the scanning algorithm would distinguish an innocuous family bath photo from an illegal one. If the result was a flood of false positives, then moderators would end up having to view vast amounts of private data. Another objection is that such surveillance could become a slippery slope: a government that begins by scanning for child-sex-abuse images could repurpose the same software for a wider range of content. If the system relies on a central database of illegal content, perhaps one held by an international organisation, hackers or spies could covertly expand that list to search for other secrets.
Above all, the principle of an onboard surveillance tool inside every device carried by every person is at odds with the traditional principle that surveillance ought to be difficult—the cost of a single wiretap in America in 2020 was around $119,000, the paper’s authors pointed out. The “bulk scanning of everyone’s private data, all the time”, they warned, would undermine citizens’ trust in their devices, with a chilling effect on free speech and democracy.
Some critics argue that instead of scanning messages at scale, governments should take a more selective approach. Why not just hack the devices of suspected criminals rather than sift through everything? The answer, say security officials, is threefold. The first is that hacking phones and computers is difficult and resource-intensive—and becoming more so over time as an increasing proportion of data is encrypted not just while it is being sent but also when it is “at rest” (on the device) and “in use”. The second is that it is hard to know which devices and which content to target in the first place if everything is encrypted. The third, say insiders, is that hacking is ultimately more intrusive than passive scanning. “The irony”, says a former official, “is that what privacy campaigners are doing is driving more intrusive means…We’ll have to go back to bugging people’s laptops.”
In a speech in 2021, Ciaran Martin, a former GCHQ official, acknowledged the chasm separating two groups of people. On one side were officials, like his former colleagues, who wanted to balance governments’ right of lawful intercept with the wider benefits of end-to-end encryption—whether through developing ghost protocols, client-side scanning or other schemes, many of which have their roots in the first crypto wars. On the other were legions of cryptographers who argued that such tools could introduce fatal vulnerabilities to the security of encryption. Hoping they would not was “the digital-age equivalent of alchemy”. Mr Martin himself concluded that if no technical compromise could be found, “Then security must win and end-to-end encryption must continue and expand, legally unfettered, for the betterment of our digital homeland.”