AI whistleblowers are getting a chance to speak up: The world must listen

Tech companies should do more by making it easier for whistleblowers to sound the alarm.



Here’s an AI advancement that should benefit all of us: It’s getting easier for builders of AI to warn the world about the harms their algorithms can cause—from spreading misinformation and taking over jobs to hallucinating and providing a new form of surveillance. 

But who can would-be whistleblowers turn to? A welcome shift towards better oversight is underway, thanks to changes in compensation policies, renewed momentum among engineers to speak out, and the growing clout of a British government-backed safety group.

The financial changes are the most consequential. AI workers face the ultimate rich-world problem: they stand to make seven or eight figures in stock options if they stay at the right company for several years, and if they also keep quiet about its problems when they leave.

Get caught speaking out, according to recent reporting by Vox, and they lose the chance to become millionaires. That has kept many of them silent, according to an open letter published this month by 13 former OpenAI and Google DeepMind employees, six of whom have remained anonymous.

OpenAI’s response to such complaints has been encouraging. It not only apologized, but said it would free most of its past employees from its non-disparagement requirements. Daniel Kokotajlo, a former OpenAI employee who admirably refused to sign the gag order and stood to lose $1.7 million (the majority of his net worth, as per The New York Times), will now be able to liquidate his shares and get that money, according to his lawyer, Lawrence Lessig.


The heartening development here isn’t that already-well-paid AI scientists are getting more money or protecting their lucrative careers, but that a powerful motivator for keeping silent is no more, at least at OpenAI. Lessig, who met with more than half a dozen former OpenAI employees earlier this year to hammer out a series of pledges that AI-building companies should make, wants at least one AI firm to agree to all of them.

That’s probably a tall order. But decoupling non-disparagement agreements from compensation packages is clearly a promising first step, and one that other Big Tech companies, which employ more than 33,000 AI-focused workers today, should follow if they don’t have such a policy in place already. Encouragingly, a spokeswoman for OpenAI rival Anthropic says the company does not have such controversial gag orders in place.

Those companies should do more by making it easier for whistleblowers to sound the alarm. OpenAI has said it has a ‘hotline’ available to its engineers, but that doesn’t mean much when the line only goes to company bosses.

A better setup would be an online portal through which AI engineers can submit concerns to both their bosses and people outside the company who have the technical expertise to evaluate risks. Absent any official AI regulators, who should be that third party? There are, of course, existing watchdogs like the US Federal Trade Commission and Department of Justice, but another option is Britain’s AI Safety Institute (AISI).

Bankrolled by the UK government, it’s the world’s only state-backed entity that has managed to secure agreements from eight of the world’s leading tech companies, including Alphabet’s Google, Microsoft Corp and OpenAI, to safety test their AI models before and after they’re deployed to the public.


That makes Britain’s AISI the closest equivalent to weapons inspectors in the fast-moving field. So far, it has tested five AI models from several leading firms for national-security risks.

The organization has 30 staff members and is in the process of setting up an office in San Francisco. It pays some senior researchers around £135,000 (about $170,000) a year, according to its latest jobs listings, far less than what a roughly equivalent role at Google’s headquarters in Mountain View, California, would pay (over $1 million in total compensation). Even so, the organization has managed to hire former directors of OpenAI and Google DeepMind.

It might seem awkward for Silicon Valley engineers to reach out to an organization overseas, but there’s no denying that the algorithms they’re fashioning have global reach. In the short term, the UK acts as a handy midpoint between the US and Europe, or even the US and China, to mediate concerns.

The mechanisms for whistleblowing in AI still have some way to go, but speaking up is at least a more viable option in the field than ever before. That is a cause for celebration and will hopefully generate momentum for others to speak up too. ©Bloomberg

