AI employees fear they aren’t free to voice their concerns

More than a dozen current and former employees of OpenAI, Google’s DeepMind and Anthropic said AI companies need to create reporting channels for employees to safely voice concerns within their companies and to the public. (AP)

Summary

A group of current and former OpenAI and DeepMind employees said they want more whistleblower protections and fear retaliation.

A group of employees in the artificial-intelligence industry said they can’t voice concerns about AI’s threat to humanity because of confidentiality agreements, a lack of whistleblower protections and the fear of retaliation.

In a letter released Tuesday, more than a dozen current and former employees of OpenAI, Google’s DeepMind and Anthropic said AI companies need to create reporting channels for employees to safely voice concerns within their companies and to the public. They said confidentiality agreements block them from publicly discussing issues.

“The people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak,” said former OpenAI employee William Saunders, who signed the letter.

In addition to Saunders, six other former OpenAI employees signed the letter. Four current OpenAI employees and one former and one current employee from Google’s AI research lab DeepMind also signed their names. Six of the signees were anonymous.

Three leading AI experts endorsed the letter: AI scientist Stuart Russell, along with Yoshua Bengio and Geoffrey Hinton, who are known as godfathers of AI because of their early breakthrough research. Hinton left Google last year so he could more freely discuss the risks of the technology.

Hinton and others have been sounding the alarm in recent years over the ways AI could harm humanity. Some AI researchers believe the technology could grow out of control and become as dangerous as pandemics and nuclear war. Others are more tempered in their concerns but believe AI should be more regulated.

OpenAI said in response to the letter Tuesday that it agrees there should be government regulation.

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” an OpenAI spokeswoman said. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”

DeepMind and Anthropic, which is backed by Amazon, didn’t immediately return requests for comment Tuesday.

OpenAI, a startup founded in 2015, released ChatGPT to the public in 2022. The chatbot became one of the most viral AI products, helping vault OpenAI to a multibillion-dollar company. Sam Altman, OpenAI’s leader and one of the architects of the AI revolution, has said he wants the technology to be developed safely.

News Corp, owner of The Wall Street Journal, has a content-licensing partnership with OpenAI.

The letter signees said Tuesday that current and former employees are among the few people who can hold the corporations accountable because there isn’t broad government oversight of AI companies. They said one of their concerns is that humans could lose control of autonomous AI systems, which could in turn cause human extinction.

The signees are also asking companies to let employees anonymously report concerns, to not retaliate against whistleblowers and to not make them sign agreements that could silence them. They want AI companies to be more transparent and to focus more on safeguards.

OpenAI said Tuesday it has an anonymous integrity hotline and that it doesn’t release technology until it has created necessary safeguards.

Former OpenAI employee Daniel Kokotajlo, who signed the letter, said companies are disregarding the risks of AI in their race to develop the technology.

“I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence,” he said. “They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood.”

Write to Alyssa Lukpat at alyssa.lukpat@wsj.com
