
AI experts worried about Musk-led call for research pause

Four AI experts have voiced their concerns after their work was referenced in an open letter co-signed by Elon Musk, which called for an urgent pause in AI research.


An open letter co-signed by Elon Musk, calling for an immediate pause in AI research, has cited the work of four AI experts who have expressed their concerns over the matter.

As of Friday, the open letter, dated March 22, had received over 1,800 signatures. It demands a six-month pause in the development of AI systems more powerful than OpenAI's GPT-4, which can perform tasks such as holding human-like conversations, composing music, and summarizing lengthy documents.

Following the release of GPT-4's predecessor, ChatGPT, last year, rival companies have rushed to launch similar AI products. The open letter argues that systems with "human-competitive intelligence" pose significant risks to humanity, citing research from 12 experts including university academics and current or former employees of Google, OpenAI, and its subsidiary DeepMind.

In response to the open letter, civil society groups in the US and EU have urged lawmakers to limit OpenAI's research. OpenAI has not yet commented on the matter. Critics of the letter's organising body, the Future of Life Institute (FLI), which receives funding primarily from the Musk Foundation, argue that it prioritizes hypothetical doomsday scenarios over more pressing issues with AI, such as programmed biases based on race or gender.

The open letter referenced several pieces of research, including "On the Dangers of Stochastic Parrots," a well-known paper co-authored by Margaret Mitchell, who formerly led ethical AI research at Google. Mitchell, now the Chief Ethical Scientist at Hugging Face, criticized the letter, telling Reuters that it was unclear what would qualify as an AI system "more powerful than GPT-4."

Margaret Mitchell's co-authors, Timnit Gebru and Emily M. Bender, took to Twitter to criticize the open letter, with Bender characterizing some of its assertions as "unhinged." Mitchell herself criticized the letter's approach, saying that by treating questionable ideas as fact, it creates a narrative on AI that benefits FLI supporters. "Ignoring active harms right now is a privilege that some of us don't have," she said. Meanwhile, FLI President Max Tegmark stated that the campaign was not an attempt to impede OpenAI's corporate advantage.

"It's quite hilarious. I've seen people say, 'Elon Musk is trying to slow down the competition,'" he said, adding that Musk had no role in drafting the letter. "This is not about one company."

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, expressed discomfort with her work being referenced in the open letter. Last year, she co-authored a research paper stating that the current widespread use of AI systems already poses significant risks. Dori-Hacohen's research argues that AI systems can impact decision-making regarding existential threats such as climate change and nuclear war.

She told Reuters: "AI does not need to reach human-level intelligence to exacerbate those risks."

"There are non-existential risks that are really, really important, but don't receive the same kind of Hollywood-level attention."

Asked to comment on the criticism, FLI's Tegmark said both short-term and long-term risks of AI should be taken seriously.

"If we cite someone, it just means we claim they're endorsing that sentence. It doesn't mean they're endorsing the letter, or we endorse everything they think," he told Reuters.

Dan Hendrycks, director of the Center for AI Safety in California and one of the experts cited in the open letter, stood by its contents, stating that it was reasonable to consider black swan events: those that may seem improbable but would have catastrophic consequences if they occurred.

The letter also cautioned that generative AI tools could be used to spread disinformation and propaganda on the internet. However, Shiri Dori-Hacohen criticized Elon Musk's involvement in the letter, citing reports of a surge in misinformation on Twitter after he acquired the platform.

Civil society group Common Cause and others have documented this rise in misinformation. Twitter's upcoming changes to its research data fee structure may also impede research on the subject.

"That has directly impacted my lab's work, and that done by others studying mis- and disinformation," Dori-Hacohen said. "We're operating with one hand tied behind our back." Musk and Twitter did not immediately respond to requests for comment.


(With inputs from Reuters)


Published: 01 Apr 2023, 01:13 PM IST