'AI poses existential threat', warn OpenAI CEO and top tech titans

Science and technology leaders, including OpenAI CEO Sam Altman and senior executives from Microsoft and Google, have issued a fresh warning that artificial intelligence could pose a risk of human extinction.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," reads the statement, which was posted on the Center for AI Safety's website and shared on Twitter.

In March 2023, billionaire Elon Musk, Apple co-founder Steve Wozniak and author Yuval Noah Harari, along with approximately 1,120 researchers and scientists, signed an open letter titled 'Pause giant AI experiments.' The letter urged laboratories to halt AI experiments more powerful than GPT-4 for a minimum period of six months.

The recent statement has garnered signatures from renowned figures in the field, including Geoffrey Hinton, the computer scientist widely regarded as a pioneer of artificial intelligence; Demis Hassabis, CEO of Google DeepMind; and Ilya Sutskever, co-founder and chief scientist of OpenAI.

The statement noted that AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI.

“Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion," it said.

The latest warning was intentionally succinct — just a single sentence — so as to encompass a broad coalition of scientists who might not agree on the most likely risks or the best solutions to prevent them, Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety, which organized the statement, told the Associated Press.

As stated on the Center for AI Safety's website, the statement is also meant to create common knowledge of the growing number of experts and public figures who take some of advanced AI's most severe risks seriously.

Earlier this month, Musk said the letter was futile. "I knew it’d be futile. I just wanted to call it – it’s one of those things. Well, for the record, I have recommended that we pause. Did I think we would – there would be a pause? Absolutely not," he told CNBC in an interview on May 16.

Unlike the previous warning, the recent statement does not put forward specific solutions or remedies. However, some individuals, including Sam Altman, have suggested the establishment of an international regulatory body similar to the United Nations nuclear agency as a potential approach.

Critics argue that the alarming predictions made by AI creators regarding existential risks have exaggerated the capabilities of AI and diverted attention from the pressing need for immediate regulations to address the tangible issues arising from their use.

(With inputs from agencies)




Updated: 31 May 2023, 06:58 PM IST