Ex-OpenAI cofounder starts new company to prioritize safe creation of Superintelligent AI

Ilya Sutskever, ex-OpenAI cofounder, launches Safe Superintelligence Inc. with a focus on secure AI development. The venture aims to create superintelligent AI systems while avoiding commercial pressures. Sutskever emphasizes safety and security efforts, distancing from distractions faced at OpenAI.

Ilya Sutskever, ex-co-founder, OpenAI. (File Photo: Reuters)

Ilya Sutskever, an ex-cofounder of OpenAI and a renowned AI researcher, has announced the launch of a new venture dedicated to the secure development of artificial intelligence.

Sutskever, who departed OpenAI in May, revealed on Wednesday via social media that he has established Safe Superintelligence Inc. alongside co-founders Daniel Gross and Daniel Levy. The new enterprise aims to focus exclusively on the safe creation of "superintelligence"—AI systems surpassing human intelligence.

In a public statement, Sutskever and his partners emphasized their commitment to avoiding the typical distractions of management and product cycles. They highlighted that their business model is designed to shield their safety and security efforts from short-term commercial pressures. Safe Superintelligence is rooted in both Palo Alto, California, and Tel Aviv, leveraging the founders' strong connections in these regions to attract top-tier technical talent.

This announcement follows Sutskever's involvement in a failed attempt to oust OpenAI CEO Sam Altman last year, a move that sparked significant internal conflict over the balance between business pursuits and AI safety priorities at OpenAI. Sutskever has since expressed regret over the boardroom upheaval.

At OpenAI, Sutskever co-led a team working on the safe advancement of artificial general intelligence (AGI), AI systems with capabilities beyond human intelligence. Upon his departure, he hinted at a "very personally meaningful" project, the details of which remained undisclosed until now.

Jan Leike, Sutskever's team co-leader at OpenAI, resigned shortly after Sutskever's exit, criticizing OpenAI for prioritizing product development over safety. In response, OpenAI formed a safety and security committee, although it predominantly comprises internal members.

Safe Superintelligence Inc. represents Sutskever's intent to address these safety concerns by focusing entirely on the secure development of superintelligent AI, free from the pressures that he believes compromised his previous work at OpenAI.

(With inputs from PTI)

Published: 20 Jun 2024, 08:11 PM IST