Sam Altman says AI abuse risks need ‘more nuanced understanding’ as OpenAI seeks to hire Head of Preparedness

OpenAI plans to hire a Head of Preparedness to strengthen oversight of advanced AI risks, focusing on cybersecurity, mental health impacts, and misuse. The role will lead evaluations, threat modelling, and safeguards as AI systems become more powerful. Here's who can apply.

Updated: 28 Dec 2025, 06:56 AM IST
OpenAI has announced plans to hire a Head of Preparedness, signalling a sharper focus on managing the growing risks associated with increasingly capable artificial intelligence systems.

By Govind Choudhary

Govind Choudhary is a Senior Content Producer for Mint with over four years of experience covering technology and automobiles. He holds a Master's diploma in Mass Communication and Journalism from IGNOU and a bachelor's degree in Mass Communication from Symbiosis International University. He has previously worked as a Correspondent for The Indian Express Group. He is also a passionate storyteller and an avid cinema enthusiast.

The role was revealed by OpenAI Chief Executive Sam Altman in a social media post on Saturday, where he said the company is entering a phase in which AI systems are not only more powerful but also pose new challenges across areas such as cybersecurity, mental health, and system misuse.

Rising concerns over advanced capabilities

Altman said recent developments have shown how advanced models can begin to identify security vulnerabilities and influence human behaviour in unexpected ways. While these systems bring significant benefits, he warned that their growing capabilities demand more structured oversight and deeper analysis of potential harm.

He noted that existing approaches to evaluating AI systems are no longer sufficient on their own, as models become more autonomous and capable of complex reasoning. According to him, the next phase of AI development requires a more detailed understanding of how such systems could be misused, and how safeguards can be designed without limiting legitimate applications.

“We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits. These questions are hard and there is little precedent; a lot of ideas that sound good have some real edge cases,” noted Altman in his tweet.

What are the responsibilities of the new Head of Preparedness?

In a separate blog post, OpenAI outlined the responsibilities of the new Head of Preparedness role. The position will lead the company’s Preparedness framework, which focuses on identifying, evaluating and mitigating risks linked to advanced AI systems.

The role involves building and overseeing capability evaluations, developing threat models, and ensuring that safety measures are technically sound and scalable. The person appointed will also coordinate work across research, engineering, policy and governance teams to ensure safety considerations are embedded throughout product development.

OpenAI said the role would involve guiding decisions on how and when new capabilities are released, as well as refining internal frameworks as new risks emerge.

OpenAI focuses on high-risk domains

According to the company, particular attention will be paid to areas such as cybersecurity and biological risks, where misuse of advanced AI could have serious real-world consequences. The Head of Preparedness will be expected to assess how models behave in these domains and to help design safeguards that reduce the likelihood of harm.

The role also involves close collaboration with external partners and internal safety teams to ensure that evaluations remain relevant as technology evolves.

Who can apply for the role of OpenAI's Head of Preparedness?

Altman described the role as demanding, noting that it would involve making difficult decisions under uncertainty in a fast-moving environment. The position requires strong technical expertise, experience in risk evaluation, and the ability to coordinate across multiple teams with differing priorities.

OpenAI said candidates with backgrounds in AI safety, security, threat modelling or related technical fields would be particularly well-suited, especially those comfortable balancing long-term safety concerns with the realities of rapid product development.

Key Takeaways
  • The role of Head of Preparedness is crucial for managing the risks associated with advanced AI systems.
  • OpenAI is recognizing the limitations of current AI evaluation approaches and emphasizing the need for structured oversight.
  • Collaboration across multiple teams is essential to ensure safety considerations are integrated into AI product development.