‘We will be outcompeted’: Leading AI expert warns world may be running out of time to prepare for AI risks

AI safety expert David Dalrymple warns that rapid AI development may outpace safety measures, risking destabilisation of security and the economy, and emphasises the need for better control of advanced AI systems.

Aman Gupta
Updated 4 Jan 2026, 10:27 PM IST
AI expert has warned about the rapid development of AI systems (REUTERS)

David Dalrymple, a leading AI safety expert, has warned that the world “may not have time” to prepare for the safety risks posed by cutting-edge AI systems. Dalrymple, a programme director and AI safety expert at the UK government’s Advanced Research and Invention Agency (ARIA), told The Guardian that the development of AI was moving “really fast” and that it cannot be assumed that these “systems are reliable”.


“I think we should be concerned about systems that can perform all of the functions that humans perform to get things done in the world, but better,” he told the publication.

“We will be outcompeted in all of the domains that we need to be dominant in, in order to maintain control of our civilisation, society and planet,” Dalrymple added.

He went on to describe the consequences of AI’s progress getting ahead of safety as “destabilisation of security and economy”. The researcher said more technical work is needed to understand and control the behaviour of advanced AI systems.


“I would advise that things are moving really fast and we may not have time to get ahead of it from a safety perspective,” Dalrymple said. “And it’s not science fiction to project that within five years most economically valuable tasks will be performed by machines at a higher level of quality and lower cost than by humans.”

Humans are sleepwalking into this transition, says Dalrymple

ARIA is publicly funded but is reportedly independent of the government. Dalrymple works on developing systems that can safeguard the use of AI in critical infrastructure like energy networks. He told the publication that governments should not assume that advanced AI systems are reliable.


“We can’t assume these systems are reliable. The science to do that is just not likely to materialise in time given the economic pressure. So the next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides,” Dalrymple said.

He also suggested that human civilisation is sleepwalking into this “high-risk” transition that is happening with AI.

“Progress can be framed as destabilising and it could actually be good, which is what a lot of people at the frontier are hoping. I am working to try to make things go better, but it’s very high risk and human civilisation is on the whole sleepwalking into this transition,” he said.

Dalrymple also gave a stark warning that AI systems could automate a full day of research and development work by late 2026. This, he says, would lead to “a further acceleration of capabilities”, because the technology would be able to improve itself on the maths and computer science elements of AI development.
