Google DeepMind Unveils AI System to Discover Faster Algorithms

DeepMind Chief Business Officer Colin Murdoch says AI can help execute the same amount of computing using fewer resources.


  • The new system, called AlphaDev, focuses on finding more efficient algorithms for software development

Researchers at Google DeepMind, the Alphabet-owned artificial-intelligence research lab, announced on Wednesday a new AI system that could make computing more efficient and sustainable.

The latest breakthrough, published in the scientific journal Nature, focuses on the discovery of faster computer algorithms, which are fundamental for software development and are used by companies trillions of times a day, DeepMind said.

The London-based AI lab, known for pioneering AI models such as AlphaFold and AlphaGo, which mastered the complex game of Go, calls its new AI system AlphaDev. Based on AlphaZero, an iteration of AlphaGo, the system uses reinforcement learning, a form of machine learning in which computers learn and develop strategies on their own, to discover faster algorithms for computer-science functions such as sorting and hashing.

Sorting algorithms are used to order data for things such as ranking web-search results and the back-end systems of financial institutions. Hashing algorithms convert data into a short, fixed-length code so users can quickly find what they are looking for in things such as databases. Because these algorithms are so widely used by companies, making them faster could significantly reduce the resources needed for computing.
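DeepMind's sorting gains came from shortening small, fixed-length sorting routines of the kind that larger library sorts use as building blocks. The article doesn't reproduce AlphaDev's algorithms; purely as an illustrative sketch (the function names here are hypothetical, not AlphaDev's), a three-element sorting network built from compare-exchange steps looks like this in Python:

```python
def compare_exchange(a, i, j):
    # The basic building block of fixed-length sorting networks:
    # swap a[i] and a[j] only if they are out of order.
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def sort3(a):
    # Three compare-exchanges suffice to sort any three-element
    # list. Routines like this run so often that shaving even one
    # machine instruction from them pays off at scale.
    compare_exchange(a, 0, 1)
    compare_exchange(a, 1, 2)
    compare_exchange(a, 0, 1)
    return a

print(sort3([3, 1, 2]))  # [1, 2, 3]
```

AlphaDev worked at a lower level than this sketch, searching over sequences of assembly instructions rather than high-level code, but the task is the same: produce a provably correct ordering with as few operations as possible.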

“It means that we can carry out the same amount of computing using much fewer resources,” said Colin Murdoch, DeepMind’s chief business officer.

When the company applied AlphaDev to a C++ sorting library, it said, the system’s algorithms were up to 70% faster for smaller sorting tasks and 1.7% faster for large-scale sorting tasks. For hashing functions, AlphaDev discovered an algorithm that was 30% faster in the 9- to 16-byte range. Both algorithms are available to developers in open-source libraries.

As part of DeepMind’s ongoing effort to make computer systems more efficient, it has worked with units across Google and Alphabet to apply AI systems similar to AlphaDev to optimize network resources, keep data centers cool and share computing resources across servers, according to Murdoch.

In trials, AI reduced the amount of underused hardware in Google’s data centers—meaning servers that weren’t being used to their full capacity—by up to 19%, the company said.

For businesses, the idea is that idle computing capacity is wasted energy and money.

“If you can become more effective at allocating resources, it’s going to increase the velocity of your business, because you’re able to use digital resources that were otherwise tied up,” Murdoch said.

One of DeepMind’s first applied initiatives, about four years ago, was a project to optimize YouTube’s video compression pipeline—essentially allowing users to watch videos with less data without sacrificing video quality, said Daniel Mankowitz, an AlphaDev lead researcher and DeepMind staff research scientist.

With the success of that project, the team of six researchers turned its attention to optimizing code, Mankowitz said. Optimization technology itself isn’t new; the math behind finding the best allocation of resources under a set of constraints has been around for decades. What yielded a faster sorting algorithm than engineers had previously developed, he said, was DeepMind’s method of “imagining future possible efficient algorithm outcomes.”

Google in April merged its Brain and DeepMind research groups into one unit, led by DeepMind CEO Demis Hassabis. The search giant has aimed to accelerate its AI and generative AI efforts amid intense competition with Microsoft-backed OpenAI, maker of ChatGPT. In addition to work in the area of life sciences, Murdoch said DeepMind is focused on generative AI, both in developing large language models and helping businesses to deploy them.

“There’s still a lot more research to go to make [large language models] run as efficiently as they possibly can, both within the cloud, and perhaps ultimately on mobile phones and devices,” Murdoch said. “That’s going to be a really big research question in the coming months.”
