When it came out in May 2025, Karen Hao’s book Empire of AI: Inside the Reckless Race for Total Domination gave the world a new lens through which to view the rise of artificial intelligence companies. In Hao’s framing, which occupies a central portion of the book alongside her account of the growth of US-based OpenAI, modern tech and AI companies resemble the extractive colonial empires of the 18th and 19th centuries.
Hao, a former application engineer who has been following the rise of AI since its widespread commercial deployment in the form of Large Language Models (LLMs) over the past few years, has reported for publications like MIT Technology Review, The Wall Street Journal and The Atlantic. She believes that we need to start using a new language to bring perspective to the magnitude of the economic and political power held by technology companies like OpenAI.
In this conversation on the sidelines of the Bangalore Literature Festival last weekend, Hao spoke to Lounge about how she arrived at this framing and what the consequences of the unchecked rise of AI companies could mean for the world. Edited excerpts from the interview:
How did you arrive at this very interesting analogy in your work?
It was actually based on scholarship that I started discovering in 2019. There were two pieces of work that were particularly influential: One called Decolonial AI, a 2020 research paper that came out of (Google’s) DeepMind, and the other a book called The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism by Nick Couldry and Ulises A. Mejias, published in 2019. Both works were talking about the parallels between the AI industry—and the tech industry at large—and colonial empires. That was when I first started thinking about and reporting on AI from that perspective. When ChatGPT came out in the middle of me putting together a book proposal, it clicked that this story needed to be told through the story of OpenAI, which I had been following for some years.
While I was initially thinking about the entire AI and tech industry as one empire, it was my editor who pointed out that actually, each company is an empire competing with the others—because a central part of empire building is that competition. And that made its way into the book as well.
In this analogy, what are the empires of AI colonising or extracting?
They’re extracting every resource that they can use to ultimately create profit and also perpetuate their particular ideology, which is trying to build what they sometimes colloquially call an “AI god”. That includes data, that includes land, energy, fresh water, talent, labour, intellectual property… it’s basically the same thing that the old empires did. They grabbed land, they grabbed cultural artefacts, they tried to impose their knowledge on other people after excavating the indigenous knowledge. Modern AI companies are also trying to excavate indigenous knowledge from all these corners of the earth in their pursuit of creating this AI god—for the purpose of creating a one-size-fits-all model that then projects their ideology on to the rest of the world.
What are the dangers of this one-size-fits-all model?
To use just one example, think about how it’s telling, say, history. First of all, it’s not even telling the history correctly. But even when it is recounting the facts, there is a particular interpretation that comes pretty much from the American perspective, because these models are trained on the internet and the internet is extremely American-dominated. When we read history written by humans, we are aware that there has been a global reckoning with the idea that the white man’s perspective is not the only perspective, right? But the global public does not have that same immune system response to AI-generated perspectives. We think AI is more fair and unbiased when, in fact, it’s just the white man in different clothing.
You point out that these tech companies now have more resources and money than some countries. What do you think the impact of this could be on the world order?
If we allow these companies to continue doing what they’re doing—and we can see how the current US administration is giving Silicon Valley free rein—then I think it accelerates and supercharges the democratic backsliding that’s happening around the world. Because, ultimately, it’s creating entities that are more powerful than any other nation-state and that do not have any respect for democracy. These are ultimately techno-authoritarian entities that believe in the idea of a few people at the top deciding what the future should be for billions of others.
There’s no accountability mechanism. Those billions of people that are affected do not have any way of pushing back. The world moved away from that system—we moved away from empires towards democracy. And we are now swinging back the other way.
How would you compare the AI wave to the rise of social media and the age of disinformation that it brought in its wake?
From a data perspective, if you think about Meta as an example, they were an internet company and now they’re an AI company. When Meta entered the generative AI race, they had already accumulated 4 billion user accounts’ worth of data. They did not even consider that to be table stakes for entering the AI race. There was a New York Times article that reported on how Meta executives were having conversations about acquiring publishing firms to scrape yet more data, and about loosening all of the data privacy and data restrictions that had been put in place after Cambridge Analytica. So, firstly, we’re talking about a completely different order of magnitude in terms of data alone.
It’s also a completely different order of magnitude in terms of physical infrastructure. The modern internet runs on data centres, so every year for the past 20 years we’ve had more and more data centres in the world—and yet, in the years before the Gen AI boom, US energy demand flatlined and European energy demand fell, even as they were the two regions with the most data centre expansion. In the generative AI era, we are now seeing the US having a historic rise in energy demand because of data centres. The EU’s climate goals and energy efficiency goals are being threatened by data centre expansion. And this difference is one of the hardest things to convey because these companies operate at an order of magnitude that the average person has never encountered. (Sam) Altman recently said that he wants to build $10 trillion worth of data centres—I think that’s a strategy to make it sound like we have always had these big numbers around. We haven’t. It’s completely unprecedented.
Maybe the Earth could have sustained the modern internet, but now we’re talking about needing 100 earths to sustain generative AI.
You also talk about how LLMs have become the dominant AI model over other, more focussed models, and the consequences of this. How did LLMs become the dominant model? What prevented other models from becoming more important?
LLMs are a very provocative technology because as humans we operate in language, and to have an AI system that's able to appear as though it's speaking and responding and conversing? It is just very powerful, and that's why ChatGPT took the world by storm. These technologies existed before, but OpenAI packaged them into something that appears to speak to people; it taps into people's psyche, it moves people. When you capture the public imagination, that also has a self-perpetuating effect, and then that becomes the only technology that people want to exist.
As a result, these companies also want to lean into something that is going to capture the public imagination, because that is what's going to help them commercialize and make a profit. They're going to lean into AI systems where they have a monopoly, as they have extraordinary amounts of data at their disposal that no one else does. So it makes sense that they would want to create large-scale data-driven systems. Also, they have an extraordinary amount of cash lying around for building these supercomputers, so that's why they're going to lean into these huge computationally intensive systems.
And then, on top of that, they're trying to commercially develop this technology, and they're going to go after things that are sexy to the public—not necessarily the most useful. At one point I was talking with an AI and robotics researcher, a professor at MIT who is quite famous for building a cheetah robot—a robot that runs like a cheetah. He was saying that when his grad students were trying to advance this robot with AI, they taught the system how to do backflips, because that played really well with the media—people love to see robots doing backflips. But from a technical perspective, it's not as hard as getting the robot to walk, because walking means going over unpredictable terrain, which is way harder than a robot doing a backflip in place.
Similarly, these companies lean into spectacle because they want attention, they want PR. They know that's also going to help them with government regulators: if they can wow the regulator and say this is innovation, they can ward off regulation. Basically, there's no technical reason why LLMs overtook all the other models—it's really financial, cultural, psychological, business and ideological reasons.
But there are smaller, specialised AI models that do very useful, specific work—like models that help document and save an endangered language, for instance, which I write about in the book, or climate AI models like Climate Change AI (a global non-profit that works at the intersection of climate change and machine learning).
