OpenAI’s cash burn will be one of the big bubble questions of 2026
There is a dark side to the model-maker’s stunning growth
Stockmarket investors may have ended the year worried about the bubbly valuations of generative-AI firms. But private markets still appeared to be living in a parallel universe. In 2025 the venture-capital (VC) industry poured $150bn into big AI startups such as OpenAI and Anthropic, far more than beneficiaries of the previous VC boom received in 2021. Such is its confidence that OpenAI, maker of ChatGPT, believes it can single-handedly tap private investors for as much as $100bn in 2026. That would be almost four times the amount raised by the biggest stockmarket listing ever.
If anything, though, even private investors are likely to start asking tough questions. OpenAI, Anthropic and other San Francisco-based AI startups may have demonstrated some of the fastest revenue growth of any companies in history. But they have also burned through cash at “Towering Inferno” rates, as they spend on the chips and cloud computing needed to train and run their models. Both OpenAI and Anthropic will come under increasing pressure to spell out their paths to profit, especially as they consider going public in 2026 or shortly thereafter. For the AI industry in general, it will be a bracing, revealing experience.
Several factors will draw investors’ attention to the lack of profitability. The first is the gigantic balance-sheets of the big-tech juggernauts that the labs are up against. Cash-gushers such as Google have bountiful resources to put behind their large language models, including their own chips and cloud infrastructure, making their models more efficient to train and run than those of OpenAI and Anthropic, and less reliant on potentially skittish investors. This mattered less when Google’s Gemini struggled to match the capabilities of the standalone model-makers. But now it has caught up.
That feeds into a second problem. More than three years after ChatGPT was launched, the much-vaunted boost to business productivity from AI is yet to come. In the few promising areas, such as coding and customer service, the field is increasingly crowded, with OpenAI, Anthropic, Microsoft and tailor-made applications that run on their own and third-party models all competing. No AI lab has a moat big enough to retain an advantage for long, which makes revenues vulnerable.
A third problem is that costs are rising as fast as—or faster than—revenues. Unlike conventional software companies, which generate more profit the more they scale, AI firms face higher costs the bigger they get. A large portion of these costs comes from the computational power needed to train frontier models. Running models for inference is not cheap either, particularly when many users are not paying subscribers. That leaves the firms with some tricky decisions. It is possible to reduce inference expenses by offering short answers, or offset the cost by selling advertising. Both, though, risk degrading the user experience. If the model-makers raise prices instead, they could deter adoption.
Lots of companies have gone from cash-guzzling to cash-printing before. From Netflix to Uber, plenty of startups spent years in the red before generating vast returns. Generative AI could pay out even more, especially if superintelligence arrives. But investors will not wait for ever, and the industry’s star firms need to start fleshing out their business models.
OpenAI in particular should beware hubris. One VC says discussion of cash burn is taboo at the firm, even though leaked figures suggest it will incinerate more than $115bn by 2030. Sam Altman, its boss, said recently that one reason he wants to take OpenAI public is to watch its doubters sell it short. “I would love to see them get burned on that,” he said. Plenty of investors seem prepared to take the other side of that bet: both the public equity and debt markets have punished companies with significant exposure to his firm.

