
Meta Platforms has struck a sweeping, multiyear agreement with Nvidia that will see the social media company deploy “millions” of the chipmaker’s processors across its artificial intelligence data centres, deepening a partnership that has helped define the industry’s modern AI boom.
The deal, announced on Tuesday, broadens Meta’s use of Nvidia hardware beyond graphics processing units, with the company set to become the first major operator to roll out Nvidia’s Grace central processing units as standalone chips at scale.
The partnership will also bring Meta early access to Nvidia’s next-generation Vera Rubin systems, as both companies race to build ever-larger computing clusters to power advanced AI models.
The expanded partnership arrives as Meta accelerates an infrastructure push that has startled investors with its scale. In January, the company said it could spend as much as $135 billion on AI in 2026. It has also pledged to invest $600 billion in the United States by 2028 on data centres and the physical infrastructure needed to run them.
“We’re excited to expand our partnership with Nvidia to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world,” Meta chief executive Mark Zuckerberg said in a statement.
Zuckerberg has repeatedly framed Meta’s AI strategy as a bid to bring advanced capabilities directly to consumers. He reiterated that ambition in July, describing a long-term push “to deliver personal superintelligence to everyone in the world”.
Financial terms were not disclosed, though analysts said the commitment is likely to be enormous given Meta’s projected capital expenditure.
“The deal is certainly in the tens of billions of dollars,” chip analyst Ben Bajarin of Creative Strategies told CNBC. “We do expect a good portion of Meta’s capex to go toward this Nvidia build-out.”
Nvidia said the agreement will include products from its current Blackwell generation and the forthcoming Vera Rubin design, securing Meta a sizeable supply at a time when demand for top-tier AI accelerators continues to exceed production.
Nvidia’s Blackwell GPUs have remained on back order for months, and Rubin recently entered production. With the new pact, Meta is positioning itself to scale rapidly while rivals scramble for capacity.
Meta already accounts for roughly 9% of Nvidia’s revenue, underscoring how much the chipmaker’s growth has become tied to a small group of mega-buyers building industrial-scale AI systems.
The most notable shift is Meta’s plan to deploy Nvidia’s Grace CPUs as standalone chips, rather than only as part of tightly integrated CPU-GPU systems.
Nvidia said this will be the first large-scale deployment of Grace CPUs on their own. The move also signals a more direct challenge to Intel and Advanced Micro Devices, which have long dominated general-purpose server computing.
“They’re really designed to run those inference workloads, run those agentic workloads, as a companion to a Grace Blackwell/Vera Rubin rack,” Bajarin said. “Meta doing this at scale is affirmation of the soup-to-nuts strategy that Nvidia’s putting across both sets of infrastructure: CPU and GPU.”
Meta plans to deploy the next-generation Vera CPUs in 2027.
Meta has outlined plans for 30 data centres, 26 of which will be based in the United States. Two of its largest AI facilities are already under construction: the 1-gigawatt Prometheus site in New Albany, Ohio, and the 5-gigawatt Hyperion site in Richland Parish, Louisiana.
The sheer energy footprint of these projects has become part of the story. One gigawatt is roughly the amount of electricity needed to power 750,000 homes, and Meta’s largest planned site is several times that.
Nvidia’s hardware will sit at the centre of these facilities, linking vast banks of GPUs and CPUs into training and inference clusters capable of running frontier-scale models.
The partnership extends beyond processors. Meta will also use Nvidia’s Spectrum-X Ethernet switches, which connect GPUs across large AI data centres. The companies said engineering teams will work together “in deep codesign to optimise and accelerate state-of-the-art AI models” for Meta’s platforms.
Meta will also use Nvidia’s security capabilities in AI features on WhatsApp, according to the statement.
Ian Buck, Nvidia’s vice president of accelerated computing, said the two companies are not disclosing a timeline or a dollar figure. But he emphasised that Nvidia’s breadth of products — spanning chips, systems, networking and software — remains difficult for competitors to match.
“There are many different kinds of workloads for CPUs,” Buck said. “What we’ve found is that Grace is an excellent back-end data centre CPU,” meaning it handles the behind-the-scenes computing tasks.
Despite the expanded commitment, Meta has continued to test alternatives as it tries to reduce dependence on Nvidia, whose chips have become a bottleneck across the industry.
In November, Nvidia shares fell after reports that Meta was considering Google’s tensor processing units for its data centres in 2027. Meta also designs its own silicon and has used AMD chips — a relationship that drew attention after AMD secured a deal with OpenAI in October as AI companies seek second-source suppliers.
Still, Tuesday’s announcement is a clear signal that Meta is betting Nvidia will remain the dominant platform for cutting-edge AI infrastructure for years to come.
Meta’s AI strategy has been closely scrutinised by investors, particularly after the company’s ambitious spending projections triggered its worst trading day in three years in October. The stock later surged in January after Meta issued stronger-than-expected sales guidance.
The company is also working on a new frontier AI model dubbed Avocado, intended as a successor to its Llama technology. The latest release last spring failed to generate broad excitement among developers, CNBC previously reported.
For Nvidia, the Meta agreement is another demonstration of how its business has evolved from selling discrete chips to selling a full-stack AI computing platform — one that now extends deeper into the data centre than ever before.
For Meta, it is a bet that the fastest path to its consumer AI ambitions runs through the most expensive computing infrastructure Silicon Valley has ever built.
Sayantani Biswas is an assistant editor at Livemint with seven years of experience covering geopolitics, foreign policy and international relations.