Anthropic is expanding its partnership with Google to gain access to up to one million of the tech giant's AI chips in a deal worth tens of billions of dollars, a move intended to accelerate development of its AI systems amid fierce market competition.
Under the new agreement, announced on Thursday, Anthropic will gain more than one gigawatt of computing capacity, expected to come online in 2026. The capacity will be used to train future versions of its Claude AI models on Google's proprietary tensor processing units (TPUs), chips Google originally developed for its own internal workloads.
Anthropic said it chose Google's TPUs for their price-to-performance ratio and efficiency, building on its existing experience training and deploying Claude models on the processors.
The collaboration underscores the relentless demand for AI chips as companies race to build technology capable of matching or surpassing human intelligence. Google's TPUs, rented out through Google Cloud, offer an alternative to Nvidia's supply-constrained chips, and the deal also includes additional Google Cloud services for Anthropic.
In comparison, OpenAI—maker of ChatGPT—recently signed numerous contracts that could cost over a trillion dollars to secure around 26 gigawatts of computing capacity, enough to power approximately 20 million households in the U.S. Industry insiders estimate that one gigawatt of computing capacity costs about $50 billion.
OpenAI has been actively using Nvidia’s GPUs and AMD’s AI chips to meet its increasing demand.
Earlier this month, Reuters exclusively reported that Anthropic expects to more than double, and potentially nearly triple, its annual revenue run rate next year, driven by rapid adoption of its enterprise-focused products. The startup emphasizes AI safety and builds models tailored for corporate use, and its technology has fueled the growth of startups such as the AI coding tool Cursor.