Anthropic at $30 Billion Revenue: The TPU Deal Reshaping AI Infrastructure
On April 6–7, 2026, Anthropic announced two numbers that together tell the most dramatic growth story in enterprise software history: its annual revenue run rate has surpassed $30 billion — up from $9 billion at the end of 2025, a more than 3x increase in roughly three months — and it has signed a deal with Google and Broadcom for 3.5 gigawatts of next-generation TPU computing capacity starting in 2027. The announcement, confirmed in a Broadcom regulatory filing and Anthropic’s own blog post, marks a decisive inflection point for the company that most people outside of enterprise AI circles were still treating as OpenAI’s smaller, safety-focused competitor.
The Revenue Story: $9B to $30B in One Quarter
Anthropic’s revenue trajectory in early 2026 is not a normal enterprise growth curve. In February 2026, the company reported over 500 business customers each spending more than $1 million annually. By April 7, that number had exceeded 1,000 — more than doubling in fewer than two months. CFO Krishna Rao described demand as “exponential growth” requiring infrastructure commitments at a scale the company has never previously made. The Claude app has become the top free app in the US App Store — a position reached in part due to the “#QuitGPT” movement that followed OpenAI’s Pentagon deal in early 2026, but sustained by genuine product quality. Claude is currently the only frontier AI model available on all three major cloud platforms simultaneously: AWS Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry.
The Infrastructure Deal: 3.5 Gigawatts of TPUs
The Broadcom-Google-Anthropic deal announced April 6 is the largest single compute commitment in Anthropic’s history. Beginning in 2027, Anthropic will access approximately 3.5 gigawatts of next-generation TPU-based AI compute capacity through Broadcom, part of a broader multi-gigawatt commitment. For context: Broadcom CEO Hock Tan has stated publicly that Broadcom expects over $100 billion in AI chip revenue in 2027 alone, and Anthropic’s deal is one of the primary drivers of that estimate, with Mizuho analysts projecting $21 billion in Broadcom AI revenue from Anthropic in 2026 and $42 billion in 2027.
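To put the Mizuho projection in perspective, here is a back-of-envelope calculation using only the figures cited above (purely illustrative arithmetic, not an independent estimate):

```python
# Back-of-envelope check of the figures cited above.
# Both inputs come from the article's cited estimates (Hock Tan / Mizuho).
broadcom_ai_revenue_2027 = 100e9    # Broadcom's stated 2027 AI chip revenue expectation
anthropic_contribution_2027 = 42e9  # Mizuho projection for Anthropic-driven revenue in 2027

share = anthropic_contribution_2027 / broadcom_ai_revenue_2027
print(f"Anthropic would account for {share:.0%} of Broadcom's projected 2027 AI revenue")
```

On these numbers, a single customer would represent roughly 42% of Broadcom's projected AI chip revenue, which is why analysts treat the Anthropic deal as a primary driver of the estimate.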
Anthropic is not dependent on Nvidia alone. It runs Claude across AWS Trainium chips, Google TPUs, and Nvidia GPUs simultaneously — a diversified hardware strategy that provides resilience and allows workload matching to the most cost-effective chip for each task. This multi-vendor compute approach is itself a competitive advantage: single-provider dependency on Nvidia GPUs creates supply chain risk that Anthropic has deliberately avoided. Amazon remains its primary cloud provider through Project Rainier, while the Google TPU deal adds substantial capacity for the 2027 model training cycle.
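The workload-matching idea described above can be sketched as a simple cost-based router. Everything in this snippet is hypothetical for illustration: the chip pool names, the relative cost figures, and the `route_workload` helper are invented, not Anthropic's actual scheduling logic:

```python
# Illustrative sketch of multi-vendor workload matching.
# All names and cost figures below are hypothetical; this is not
# Anthropic's real scheduler, just the shape of the idea.

# Relative cost of running each workload type on each chip pool
# (lower = cheaper for that workload; numbers are made up).
COST_TABLE = {
    "training":  {"trainium": 1.0, "tpu": 0.9,  "gpu": 1.2},
    "inference": {"trainium": 0.8, "tpu": 1.0,  "gpu": 1.1},
    "finetune":  {"trainium": 1.1, "tpu": 0.95, "gpu": 1.0},
}

def route_workload(workload_type: str) -> str:
    """Pick the cheapest chip pool for a given workload type."""
    costs = COST_TABLE[workload_type]
    return min(costs, key=costs.get)

print(route_workload("training"))   # cheapest pool under these made-up numbers
print(route_workload("inference"))
```

The resilience argument in the text falls out of the same structure: if one vendor's pool becomes unavailable or expensive, the router simply selects the next-cheapest entry rather than the workload failing outright.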
What $30B Revenue Means for the AI Market
Anthropic approaching OpenAI’s revenue scale ($25B annualized as of early 2026, per public reports) changes the competitive dynamics significantly. Two facts are now true simultaneously: OpenAI is no longer the only company with the revenue base to fund frontier model development at scale, and the enterprise AI market is large enough to support multiple $30B+ companies simultaneously. That second point matters more than the first. It suggests the enterprise AI market is larger than even the most optimistic projections from two years ago, large enough that Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini can all grow substantially without taking meaningful share from one another.
The practical implication for enterprise buyers: the “Anthropic vs. OpenAI” framing for vendor selection is increasingly obsolete. Both companies are well-capitalized, both have frontier models, both have multi-year infrastructure commitments. The choice is now driven by use case fit, not survival risk. For coding tasks, Claude leads benchmarks. For professional document work, GPT-5.4 leads. For multimodal and cost efficiency, Gemini leads. All three are now viable long-term infrastructure choices.
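The use-case-fit framing above reduces to a lookup. The mapping below simply mirrors the claims in this section (Claude for coding, GPT-5.4 for document work, Gemini for multimodal and cost efficiency); the helper name and categories are invented for illustration:

```python
# Minimal sketch of use-case-driven vendor selection, mirroring the
# claims in the text above. Function name and category keys are
# illustrative, not a real selection framework.
MODEL_BY_USE_CASE = {
    "coding": "Claude",       # Claude leads coding benchmarks (per the text)
    "documents": "GPT-5.4",   # professional document work
    "multimodal": "Gemini",   # multimodal tasks
    "cost_sensitive": "Gemini",
}

def pick_model(use_case: str, default: str = "Claude") -> str:
    """Return the best-fit frontier model for a use case, per this article."""
    return MODEL_BY_USE_CASE.get(use_case, default)

print(pick_model("coding"))
print(pick_model("multimodal"))
```

The point of the sketch is the shape of the decision, not the table's contents: when all vendors clear the survival-risk bar, selection becomes a per-use-case mapping that a team can revisit as benchmarks shift, rather than a one-time bet on a single vendor.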