Anthropic said it will invest $50 billion in building its own AI data centers in a partnership with Fluidstack. The first data centers will be built in New York and Texas with more sites on deck.

The move comes after Anthropic announced it would use Google Cloud TPUs as well as AWS Trainium2 superclusters. Anthropic also uses Nvidia processors. That multi-cloud, multi-chip approach differentiated Anthropic from OpenAI's spending spree on operating its own data centers. Now Anthropic has decided it needs to roll its own AI infrastructure too.

According to Anthropic, the Fluidstack partnership will focus on custom-built infrastructure designed for the large language model provider's workloads and R&D.

Like most AI infrastructure announcements, Anthropic's was sure to mention that the project will create 800 permanent jobs and 2,400 construction jobs and support US AI leadership. The data centers will power up throughout 2026.

Dario Amodei, CEO of Anthropic, said the company is getting closer to AI that can accelerate scientific discovery and solve complex problems. "These sites will help us build more capable AI systems that can drive those breakthroughs," he said.

For Fluidstack, the deal with Anthropic is a big win. Fluidstack counts Meta, Nvidia, Samsung, Dell, Honeywell and others as core customers.

Holger Mueller, an analyst at Constellation Research, said: "Clearly, Anthropic is charting a different course compared to OpenAI. The question is: what is the price for the flexibility? That is, how much does Anthropic need the portability? Hopefully it's not only a cost arbitrage game."