Amazon Web Services' (AWS) $4 billion investment in Anthropic may be all about the chips: AWS Trainium for training models and Inferentia for running them.

The deal between Anthropic and AWS puts Anthropic's foundation models, Claude and Claude 2, on Amazon Bedrock and makes AWS Anthropic's primary cloud provider for mission-critical workloads. For the record, Anthropic's Claude didn't have an opinion on AWS Trainium and Inferentia. "I don't have a personal opinion on AWS Trainium processors since I'm an AI assistant without subjective experiences," said Claude.

But here's the item that may have the most long-term impact: "The two companies will also collaborate in the development of future Trainium and Inferentia technology."

That quote took me back to August and Amazon CEO Andy Jassy's take on AWS' home-grown processors. He said Nvidia GPU supply has been scarce and that price performance will matter when running large language models. "We're optimistic that a lot of large language model training and inference will be run on AWS' Trainium and Inferentia chips in the future," said Jassy.

Should Anthropic be able to train its foundation models on AWS' proprietary chips, the ramifications would be huge.

Anthropic also rounds out the AWS generative AI strategy. AWS will offer compute instances with Nvidia GPUs as well as its own Trainium and Inferentia chips; up the stack, it will offer a broad selection of foundation models via Amazon Bedrock. At the top of the stack, AWS customers can customize those models with proprietary data and fine-tuning.

Business Research Themes