Companies looking to leverage artificial intelligence for competitive advantage are increasingly going custom. It's build over buy at massive scale.
Broadcom's fourth-quarter earnings were an eye-opener for the industry after CEO Hock Tan laid out a few interesting tidbits. Broadcom is benefiting from XPUs, its term for custom AI accelerators. Google's TPUs, which have emerged as a threat to Nvidia, account for a big chunk of Broadcom's revenue.
Tan also revealed that Anthropic is buying Google's Ironwood TPUs, the latest generation. Some choice quotes:
- "Our custom accelerated business more than doubled year-over-year, as we see our customers increase adoption of XPUs, as we call those custom accelerators in training their LLM and monetizing their platforms through inferencing APIs and applications."
- "These XPUs, I may add, are not only being used to train and inference internal workloads by our customers, the same XPUs in some situations have been extended externally to other LLM peers, best exemplified at Google, where the TPUs used in creating Gemini have also been used for AI cloud computing by Apple, Coherent and SSI as an example."
- "Last quarter, Q3 '25, we received a $10 billion order to sell the latest TPU Ironwood racks to Anthropic. And this was our fourth customer that we mentioned. And in this quarter Q4, we received an additional $11 billion order from the same customer for delivery in late 2026."
- "That does not mean our other two customers are using TPUs. In fact, they prefer to control their own destiny by continuing to drive their multiyear journey to create their own custom AI accelerators or XPU racks, as we call them. And I'm pleased today to report that during this quarter, we acquired a fifth XPU customer through a $1 billion order placed for delivery in late 2026."
The big takeaway is that custom is the thing right now. For AI workloads at scale, this build over buy conclusion isn't that surprising. Google's TPUs are gaining favor. AWS launched its Trainium 3 processor and outlined Trainium 4. These hyperscalers are going custom to optimize for costs and monetize as soon as they stand up data centers.
Tan said customers are choosing custom silicon for multiple reasons, but price-performance is the big one. Rivian has also noted that agility is a factor: its custom AI processor lets it get started on software well before the chip lands.
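To make the price-performance argument concrete, here is a back-of-envelope sketch of amortized cost per unit of effective compute. Every number is a hypothetical placeholder (not a Broadcom, Google, or Nvidia figure); the point is only that a cheaper chip with higher sustained utilization on a fixed workload can beat a faster merchant part on cost.

```python
# Illustrative price-performance arithmetic. All inputs are
# hypothetical placeholders, not vendor figures.

def cost_per_exaflop(chip_price_usd, power_watts, peak_tflops,
                     utilization, lifetime_hours, usd_per_kwh):
    """Rough amortized dollars per effective exaFLOP delivered
    over the chip's lifetime (hardware price plus energy)."""
    energy_cost = power_watts / 1000 * lifetime_hours * usd_per_kwh
    total_cost = chip_price_usd + energy_cost
    # Effective FLOPs actually delivered: peak rate x utilization x seconds.
    effective_flops = peak_tflops * 1e12 * utilization * lifetime_hours * 3600
    return total_cost / (effective_flops / 1e18)

# Hypothetical merchant GPU: pricier, faster on paper, but lower
# sustained utilization on a fixed training/inference workload.
gpu = cost_per_exaflop(30_000, 700, 1000, 0.35, 3 * 8760, 0.08)
# Hypothetical custom XPU: slower peak, cheaper, tuned to the
# workload, so higher sustained utilization.
xpu = cost_per_exaflop(15_000, 600, 800, 0.55, 3 * 8760, 0.08)

print(f"merchant GPU: ${gpu:.2f} per exaFLOP")
print(f"custom XPU:   ${xpu:.2f} per exaFLOP")
```

Under these made-up inputs the custom part comes out well ahead per effective exaFLOP, which is the basic shape of the hyperscaler calculus: at sufficient scale, the design cost of a custom chip is recouped through utilization and unit economics.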
The move toward custom components for AI systems is notable, but the market is immature. When markets are young, you tend to build your own stuff. Ask Amazon and Google. The big question is whether this custom-all-the-time approach lasts. Tan provided a bit of history when asked about the future of the XPU.
He said:
"Just don't follow what you hear out there as gospel. It's a trajectory. It's a multiyear journey. And many of the players, and not too many players, doing LLMs want to do their own custom AI accelerators for very good reasons. What you can put in hardware, if you use a general-purpose GPU, you can only do in software and kernels. You can achieve performance-wise so much better in the custom, purpose-designed, hardware-driven XPU."
- Oracle Q2 mixed, says it will be neutral on GPUs
- AWS launches Graviton5 as custom silicon march continues
- AWS launches AI factory service, Trainium 3 with Trainium 4 on deck
- Google Cloud's Ironwood ready for general availability
- Anthropic to use Google Cloud TPUs as it diversifies capacity
Does that mean custom approaches will dominate over time? Tan isn't so sure:
"Will that mean that over time, they all want to go do it themselves? Not necessarily. And in fact, technology in silicon keeps updating, keeps evolving. And if you are an LLM player, where do you put your resources in order to compete in this space, especially when you have to compete at the end of the day against merchant GPUs who are not slowing down in the rate of evolution. I see that as this concept of custom tooling is an overblown hypothesis, which frankly, I don't think will happen."
These comments are notable if you expand them to broader enterprises. My take:
- Build over buy makes a lot of sense right now for enterprises, just not at the hardware layer. If you can use AI to code and transform, it's possible you don't need to pay your SaaS tax. As for hardware, you'll consume custom compute from cloud providers.
- Agentic AI interfaces could relegate a lot of your applications to plumbing. See: The enterprise LLM questions you should be asking | Agentic AI: Is it really just about UX disruption for now?
- OpenAI and Anthropic see this trend and are increasingly tapping into enterprise processes. See: AI agents, automation, process mining starting to converge
- Vendors will tell you repeatedly that building your own systems is a fool's errand, but if the focus is on process, the strategy makes sense. However, Tan noted repeatedly that the custom route is a multiyear journey. The same multiyear approach matters for software too.
- In the end, enterprises want to control their own destinies and be agile. Locking in to any one vendor means you have no leverage. This fact applies to your data layer too and vendors like Databricks and Snowflake. See: AI strategies and projects: The hope, the fear and everything in between
- Enterprises are likely to think about custom apps because they're tired of SaaS costs rising as fast as health care costs. Perhaps the suite always wins, but that phase of the AI app market may not arrive for years.
Related:
- Among CxOs, SaaS platform fatigue setting in
- The big AI, SaaS, transformation themes to watch in 2025’s home stretch
- LLM giants need to build apps, ecosystems to go with the models
- Pondering the future of enterprise software
