Data center infrastructure companies say there will be a spike in demand for AI-optimized hardware, but that demand will take time to develop as many customers are still working out their generative AI plans and determining which workloads will run on-premises, at the edge, or in the cloud.

Earnings conference calls from NetApp, Pure Storage, HPE and Dell Technologies provided insights into AI workloads and how they are shaping data centers. Here are some key themes across those companies.

Enterprises are tightening budgets, but they continue to spend on hardware that enables transformation and drives productivity. NetApp CEO George Kurian said:

"Even as customers are tightening their budgets in response to the macro, they are not stopping investments in applications and technologies that drive business productivity and growth. Digital transformation projects involving business analytics, AI, data security, and application modernization, both on premises and in the cloud remain top priorities for IT organizations."

HPE CEO Antonio Neri added:

"In our HPC & AI’s business, we saw a significant sequential increase in orders this quarter, with a noteworthy uptick across customer segments from Fortune 500 companies including a large cloud provider to digital native start-ups looking for optimized AI supercomputing solutions."

Data centers will move to flash storage since there will be no such thing as cold data when training AI models. Charlie Giancarlo, CEO of Pure Storage, said:

"The days of hard disks are coming to an end. We predict that there will be no new hard disks sold in 5 years...We expect our leading role in AI to continue to expand, but we are equally excited that the requirements for Big Data will drive even more use of high-performance flash for traditional bulk data."

Giancarlo added that Pure Storage's FlashBlade//E enables it to compete for secondary-tier storage as well as low-tier storage, which has been dominated by hard drives.

Hardware is likely to be upgraded for generative AI, but customers are just figuring out their strategies. Yes, Nvidia is busy selling chips to hyperscalers, but enterprises are still evaluating their plans. Giancarlo said:

"Every company is looking at large language models, ChatGPT, et cetera, trying to determine exactly what it means for them. We've seen some interest in that area, but it still remains a minority. The majority being traditional, much -- if I can use that word, with AI traditional AI projects. But we're most excited by is both the opportunity for a high-performance FlashBlade systems."

Kurian said:

"Today's environments are not the advanced LLM model, the majority of the business we see today are really around re-platforming from Hadoop to more modern environments as well as the use of advanced neural networks.

We see the impending onslaught of ChatGPT and tools like that, where customers will take an OpenAI or open-source generative AI model and then build it on top of their own data sets, which require the storage that we have."

Those comments were echoed by Dell Technologies co-Chief Operating Officer Jeff Clarke. He said:

"What customers are trying to do is to figure out how to use their data with their business context to get better business outcomes and greater insight to their customers and to their business.

And while there's a lot of discussion around these large, generalized AI models, we think the more specific opportunity is around domain-specific and process-specific generative AI, where customers can use their own data. The datasets tend to be smaller. Those datasets then can be trained more quickly, and they can use their business context to help them inform and run their businesses better."

AI workloads and systems are greenfield opportunities for new architectures. Giancarlo said:

"AI systems are typically greenfield, so we're not generally replacing anything. What we are competing with are solely all-flash systems. Hard disk systems just can't provide the kind of performance necessary for a sophisticated AI environment.

Of course, you still have hard disk systems in there for some analytics environments, where that level of performance is generally not required. But for anything that's machine learning or real-time AI-oriented, it's only all-flash systems."

AI will mean more inference at the edge. HPE has seen strong growth for its intelligent edge unit, which was bulked up by the acquisition of Aruba in 2015. Neri said:

"I consider AI a massive inflection point, no different than Web 1.0 or mobile in different decades. But obviously, the potential to disrupt every industry, to advance many of the challenges we all face every day through data insights, it's just astonishing. And HPE has a unique opportunity in that market because ultimately, you need a what I call a hybrid AI strategy.

You need strong inference at the edge. And that is really accomplished by being able to connect and process data wherever it is created, with very efficient, low-carbon-footprint, sustainable solutions with lower power consumption. And then on the other side, you need a training environment where you take some part of the data and train different models for different types of use cases."

Dell Technologies co-Chief Operating Officer Chuck Whitten said on the company's first-quarter earnings conference call:

"Customers, enterprises, are broadly pursuing and experimenting with AI efforts right now. They're doing it on premises and at the edge. Demand for our XE9680, that's our 16G and first-to-market purpose-built AI server with eight NVIDIA H100 or A100 GPUs has been very good, but we're also seeing demand across our portfolio. It's not simply the specialized eight-way GPU servers that can run AI, not everything needs billions of parameters."

Whitten added that "excitement for AI applications is ahead of GPU supply" and that AI-optimized servers are a small part of the overall mix. In other words, the interest in AI-optimized infrastructure is there, but it will take time to flow through to the bottom line.
