
AWS aims to make Amazon Bedrock your agentic AI point guard

AWS is expanding Amazon Bedrock with multi-agent collaboration to address more complex tasks.

With multi-agent collaboration, Bedrock will orchestrate models as a team, with planning, structure, specialization and parallel work.

Collaboration and orchestration of agentic AI will be a big theme in 2025 and vendors are trying to get ahead of agent coordination before enterprises implement at scale. First, the industry may have to agree on standards so AI agents can communicate.

Speaking during his AWS re:Invent 2024 keynote, CEO Matt Garman said:

"If you think about hundreds and hundreds of agents all having to interact, come back, share data, go back, that suddenly the complexity of managing the system has balloons to be completely unmanageable.

Now Bedrock agents can support complex workflows. You create these series of individual agents that are really designed for your special and individualized tasks. Then you create this supervisor agent, and you can think about it as acting as the brain for your complex workflow. It ensures all this collaboration across all your specialized agents."
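Conceptually, the supervisor setup Garman describes looks something like the sketch below. This is not the Bedrock multi-agent API; the agent names, task types and stubbed model calls are hypothetical, and a real implementation would wrap foundation-model invocations where the lambdas are.

```python
# Minimal sketch of the supervisor/specialist pattern: a supervisor agent plans
# the workflow, routes each step to a specialized agent and collects the results.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class SpecialistAgent:
    name: str
    handles: str                       # the task type this agent specializes in
    run: Callable[[str], str]          # stand-in for a foundation-model call


class SupervisorAgent:
    """Acts as the 'brain' of the workflow: routes steps to specialists."""

    def __init__(self, specialists: List[SpecialistAgent]):
        self.by_task: Dict[str, SpecialistAgent] = {a.handles: a for a in specialists}

    def execute(self, plan: List[Tuple[str, str]]) -> List[str]:
        results = []
        for task_type, payload in plan:
            agent = self.by_task[task_type]        # pick the specialized agent for this step
            results.append(f"{agent.name}: {agent.run(payload)}")
        return results


# Usage sketch with stubbed model calls.
research = SpecialistAgent("research", "retrieve", lambda q: f"findings for '{q}'")
writer = SpecialistAgent("writer", "draft", lambda q: f"draft based on {q}")
supervisor = SupervisorAgent([research, writer])
print(supervisor.execute([("retrieve", "Q3 sales data"), ("draft", "executive summary")]))
```

The point of the pattern is that the specialists stay narrow while the supervisor carries the workflow state, which is the complexity Bedrock's multi-agent collaboration aims to manage for customers.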

Amazon CEO Andy Jassy said Alexa's overhaul will be powered by some of Bedrock's orchestration tools. He said:

"We are in the process right now rearchitecting the brains of Alexa with multiple foundation models. And it's going to not only help Alexa answer questions even better, but it's going to do what very few generative AI applications do today, which is to understand and anticipate your needs, actually take action for you. So you can expect to see this in the coming months."

Jassy said the Bedrock capabilities are all about model choices. He said Amazon uses a lot of Anthropic's Claude family of models but also leverages Meta's Llama. "Choice matters with model selection," said Jassy. "It's one of the reasons why we work on our own frontier models." 

AWS launched a series of models called Nova. Think of Nova as the LLM equivalent of what Amazon is doing with Trainium. 

Amazon Bedrock will also get the following:

  • Intelligent prompt routing, which will automatically route requests among foundation models in the same family. The aim is to provide high-quality responses with lower cost and latency. The routing will be based on the predicted performance of each request, and customers can also provide ground truth data to improve predictions. (A rough sketch of the routing idea follows this list.)
  • Model distillation, so customers can create compressed, smaller models with high accuracy and lower latency and costs. Customers can distill models by providing a chosen base model and training data.
  • Automated reasoning checks, which will validate or invalidate genAI responses using automated reasoning and proofs. The feature will explain why a generative AI response is accurate or inaccurate using provably sound mathematical techniques. The proofs are based on domain models built from regulations, tax law and other documents.
  • New models from Luma AI, a specialist in creating video clips from text and images, and Poolside, which specializes in models for software engineering. Amazon Bedrock has also expanded models from its current providers such as AI21 Labs, Anthropic, Cohere, Meta, Mistral, Stability.ai and Amazon.
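The routing concept boils down to picking the cheapest model in a family that is predicted to handle a request well. The sketch below is an illustration only; the model names, prices and quality predictor are hypothetical, and Bedrock's router uses its own predictions and customer-supplied ground truth rather than anything this simple.

```python
# Toy version of intelligent prompt routing: try the cheapest model in a family
# first and only escalate when predicted quality falls below a floor.
MODEL_FAMILY = [
    {"id": "family-lite",  "cost_per_1k_tokens": 0.0002},   # cheapest, fastest
    {"id": "family-pro",   "cost_per_1k_tokens": 0.0030},
    {"id": "family-ultra", "cost_per_1k_tokens": 0.0150},   # most capable
]

def predict_quality(prompt: str, model: dict) -> float:
    """Stand-in for a learned predictor of response quality (0..1)."""
    difficulty = min(len(prompt) / 2000, 1.0)                # crude proxy for request difficulty
    capability = {"family-lite": 0.6, "family-pro": 0.8, "family-ultra": 0.95}[model["id"]]
    return capability * (1.0 - 0.5 * difficulty)

def route(prompt: str, quality_floor: float = 0.55) -> str:
    """Return the lowest-cost model whose predicted quality clears the floor."""
    for model in MODEL_FAMILY:                               # ordered cheapest first
        if predict_quality(prompt, model) >= quality_floor:
            return model["id"]
    return MODEL_FAMILY[-1]["id"]                            # fall back to the strongest model

print(route("Summarize this paragraph."))                          # short request: cheapest model
print(route("Draft a detailed legal analysis of ... " * 100))      # long request: strongest model
```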

AWS revamps S3, databases with eye on AI, analytics workloads

AWS outlined a series of improvements to its S3 service to manage metadata automatically, leverage Apache Iceberg tables and optimize for analytics workloads with Amazon S3 Tables. Also on the data front, AWS moved to reduce latency for its databases.

At AWS re:Invent 2024, CEO Matt Garman said services like S3 and Amazon Aurora DSQL are designed to set up enterprises to make data lakes, analytics and AI more seamless. "We'll continually optimize that query performance for you and the cost as your data lake scales," said Garman.

Garman's storage and data talk featured JPMorgan Chase CIO Lori Beer, who talked about how the bank is using AWS for its data infrastructure. The upshot is that AWS is aiming to enable its enterprise customers to set up data services for AI. "Our goal is to leverage genAI at scale," said Beer.

Here's the rundown of the storage and database enhancements at AWS.

  • S3 will automatically generate metadata as objects are stored. The service is in preview, and the metadata will be kept in managed Apache Iceberg tables. This move sets S3 up to improve inference workloads and data sharing with services like Bedrock.
  • Amazon S3 Tables will provide storage that's optimized for tabular data including daily purchase transactions, sensor data and other information. (A rough query sketch follows this list.)
  • AWS retooled its database engine. Aurora DSQL is designed to be the fastest distributed SQL database, handling management automatically and delivering low-latency reads and writes. Aurora DSQL is also Postgres compatible.
  • DynamoDB global tables will also get the same low-latency setup.
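Because S3 Tables store data as Apache Iceberg tables, the payoff is that tabular data can be queried with standard SQL engines. A rough sketch of that workflow using Athena via boto3 is below; the database, table and result-bucket names are hypothetical, and it assumes the table has already been registered so Athena can see it (the exact integration steps for S3 Tables may differ).

```python
# Sketch: query daily purchase transactions stored in an Iceberg table on S3
# through Athena. Resource names are placeholders.
import time
import boto3

athena = boto3.client("athena")

QUERY = """
    SELECT order_date, SUM(amount) AS daily_total
    FROM purchase_transactions
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 7
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "analytics_lake"},                 # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},    # hypothetical bucket
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```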

AWS scales up Trainium2 with UltraServer, touts Apple, Anthropic as customers

AWS launched new instances based on its Trainium2 processor, which offers four times the performance of the first-generation Trainium with twice the energy efficiency. AWS also prepped for larger training workloads with Trainium2 UltraServers that will be pooled into a cluster.

The cloud giant also set the table for AWS Trainium3.

Trainium2 provides more than 20.8 petaflops of FP8 compute, up from 6 petaflops with the first Trainium. EFA networking in Trainium2 is 3.2 Tbps, up from 1.6 Tbps and HBM is 1.5 TB, up from 512 GB. Memory bandwidth for Trainium2 is 45 TB/s, up from 9.8 TB/s.

AWS CEO Matt Garman said at the re:Invent 2024 keynote that Adobe, Poolside, Databricks, Qualcomm and Anthropic were among the companies working on Trainium2 instances.

Garman was also joined on stage by Apple, which is working with Trainium2 for training workloads. Garman said:

"Effectively, an UltraServer connects four Trainium2 instances, so 64 Trainium2 chips, all interconnected by the high-speed, low-latency neural link connectivity. This gives you a single ultra node with over 83 petaflops of compute from this single compute node. Now you can load one of these really large models all into a single node and deliver much better latency, much better performance for customers without having to break it up over multiple nodes."

AWS was early to custom silicon for training and inference and is looking to provide less expensive options than Nvidia. Apple is already using Trainium and Inferentia instances to power its own models, writing tools, Siri improvements and other additions.

The cloud giant's product cadence is designed in part to enable customers to easily shift training and inference workloads to optimize costs.

Garman said that AWS is stringing together Trainium instances in a cluster that will be able to provide compute for the largest models. Apple said it is currently evaluating Trainium2.

AWS' custom silicon strategy also revolves around Graviton instances as well as Inferentia. Customers on a panel at re:Invent highlighted how they were using AWS processors.

Although AWS has custom silicon, it is also rolling out instances based on other chips. Garman noted that AWS would launch new P6 instances based on Nvidia's Blackwell GPUs with availability in early 2025. AWS is also launching new AMD instances.

The bet is that AWS’ custom silicon will ultimately yield better price and performance for enterprise AI.

Constellation Research analyst Holger Mueller said:

"AWS is making good progress building more powerful combinations of its Trainum chips. It is showing that they have solved potential heating, electromagnetic interference and cooling issues. You can expect that Trainium will be scaled to 64 and potentially 128 chips per instance. But it all needs to be put in perspective as Google Cloud is on version 6 of its custom silicon. The announcement puts Amazon ahead of Microsoft." 


Zscaler plays platformization game amid strong Q1, mixed outlook for Q2

Jay Chaudhry, CEO of Zscaler, said the company's integration with competing cybersecurity platforms such as CrowdStrike, generative AI upsells and new executive additions will fuel growth in future quarters.

"In my scores of customer conversations, CXOs are prioritizing zero trust security and AI for their IT spending. We are fighting AI with AI. We recently delivered several AI innovations and are continuing to expand our AI portfolio," said Chaudhry.

The outlook for the second quarter, however, fell short of expectations.

Zscaler's first quarter results were better than expected. The company reported first quarter non-GAAP earnings of 77 cents a share on revenue of $628 million, up 26% from a year ago. Wall Street was expecting non-GAAP earnings of 63 cents a share on revenue of $605.55 million. On a GAAP basis, Zscaler reported a net loss of $12.1 million, or 8 cents a share.

As for the outlook, Zscaler said its second quarter revenue will be between $633 million and $635 million with non-GAAP earnings of 68 cents a share to 69 cents a share. Wall Street was expecting second quarter earnings of 70 cents a share on revenue of $633.8 million.

For fiscal 2025, Zscaler projected revenue of $2.62 billion to $2.64 billion, up from its prior range of $2.6 billion to $2.62 billion.

Chaudhry said Zscaler is focused on using AI to secure applications, enable enterprise usage of generative AI copilots for security and provide visibility across cloud and on-prem environments. He added that Zscaler was seeing larger deals due to genAI upsells with ZDX Copilot and automation.

Yes, Zscaler is playing the platformization game too, like Palo Alto Networks and CrowdStrike. Chaudhry said:

"To make up for the flawed architecture, legacy security vendors are offering disjointed point products under the pretext of a platform. This increases cost and complexity for customers. A Fortune 50 retail customer recently told me that a legacy firewall vendor sold them a so-called platform. And when they tried to implement it, they found that it was nothing more than consolidated billings. Complexity is the enemy of security and resilience. No wonder so many enterprises are getting breached despite spending billions of dollars on so-called SASE security, which is nothing more than virtual firewalls and VPNs in the cloud.

The sooner organizations move away from these disjointed security solutions to Zero Trust, the sooner they will become secure and resilient."

Chaudhry said the company's Chief Revenue Officer Mike Rich, a ServiceNow alum, has moved Zscaler to account-based selling and has improved the pipeline since joining a year ago. Zscaler has also ramped sales hiring and cut attrition.

The company also recently hired Adam Geller to be Chief Product Officer. Geller was previously at Exabeam and Palo Alto Networks.

Zscaler was also upbeat about securing Office 365 and Microsoft Copilot implementations.

So, what's the problem with the outlook? Zscaler said CIOs are still scrutinizing large deals. However, the guidance is likely to be a bit conservative. Chaudhry said:

"We are seeing interest in cyber that can really reduce the chance of ransomware attacks and the like. So that's where Zero Trust comes in. And then the CIO often will say: 'I like your cyber method. But if you can reduce my cost and complexity I'm doubly motivated.' We have combined the need for Zero Trust and now AI becomes a further catalyst with cost and complexity reduction, which is helping us because most companies can't do cost reduction."

 

 


AWS outlines new data center, server, cooling designs for AI workloads

Amazon Web Services said it will deploy simplified electrical and mechanical designs, liquid cooling, new rack designs and updated control systems to handle AI workloads sustainably.

The news, outlined at re:Invent 2024 in Las Vegas, landed ahead of CEO Matt Garman's keynote on Tuesday. AWS said the new flexible data center components will enable it to provide 12% more compute power while boosting availability and efficiency.

AWS, like other hyperscale data center operators, is revamping designs and offering custom silicon to handle AI workloads more efficiently and hit sustainability goals. AWS said the components will be modular and can be retrofitted into existing infrastructure. These additions will also support GPU-based servers, which will require liquid cooling.

Here's a look at the changes:

  • Simplified electrical distribution systems that minimize downtime and reduce the number of racks impacted by electrical issues by 89%. AWS said it has cut the number of failure points by 20%. AWS also brought backup power closer to the rack and reduced the number of fans.
  • AWS added configurable liquid-to-chip cooling in new and existing data centers. Updated systems will integrate air and liquid cooling for AI chips including AWS Trainium 2 and Nvidia GB200.
  • The company changed how it positions racks in a data center and optimized for high-density AI workloads. Software additions will predict the most efficient ways to place servers.
  • AWS is building out its control systems to standardize monitoring, alarms and operating tools.

As for sustainability, AWS said that it has been able to cut mechanical energy consumption by 46% with a 35% reduction in carbon used in concrete.


AWS re:Invent 2024: Four AWS customer vignettes with Merck, Capital One, Sprinklr, Goldman Sachs

AWS customers are increasingly focused on using cloud management approaches on-premises, optimizing GPU costs and modernizing mainframe infrastructure.

Those were some of the customer takeaways from AWS re:Invent 2024's first day. AWS' news flow starts in earnest on Tuesday so it's worth highlighting a few tales from the buy side today.

Merck on using cloud approaches on-prem

Merck's Jeff Feist, Executive Director, Hosting Solutions, is in charge of the pharma giant's cloud and on-premises environments. Feist said the company wants to simplify its hybrid infrastructure and lower total cost of ownership.

Feist also added that the company is focused on transformation with an effort called BlueSky.

"My role has been focusing on the landing zones, developing automated governance controls, making sure that we have a safe, secure and agile environment to leverage the benefits that cloud offers," said Feist. BlueSky includes the following:

  • Roll out infrastructure as code, automated deployments and APIs with software defined configurations.
  • Establish a culture that's agile. "It's probably more important than the technology itself," said Feist. "We need the culture of the company to embrace the model cloud way of working."
  • Training.
  • Focus on delivering business value. The company has modernized more than 2,000 applications with cloud native services. Merck retired more than 1,000 applications.

Going forward, Feist said the company is using AWS Outposts to bring cloud operating models to its on-premises environments. Feist said Merck is adopting a simpler management interface where AWS is responsible for maintenance.

In a nutshell, Feist is looking to make Merck's on-prem infrastructure run like the public cloud setup it has with AWS.

Capital One on optimization and tracking cloud costs

Ed Peters, Vice President, Distinguished Engineer at Capital One, leads an ongoing transformation to create a bank that can use data and insight to disrupt the industry. Capital One has been an AWS customer since 2016.

Capital One has adopted AWS and is focused on optimizing its infrastructure for cost. "We have a robust FinOps practice," said Peters. "We take the billing data and marry it up with the telemetry tracking and resource tagging information."

Peters said Capital One has saved millions of dollars with optimization. He said:

"We tag everything in our AWS cloud, down to billing units, individual teams, applications. I have a dashboard I can have access to that. I can tell you the monthly spend on any given application. We can drive very, very useful insight into the usage of the cloud, and we can focus our optimization on where it needs to be."

The company is also using Graviton to save money.

Going forward, Capital One is focused on generative AI workloads and building out an infrastructure that can be optimized and automated. Peters said Capital One is in a working group with AMD and Nvidia to optimize GPU workloads.

"We will continue to push forward in generative AI and its application in financial services," said Peters.

Capital One is also focused on transferring more of its workloads, including financial ledgers and business operations, to the cloud.

Sprinklr on benchmarking GPU costs, smaller models

Jamal Mazhar, Vice President of Engineering at Sprinklr, said the company invested in AI early and has been focused on scaling its data ingestion and processing in a cost-efficient way.

"We have thousands of servers and petabytes of data," he said.

As a result, Mazhar said Sprinklr has been focused on experimenting with instances that have a good cost ratio for compute and storage. Mazhar said his company has optimized on Graviton and is scaling its Elasticsearch workloads.

Mazhar said he has also been focused on smaller large language models and cutting GPU overhead. He said:

"A lot of times people use GPUs for AI workloads. But what we found out is that several of our inference models, which are very small in size, there's an overhead of using GPUs. For a smaller models, you can do quite well with compute intensive instances."

Mazhar said Sprinklr has been benchmarking its inference workloads. He added that the company has seen a 20% to 30% cost reduction. He said:

"When you try use a more expensive chip, you feel like you're going to get better performance. Just benchmarking the workload makes you realize that the GPU is not necessarily overkill. You're not using it properly."

Goldman Sachs: Modernizing mainframes

Victor Balta, Managing Director at Goldman Sachs, said the investment firm was focused on moving its mainframe software, which was licensed from FIS decades ago and heavily customized. FIS has said it won't support the mainframe version of the software, which underpins Goldman Sachs' InvestOne platform.

InvestOne is Goldman Sachs' investment book of record and sits in Goldman Sachs Asset Management, which oversees $2 trillion in assets. The mainframe architecture was costing more than $6 million a year in support and had limited scaling ability and integration.

Balta said Goldman Sachs created an emulator that would allow its COBOL-based system to run on AWS. Goldman Sachs also decoupled components such as data streaming, real-time integrations and batch processing to reduce costs.

"Currently we have a team of more than 20 global engineers supporting the platform," said Balta. "It's very expensive to run on mainframe with the complexity and integration. You don't have the same number of APIs or data connects to integrate with the mainframe. We're very limited on what we do. And sourcing high skilled COBOL engineers with that financial background is difficult."

Simply put, Goldman Sachs had 30 years of custom COBOL code. Rewriting it wasn't a possibility in a quick time frame so it decided to lift and shift with an emulator and go from there.

Going forward, Goldman Sachs Technology Fellow Yitz Loriner said the company will begin to reinvent its system so it can scale and create a new software development lifecycle.

"The emulator is just the first step because we wanted to reduce the blast radius of changing the infrastructure without changing the existing interface," said Balta. "It's a pragmatic approach."


Balancing Organizational & Technological Approaches to Trustworthy AI

Don't miss this interview between Manish Goyal of IBM Consulting and Constellation analyst Andy Thurai.💡 They discuss how IBM is helping clients build robust #AI governance capabilities by addressing organizational and technological aspects. Manish highlights the importance of governance structures like AI ethics boards and unpacks the need for integrated governance programs, continuous compliance monitoring, and advanced tools for visibility and #automation.

🔑 The key is aligning culture, processes, and #technology to unlock AI's potential while mitigating risks and building trust. Watch the full conversation below to learn more👇

Watch the interview: https://www.youtube.com/embed/xwHf8Nv4BmY?si=7aGYSiXr4xPUNbzc

Intel CEO Gelsinger out: 5 looming questions ahead

Intel said CEO Pat Gelsinger has retired effective Dec. 1 and will be replaced by two interim co-CEOs as the chipmaker tries to catch up to the AI age.

In a statement, Intel said Gelsinger is out and David Zinsner and Michelle (MJ) Johnston Holthaus will be interim co-CEOs. Intel has begun a search for a permanent CEO. Holthaus has been promoted to be CEO of Intel Products Group, which includes the company's client computing, data center and AI and network and edge units. Zinsner, CFO at Intel, joined the company in 2022 after being CFO at Micron Technology.

It's unclear who will be the new CEO of Intel, but one thing is certain--there will be a lot of work ahead. Here's a look at some of the big questions the new Intel leader will have to resolve.

Can Intel really recover? Intel has largely missed the AI turn and Nvidia has clearly run off with the spoils of the buildout. GPUs are taking over data centers and Intel has been slow to the starting line. In addition, AMD is now Nvidia's GPU competitor and is beating Intel in the data center with its EPYC franchise. In addition, Intel's Foundry unit is losing money at a rapid clip and can't compete with the scale of TSMC. Intel has a multi-year turnaround ahead in a chip industry that has moved to an annual cadence. Perhaps Gelsinger fixed enough to give Intel a shot. We'll see.

Intel has fallen so far that rumors swirled that Qualcomm could pick up the company for pocket change--assuming regulators would ever approve it. 

Will AI inference save the day? The news of Gelsinger's departure comes as AWS re:Invent kicks off and Intel has a bevy of sessions. Some of those sessions revolve around inference workloads. For Intel's CPU-heavy lineup to succeed, AI inference at the edge will have to assume more workloads. In 2025, those edge workloads should become more popular. Intel could become more AI relevant for inference workloads, but Qualcomm, AMD and Nvidia are all in the mix. Nvidia CEO Jensen Huang makes sure he references the company's inference business every quarter. 

Is Intel too important to fail? Intel has secured US government backing and just finalized $7.86 billion in CHIPS Act funding. Intel is getting this funding because it's one of the few players with manufacturing in the US. Intel used to be a US tech champion, but is now a limping former giant that's still strategically important. Boeing is in a similar spot. Intel will likely recover to some degree with government backing.

Can Intel's foundry business compete? Intel Foundry has more independence and key partnerships, but at the end of the day it has to compete with TSMC. It's fairly obvious that Intel Foundry is set up as a quasi-independent unit because its capital requirements could bring down the company as a whole.

Will technology buyers come back? Technology buyers--consumer and enterprise--haven't completely dumped Intel, but it's hard to ignore AMD's traction in the data center and Qualcomm's encroachment in the PC market. Back in the day, you couldn't get fired for buying Intel. Today, you might if you're betting on Intel for AI workloads over Nvidia. The idea that Intel is a lock in servers has also faded. ARM architecture is also dominant and Intel has few answers. Nostalgia doesn't count for much in an IT budget.


Oracle Database@AWS hits limited preview

Oracle said that customers can now access Oracle Database@AWS in limited preview. The limited availability landed just a few weeks after Oracle and AWS announced their partnership.

With the limited preview, enterprises can run Oracle Exadata Database Service on Oracle Cloud Infrastructure (OCI) in AWS. Availability starts in the AWS US East Region with an integrated and native experience. Oracle and AWS announced their partnership in September, and it is likely to have a significant presence at AWS re:Invent this week.

As previously noted, Oracle Database@AWS enables the database giant's customers to migrate Oracle workloads to the cloud with a low latency network connection to AWS applications. Oracle operates and manages the Oracle Exadata Database Service.

Constellation Research analyst Holger Mueller said:

"Oracle and AWS waste no time to make AWS Cloud the third public cloud to give customer the choice to build their Next Generation Applications on AWS with their data being in Oracle Database. The BYOL option may move some Oracle customers from on-premises to the cloud in 2025, faster than expected."

Oracle CTO Larry Ellison and AWS CEO Matt Garman have touted the partnership. For the companies, which have been cloud combatants, tighter integration is a win for many joint customers.

In September, State Street CTO Andy Zitney said the deal will be a win for his company, a big Oracle Exadata and AWS customer. "We were starting down the journey of starting to integrate the clouds, and this comes right at the perfect time to expedite that and make it easier for us," said Zitney. "It will help us accelerate our digital transformation."

Key items about Oracle Database@AWS include:

  • Simplified billing and administration as well as unified customer support.
  • Data connections that provide insights without building data pipelines.
  • Flexible options to migrate.
  • A procurement experience via AWS Marketplace. Customer usage of Oracle Database@AWS qualifies for existing AWS commitments and uses Oracle license benefits.
  • Reference architectures, landing zones and best practices.
  • The ability to unify data across Oracle Database@AWS and AWS services for generative AI applications.

Santa isn't bringing you an AI PC in 2024

That AI PC upgrade cycle, touted most of the year by the tech industry, is being delayed by companies and consumers.

Earnings results from Dell Technologies and HP indicate that a long overdue PC refresh cycle is going to be delayed.

Jeff Clarke, Dell's Chief Operating Officer, said during the company's third quarter earnings call:

"Enterprise demand was promising, though less than expected as we saw some demand push into future quarters. Profitability in the Commercial space held up well sequentially as customers continue to purchase more richly configured devices.

Our Consumer business was weaker-than-expected as demand and profitability remain challenged. The PC refresh cycle is pushing into next year, but has significant tailwinds around an aging install base, AI-driven hardware enhancements like battery life and Windows 10 end-of-life."

Dell CFO Yvonne McGill said it's not a case of if the AI PC refresh cycle will happen, but when. Clarke added that enterprises are holding off on PC purchases because they want futureproof laptops. Why be first in the AI PC upgrade cycle when specs will only improve?

Fortunately for Dell, the vendor can offset any AI PC hiccups with booming AI server sales. HP Inc. is clearly more tethered to the slow-motion PC upgrade cycle. Former sibling HPE has all the AI server momentum.

HP CEO Enrique Lores said commercial demand in its fiscal fourth quarter was solid, but consumers held back.

Here's a look at HP's Personal Systems fourth quarter results.

Lores said the company is betting that genAI features can boost PC sales. "Our expanded AI PC portfolio is now equipped with HP AI Companion, a bespoke application. The app uses generative AI to help analyze private files, create content or respond quickly to key tasks," he said.

HP Boost is a feature that allows data scientists to share GPUs remotely. Lores said AI PCs accounted for 15% of PC sales in the fourth quarter. HP's Personal Systems unit had revenue growth of 2% in the fourth quarter due to enterprise demand. Lores said:

"We saw continued pressure on commodity cost, which impacted operating profit. And we will continue to take actions on pricing and cost to mitigate this over time. We saw gains in worldwide PC market share year-over-year, particularly in high value categories, including commercial and consumer premium. We believe there is more opportunity here and we will continue to prioritize these categories."

HP remains upbeat about the AI PC upgrade cycle and higher average selling prices. Lores added that in three years, AI PCs will be 40% to 60% of HP's personal systems volume.

"We continue to have an aged installed base that needs to be refreshed, which has been driving the growth that we have seen in Q4," said Lores. "The mix of AI PCs will continue to grow, which also is going to create a tailwind for the business."

Constellation Research analyst Holger Mueller said:

"HP Is practically standing still, keeping its position, potentially even going backwards when adjusting for inflation. PC markets have not recovered, and all eyes will now be on the AI PC carrying a strong Q4 with consumers – or not. It certainly has not spurred an upgrade flurry for enterprise PCs."

There are green shoots for the AI PC cycle. Best Buy said laptop sales grew 7% in the third quarter and consumers are showing interest in upgrading and replacing laptops.

Best Buy's Jason Bonfig, senior executive vice president of customer offerings and fulfillment, said:

"We're excited to what's going to happen in the future with AI. We think it's a phased approach. There'll be new features in AI across all the different platforms. And it's not just Microsoft, it's obviously Apple and Google are there as well. But right now, we do think the biggest thing that's driving is really that upgrade and replacement. And that will probably continue into next year as we think about the end of life support of Windows 10 that happens in October of 2025."

Just another year to go.
