Results

DeepSeek's real legacy: Shifting the AI conversation to returns, value, edge

The legacy of DeepSeek will have little to do with the engineering and performance of the model itself. The real impact of DeepSeek is that it has shifted the AI workload conversation from hardware and GPUs to efficiency, price-performance and the application layer.

DeepSeek turned up repeatedly on earnings conference calls, raising questions about whether it makes sense to spend so much on AI infrastructure. It's a valid question that's still too early to answer.

DeepSeek: What CxOs and enterprises need to know | GenAI prices to tank: Here’s why

Tech giants did their best to justify the AI spend. After all, more efficient models could mean enterprises spend less on infrastructure. Nevertheless, Alphabet will spend $75 billion in 2025 on capital expenditures. Microsoft said it will spend $80 billion on AI data centers. Meta is planning to spend $60 billion to $65 billion on AI in 2025 and end the year with 1.3 million GPUs. Amazon’s capital expenditures, which are on a run rate of $105 billion a year, also include distribution centers, supply chain improvements and technology.

The high-level takeaways from tech giants go like this:

  • DeepSeek is evidence that foundational models are commoditizing. "I think one of the obvious lessons of DeepSeek R1 is something that we've been saying for the last two years, which is that the models are commoditizing. Yes, they're getting better across both closed and open, but they're also getting more similar and the price of inference is dropping like a rock," said Palantir CTO Shyam Sankar.
  • That commoditization doesn't mean that there isn't a need for more infrastructure--at least initially.
  • Hyperscalers are watching cheaper models and how they combine with their custom silicon. It's unclear what model commoditization and a shift in focus away from training will mean for Nvidia.
  • Cheaper models are moving value toward applications and use cases. AI usage will surge as will edge computing use cases. Enterprises will have a much easier time infusing applications with AI.

The reality is that DeepSeek is an advance that shifted the conversation to optimization and LLM pricing, but the model needs some work relative to other options. We saw DeepSeek put through a rubric on AWS Bedrock and AWS SageMaker against other models, and the performance was spotty. There were times DeepSeek went into a never-ending loop. DeepSeek may be good enough for some use cases, but in many areas it was meh. Nevertheless, DeepSeek plays into the AWS strategy of offering multiple models.


Amazon CEO Andy Jassy said on the company’s fourth quarter earnings call that the AWS launch of Amazon Nova at re:Invent, DeepSeek and LLM choices give enterprises “a plethora of new models and features in Amazon Bedrock that give customers flexibility and cost savings.”

In the end, DeepSeek has been a great way to pivot the conversation to cheaper AI with a dash of a China vs. US AI war.

Jassy continued:

“We were impressed with what DeepSeek has done with some of the training techniques, primarily in flipping the sequencing of reinforcement training, reinforcement learning being earlier and without the human in the loop. We thought that was interesting, ahead of the supervised fine tuning. We also thought some of the inference optimizations they did were also quite interesting. Virtually all the big generative AI apps are going to use multiple model types. Different customers are going to use different models for different types of workloads. The cost of inference will substantially come down.”

Alphabet CEO Sundar Pichai said AI workloads and the foundational models underneath them will have to adhere to the Pareto frontier, which is a set of optimal solutions that balance multiple objectives of a complex system.
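
Pichai's Pareto frontier framing can be made concrete with a small sketch. The (cost, quality) figures below are hypothetical, purely for illustration; a model sits on the frontier when no other model is both cheaper and better.

```python
# Hypothetical (cost per 1M tokens in $, quality score) pairs -- not real benchmarks.
models = {
    "A": (10.0, 70.0),
    "B": (3.0, 65.0),
    "C": (8.0, 72.0),
    "D": (3.5, 60.0),  # dominated by B: costs more, scores lower
    "E": (1.0, 55.0),
}

def pareto_frontier(points):
    """Return model names not dominated on (lower cost, higher quality)."""
    frontier = []
    for name, (cost, qual) in points.items():
        dominated = any(
            c <= cost and q >= qual and (c < cost or q > qual)
            for other, (c, q) in points.items() if other != name
        )
        if not dominated:
            frontier.append(name)
    return sorted(frontier)

print(pareto_frontier(models))  # ['B', 'C', 'E'] -- A and D are dominated
```

In this framing, price cuts like DeepSeek's don't just win on cost; they push the whole frontier outward and force every model above it to justify its premium.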

With Google Cloud now capacity constrained due to AI demand, Alphabet has no choice but to spend heavily on infrastructure. Microsoft took the plunge and noted that it can meet future AI demand due to its data center additions.

Nevertheless, it's worth highlighting what Pichai had to say. In a nutshell, scaling AI infrastructure and the commoditization of large language models aren't completely in conflict.

Pichai credited DeepSeek for its advances and said Gemini ranks well on price, performance and latency, and all three matter for use cases. He added:

"You can drive a lot of efficiency to serve these models really well. I think a lot of it is our strength of full stack development and end-to-end optimization, our obsession with cost per query, all of that, I think, sets us up well for the workloads ahead, both to serve billions of users across our products and on the cloud side.

If you look at the trajectory over the past 3 years, the proportion of the spend toward inference compared to training has been increasing, which is good because obviously inference is to support businesses with good ROIC. The reasoning models, if anything, accelerate that trend because it's obviously scaling up on the inference dimension as well.

The reason we are so excited about the AI opportunity is we know we can drive extraordinary use cases because the cost of actually using it is going to keep coming down, which will make more use cases feasible. And that's the opportunity space. It's as big as it comes. And that's why you're seeing us invest to meet that moment."

Microsoft CEO Satya Nadella had a similar take. He said on Microsoft’s earnings call:

"What's happening with AI is no different than what was happening with the regular compute cycle. It's always about bending the curve and then putting more points up the curve. There are the AI scaling laws, both the pre-training and the inference time compute that compound and that's all software."

Nadella said DeepSeek is one data point showing that models are being commoditized and broadly used. Software customers will benefit. Given Nadella runs Microsoft Azure, DeepSeek is just fine by him.

Meta CEO Mark Zuckerberg said it's too early to know what DeepSeek means for infrastructure spending. "There are a bunch of trends that are happening here all at once. There's already sort of a debate around how much of the compute infrastructure that we're using is going to go towards pretraining versus as you get more of these reasoning time models or reasoning models where you get more of the intelligence by putting more of the compute into inference, whether just will shift how we use our compute infrastructure towards that," said Zuckerberg.

Just because models become less expensive doesn't mean the demand for compute changes, said Zuckerberg. "One of the new properties that's emerged is the ability to apply more compute at inference time in order to generate a higher level of intelligence and a higher quality of service," said Zuckerberg. "I continue to think that investing very heavily in CapEx and infra is going to be a strategic advantage over time. It's possible that we'll learn otherwise at some point, but I just think it's way too early to call that."

And about that edge computing hook…

The DeepSeek-inference-lower cost AI discussion has also highlighted how edge devices--PCs, smartphones, Project Digits and more--are going to be a larger part of the AI inference mix. Here's what Arm CEO Rene Haas said on the company's third quarter earnings call:

"DeepSeek is great for the industry, because it drives efficiency, it lowers the cost. It expands the demand for overall compute. When you think about the application to Arm, given the fact that AI workloads will need to run everywhere and lower-cost inference, a more efficient inference makes it easier to run these applications in areas where power is constrained. As wonderful a product as Grace Blackwell is, you'd never be able to put it in a cell phone, you'd never be able to put it into earbuds, you can't even put it into a car. But Arm is in all those places. I think when you drive down the overall cost of inference, it's great."

Haas also added that the industry will still need some serious compute so the AI buildout will continue. "We're nowhere near the capabilities that could be transformational in terms of what AI can do," he said.

Qualcomm CEO Cristiano Amon said DeepSeek illustrates how AI will play into edge use cases. Amon said:

"We also remain very optimistic about the growing edge AI opportunity across our business, particularly as we see the next cycle of AI innovation and scale. DeepSeek-R1 and other similar models recently demonstrated the AI models are developing faster, becoming smaller, more capable and efficient, and now able to run directly on device. In fact, DeepSeek-R1 distilled models were running on Snapdragon powered smartphones and PCs within just a few days of its release.

As we entered the era of AI inference, we expect that while training will continue in the cloud, inference will run increasingly on-device, making AI more accessible, customizable, and efficient. This will encourage the development of more targeted, purpose-oriented models and applications."


AWS revenue up 19% in Q4, Amazon results shine

Amazon Web Services revenue growth checked in at 19% in the fourth quarter as parent Amazon handily topped estimates. Amazon's outlook, however, was mixed.

AWS reported operating income of $10.6 billion in the fourth quarter on revenue of $28.8 billion as the annual run rate topped $115 billion. AWS also grew at a 19% clip in the third quarter. AWS held re:Invent in the fourth quarter, where it outlined its AI strategy.

Rivals Microsoft Azure and Google Cloud showed revenue growth just above 30%, but are working off of a lower base. Microsoft Azure and Google Cloud growth also decelerated sequentially. Amazon CEO Andy Jassy said:

“When we look back on this quarter several years from now, I suspect what we’ll most remember is the remarkable innovation delivered across all of our businesses, none more so than in AWS where we introduced our new Trainium2 AI chip, our own foundation models in Amazon Nova, a plethora of new models and features in Amazon Bedrock that give customers flexibility and cost savings, liberating transformations in Amazon Q to migrate from old platforms, and the next edition of Amazon SageMaker to pull data, analytics, and AI together more concertedly.”

Hyperscale results:

Amazon reported fourth quarter earnings of $20 billion, or $1.86 a share, on revenue of $187.8 billion, up 10% from a year ago. Wall Street was expecting Amazon to report fourth quarter earnings of $1.48 a share on revenue of $187.23 billion.

Here's the breakdown by unit:

  • Amazon North America operating income in the fourth quarter was $9.3 billion on revenue of $115.6 billion, up 10% from a year ago.
  • Amazon international reported operating income of $1.3 billion on revenue of $43.4 billion, up 9%.
  • AWS delivered the most operating income for the company.

For 2024, Amazon reported net income of $59.2 billion, or $5.53 a share, on revenue of $638 billion, up 11%. AWS reported operating income of $39.8 billion on revenue of $107.6 billion.

As for the outlook, Amazon projected first quarter revenue between $151 billion and $155.5 billion, up 5% to 9% from a year ago. Amazon said it will take about a $2.1 billion hit from foreign exchange rates. Operating income will be between $14 billion and $18 billion in the first quarter.

One of the big questions was how much AWS would spend on building its AI infrastructure. It’s also worth noting that Amazon’s capital expenditures, which are on a run rate of more than $105 billion a year and were $26.3 billion in the fourth quarter, also include distribution centers, supply chain improvements and technology. Hyperscale cloud players’ AI buildout has been questioned in light of DeepSeek, a lower-cost model from China. Amazon spent $23.6 billion on technology and infrastructure in the fourth quarter.


Nevertheless, cloud providers said they’ll keep spending heavily on AI infrastructure. Alphabet will spend $75 billion in 2025 on capital expenditures. Microsoft said it will spend $80 billion on AI data centers. Meta is planning to spend $60 billion to $65 billion on AI in 2025 and end the year with 1.3 million GPUs. Project Stargate will spend $500 billion on US AI infrastructure.

Here's what Jassy had to say on the call:

  • Jassy said the AWS build out needs to continue. "We could be growing faster if not for some of the constraints on capacity," he said. "A lot of that comes from power constraints."
  • "We were impressed with what DeepSeek has done with some of the training techniques, primarily in flipping the sequencing of reinforcement training, reinforcement learning being earlier and without the human in the loop. We thought that was interesting, ahead of the supervised fine tuning. We also thought some of the inference optimizations they did were also quite interesting."
  • "Virtually all the big generative AI apps are going to use multiple model types and different customers are going to use different models for different types of workloads. You're going to see us provide as many leading frontier models as possible for customers to choose from."
  • "Sometimes people make the assumptions that if you're able to decrease the cost of any type of technology component that it's going to lead to less total spend in technology. In this case, we're really talking about inference. But we've never seen that to be the case." 
  • "The cost of inference will substantially come down. I think it will make it much easier for companies to be able to infuse all their applications with inference and with generative AI."
  • Amazon saw a $700 million headwind from foreign exchange rates, more than anticipated. 
  • Third party sellers were 61% of items sold in 2024.
  • Same day delivery serves 140 metro areas. 
  • "We also remain squarely focused on costs to serve in our fulfillment network, which has been a meaningful driver of our increased operating income. We talked about the regionalization of our US network. We've also recently rolled out our redesigned us inbound network, while still in its early stages, our inbound efforts have improved our placement of inventory so that even more items are closer to end customers."
  • "We've reduced our global cost to serve on a per unit basis for the second year in a row, while at the same time increasing speed, improving safety and adding selection. We see opportunity to reduce costs again, as we further refine inventory placement, grow our same day delivery network, accelerate robotics and automation throughout the network."
  • Amazon ad revenue was $17.3 billion in the fourth quarter, up 18% from a year ago. 
  • AWS growth will be lumpy over the next few years due to enterprise adoption cycles, capacity and advances, but Jassy said "it's hard to overstate how optimistic we are about what lies ahead for AWS customers and business."
  • Enterprises including Databricks, Adobe and Qualcomm are testing Trainium2 now. Trainium3 will be in preview in late 2025.
  • Thousands of AWS customers are using Amazon Nova models including Palantir, SAP, Fortinet and Robinhood. 
  • "While AI continues to be a compelling new driver in the business, we haven't lost our focus on core modernization of companies' technology infrastructure from on-premises to the cloud."


Qualcomm, Arm cheer cheaper models, AI inference at the edge

CEOs from Qualcomm and Arm say that AI inferencing will increasingly happen at the edge in multiple devices as large language models become more efficient and need less compute.

The buzz around DeepSeek's models and the ensuing discussion about how much AI infrastructure is necessary has given edge computing--where AI inference is likely to happen--more play.

Keep in mind that Qualcomm and Arm have a vested interest in this edge AI game, but the comments from the companies are notable.

The DeepSeek-inference-lower cost AI discussion has also highlighted how edge devices--PCs, smartphones, Project Digits and more--are going to be a larger part of the AI inference mix. Here's what Arm CEO Rene Haas said on the company's third quarter earnings call:

"DeepSeek is great for the industry, because it drives efficiency, it lowers the cost. It expands the demand for overall compute. When you think about the application to Arm, given the fact that AI workloads will need to run everywhere and lower-cost inference, a more efficient inference makes it easier to run these applications in areas where power is constrained. As wonderful a product as Grace Blackwell is, you'd never be able to put it in a cell phone, you'd never be able to put it into earbuds, you can't even put it into a car. But Arm is in all those places. I think when you drive down the overall cost of inference, it's great."


Haas also added that the industry will still need some serious compute so the AI buildout will continue. "We're nowhere near the capabilities that could be transformational in terms of what AI can do," he said.

Qualcomm CEO Cristiano Amon said DeepSeek illustrates how AI will play into edge use cases. Amon said:

"We also remain very optimistic about the growing edge AI opportunity across our business, particularly as we see the next cycle of AI innovation and scale. DeepSeek-R1 and other similar models recently demonstrated the AI models are developing faster, becoming smaller, more capable and efficient, and now able to run directly on device. In fact, DeepSeek-R1 distilled models were running on Snapdragon powered smartphones and PCs within just a few days of its release.

As we entered the era of AI inference, we expect that while training will continue in the cloud, inference will run increasingly on-device, making AI more accessible, customizable, and efficient. This will encourage the development of more targeted, purpose-oriented models and applications."

The visions of Qualcomm and Arm are very similar when it comes to AI at the edge. Both companies are in data centers, smartphones, PCs, Internet of things endpoints and multiple edge devices such as automobiles. Qualcomm designs processors and markets its dominant Snapdragon platform. Arm licenses its designs to the industry.

Qualcomm's first quarter

Qualcomm's IoT business, which includes PCs, tablets, edge networking and extended reality devices, grew at a rapid clip in the first quarter and generated revenue of $1.55 billion, up 36% from a year ago. That business is dwarfed by Qualcomm's handset business, but it's headed in the right direction.

The company reported strong first quarter results with earnings of $3.18 billion, or $2.83 a share, on revenue of $11.67 billion, up 17% from a year ago. Non-GAAP earnings were $3.41 a share, well ahead of Wall Street estimates.

Amon noted the company is diversifying its business and expanding into industrial IoT, auto and PCs. Qualcomm projected second quarter revenue between $10.3 billion and $11.2 billion with non-GAAP earnings of $2.70 to $2.90 per share.

Qualcomm said it expects strong growth in PCs along with more enterprise traction, auto and industrial IoT. More: AI PCs may decentralize inferencing workloads | Physical AI, world foundation models will move to forefront

Constellation Research analyst Holger Mueller said:

"All Qualcomm segments have been growing nicely. Qualcomm will have to keep executing in the same direction for the quarter, with concerns about its licensing business showing pedestrian growth, close to inflation. Investors care about this revenue stream as it is highly profitable for Qualcomm. Credit goes to CEO Amon for having transformed Qualcomm into a high-tech manufacturer."

Arm's third quarter

Arm reported third quarter net income of $252 million, or 24 cents a share, on revenue of $983 million, up 19% from a year ago. Non-GAAP earnings were 39 cents a share. The results were better than expected.

The outlook from Arm was in line with expectations.

Arm projected fourth quarter revenue between $1.17 billion and $1.27 billion with non-GAAP earnings between 48 cents and 56 cents a share. For fiscal 2025, Arm is projecting revenue of $3.94 billion to $4.04 billion with non-GAAP earnings of $1.56 a share to $1.64 a share.

On a conference call, Haas touted Arm's role in Project Stargate and Nvidia's various efforts.

"We strongly believe that the advances in AI, both for training and inference, are going to increase the demand for compute in the AI Cloud," said Haas. "We expect Arm solutions to address the needs from the cloud to the edge to power growth in the world's most popular compute ecosystem for decades to come."



Project Stargate, Rental Market AI, Conversational Intelligence | ConstellationTV Episode 97

📺 ConstellationTV ep. 97 is here! Co-hosts Liz Miller and Holger Mueller give a #technology news roundup, covering the "Project Stargate" announcement, analysis of ServiceNow acquisitions, and the role of #AI in content management and customer service.

Next, Larry Dignan interviews Marcus Räder, CEO of Hostaway, about rental property management software, the unique challenges of the SMB vacation rental market, and the need for AI-powered efficiency.

Finally, Liz highlights her new ShortList, Conversational Intelligence and AI-powered Customer Service Solutions, which will be released in Q1 2025 and spotlights leading vendors in the CI space.

00:00 - Meet the Hosts
01:19 - #Enterprise Tech News
17:50 - Interview with Marcus Räder, CEO of Hostaway
31:54 - ShortList Highlight
38:27 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts, tune in live at 9:00 a.m. PT/ 12:00 p.m. ET every other Wednesday!


Uber outlines its autonomous vehicle plan

Uber addressed long-running concerns about its autonomous vehicle strategy with a plan that revolves around partnering and leveraging its data platform to manage workloads and rides.

However, Uber noted that autonomous vehicles (AVs) need a lot of things to go right to scale adequately. The biggest hurdle is cost: AV rides need to drop below what it costs a human driver to shuttle you from place to place.

Speaking on the company's fourth quarter earnings conference call, Uber CEO Dara Khosrowshahi said AV costs--including hardware such as the vehicle and sensor kit, operating costs such as compute, storage and maintenance, and fleet management--run more than $2 million a mile. Those costs don't include demand gen, marketing, payments and cost to serve. Human rides cost about $2 a mile.
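
A back-of-the-envelope version of that break-even math can be sketched as follows. Every input below is a hypothetical assumption for illustration; Uber hasn't published a line-item per-mile breakdown.

```python
# Sketch of per-mile AV economics; all inputs are illustrative assumptions.
def av_cost_per_mile(vehicle_cost, sensor_kit, lifetime_miles,
                     ops_per_mile, fleet_mgmt_per_mile):
    """Amortize hardware over lifetime miles, then add per-mile operating costs."""
    hardware_per_mile = (vehicle_cost + sensor_kit) / lifetime_miles
    return hardware_per_mile + ops_per_mile + fleet_mgmt_per_mile

# e.g. a $50,000 vehicle with a $150,000 sensor retrofit amortized over 300,000 miles
av = av_cost_per_mile(50_000, 150_000, 300_000,
                      ops_per_mile=1.00, fleet_mgmt_per_mile=0.80)
human = 2.00  # rough per-mile cost of a human-driven ride cited above
print(f"AV: ${av:.2f}/mile vs human: ${human:.2f}/mile")
```

Under these assumed inputs the AV ride still costs more per mile than the human benchmark, which is the gap Khosrowshahi argues must close before AVs scale.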

Khosrowshahi said 2024 was a turning point for AV technology as Waymo, WeRide, Pony and Baidu made self-driving rides available to the public. Other players are likely to follow. The big question for Uber was whether it would get disrupted. Khosrowshahi outlined many threads worth considering:

"Even as we see AV technology advancing, we expect AV commercialization will take significantly longer. Several pieces of the go-to-market puzzle still need to come together, including: a consistently super-human safety record; enabling regulations; a cost-effective, scaled hardware platform; excellent on-the-ground operations; and a high-utilization network that can manage variable demand with flexible supply.

Every one of these five pieces must work in concert, or the puzzle falls apart. For example, even the lowest-cost AV fleet will struggle to generate revenue if its vehicles are not highly utilized. And even a well-utilized but fixed fleet will struggle to meet consumer demand at peak times."

Uber obviously sees its role at the application and data layer of that AV stack. The plan is simple and on brand: Leverage its data platform and AI to manage rides, supply and demand and continually optimize.

Khosrowshahi made the following points about the moving parts of AV deployment:

  • Safety will have to surpass what humans can do for mass adoption. He said Waymo is a safety leader with its transparency, but multiple tech platforms and approaches will create more risks.
  • Regulations are fragmented across states and a national framework in the US for AV testing and deployment would be welcome. International markets such as Abu Dhabi may move faster.
  • The software stack from auto OEMs has been slow. Without OEMs, it will be hard to scale AVs. AV companies have retrofitted traditional vehicles with sensors, which means costs of more than $200,000 per vehicle. Costs need to come down. It is likely that all new vehicles will be sold with L4-capable software, and that will increase supply.
  • Fleet management and operations. Uber said an average AV can run as much as 100,000 miles a year. That scale means charging and service costs will increase. Uber sees a role in managing cleaning, parking and operational issues such as fare disputes, stranded vehicle rescue and insurance claim resolutions.
  • Utilization. If all of the pieces of the AV puzzle fall into place, AV operators will need to manage supply and demand. Ride demand patterns fluctuate, and AV vendors will struggle to meet weekly peaks and manage downtime.
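
The utilization point above is the crux: a fixed AV fleet sized for peak demand sits idle the rest of the day. A toy calculation with a hypothetical demand curve shows the gap:

```python
# Hypothetical hourly ride demand (rides per hour) -- illustrative numbers only.
demand = [20, 10, 10, 15, 40, 90, 120, 60, 50, 55, 100, 140, 80, 30]
rides_per_vehicle_hour = 3  # assumed throughput per AV

# Fleet needed for average demand vs the peak hour.
avg_fleet = round(sum(demand) / len(demand) / rides_per_vehicle_hour)
peak_fleet = max(demand) // rides_per_vehicle_hour

# A fleet sized for the peak serves all demand but runs well under half full on average.
utilization = sum(demand) / (len(demand) * peak_fleet * rides_per_vehicle_hour)
print(avg_fleet, peak_fleet, f"{utilization:.0%}")
```

In this toy example, a peak-sized fleet of 46 vehicles runs at roughly 42% utilization, versus about 20 vehicles needed for average demand. That idle capacity is the gap Uber argues a hybrid network closes: human drivers absorb the peaks so AV fleets can be sized closer to average demand.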

"Given the scale of the Uber platform, and human drivers’ ability to dynamically fulfill demand spikes—and take a break during demand troughs—partnering with Uber allows AV players to move much faster than they could on their own. This fact gives us confidence that the Uber network, with a hybrid of AV and human drivers, will deliver the highest asset utilization and revenue generation opportunity for our partners," said Khosrowshahi. "We are spending an enormous, yet appropriate, amount of organizational energy to execute on our AV strategy. We will have much more to share through the year, starting with our public launch with Waymo in Austin next month and Atlanta this summer."

In other words, Uber's plan is to plug into those AV networks just as it does for human drivers, and then add support, technology and payments to the stack.

While Uber's AV chat was the most notable, the company's fourth quarter results were solid. The company reported fourth quarter earnings of $6.9 billion, including a tax valuation release and unrealized gains from investments, on revenue of $12 billion, up 20% from a year ago.

Gross bookings in the fourth quarter were up 18% to $44.2 billion. Uber delivered 3.1 billion trips in the fourth quarter, or 33 million trips per day.

For the first quarter, Uber projected gross bookings growth of 17% to 21%.


SAP launches SAP ERP, private edition, transition option: What you need to know

SAP made it official and granted SAP customers an option to extend the move to SAP Cloud from on-premise ERP by three years. But there are a few wrinkles you need to know.

The company, which outlined the extension on its fourth quarter earnings call, said the formal SAP ERP, private edition, transition option rollout will come in the second quarter.

For SAP, the offering is designed to help large complex enterprises that weren't going to meet a 2030 deadline. Key points about SAP ERP, private edition, transition option include:

  • The offering is a new cloud subscription that revolves around SAP ECC and a set of services focused on moving to SAP Cloud ERP.
  • SAP ERP, private edition, transition option will support enterprises with patches for security, legal and software issues. The subscription will be available in 2028 and be active to use from 2031 to 2033.
  • SAP ERP, private edition, transition option is an additional subscription. Customers that can complete the move to SAP Cloud by the end of 2030 won't need it.
  • While SAP ERP, private edition, transition option is centered on SAP ECC it won't include all of the features in SAP Business Suite 7, which will be available until the end of 2030.
  • SAP HANA is the only supported database for SAP ERP, private edition, transition option. Older versions of Java and third-party technology aren't supported.
  • SAP said SAP ERP, private edition, transition option is not an extension of maintenance. Nothing changes for customers running on-premises SAP ERP systems after 2030.

Got all of that? Now you know why SAP is disclosing SAP ERP, private edition, transition option early. There are a lot of moving parts and SAP said more details will roll out closer to the 2028 availability.


Constellation Research analyst Holger Mueller said the extension doesn't mean SAP customers can coast. Mueller said:

"SAP did not manage to keep the hard deadline of 2027, and is now extending the offering to its customers by three years – but with a catch – all customers join the RISE program. With this move, SAP now can show higher adoption numbers for the years to come.

For customers it is good as they gain more time. But they cannot sit and let the year go by – as they have in the past – but must move to HANA, clean up the tech stack, and make sure they are licensed.

Even with three more years available – it is time for SAP customer CxOs to get going with their migration towards S/4HANA – probably best for the public cloud edition. The big variable for 2025 is going to be success and adoption of SAP Business AI. If SAP creates enough value, customers will be self-motivated to move to S/4HANA as soon and as fast as they can. So, 2025 is a critical year for the SAP customers, SAP and the SAP ecosystem. All eyes on the value and uptake of the AI offerings (that are gated to the public cloud)."

Indeed, SAP is preparing to launch a big agentic AI push as part of its SAP Business AI strategy. By launching new innovation, SAP is hoping to woo customers still holding out. SAP said it is positioning Joule as the new UI for its software.

SAP's German-speaking user group, DSAG, commented on SAP ERP, private edition, transition option when reports surfaced last month. DSAG said it generally welcomed the program and the extension, but added that SAP was obviously trying to convert customers to RISE and was still forcing a switch to the cloud.

DSAG: SAP's innovation focus on cloud, discriminates against on-premise users

A translation of DSAG's German statement indicated the group wanted more flexibility in the program, but was constructive on the move.

DSAG is likely to have more comment when it reviews the additional details.


AMD data center business continues to shine in Q4

AMD reported better-than-expected fourth quarter results as its data center revenue was $3.9 billion, up 69% from a year ago due to GPU and server chip demand.

The chipmaker reported fourth quarter earnings of $482 million, or 29 cents a share, on revenue of $7.66 billion, up 24% from a year ago. Non-GAAP earnings for the quarter were $1.09 a share.

Wall Street was expecting AMD to report fourth quarter non-GAAP earnings of $1.09 a share on revenue of $7.53 billion.

For the year, AMD reported earnings of $1.64 billion, or $1 a share, on revenue of $25.78 billion.

As for the outlook, AMD projected first quarter revenue of $7.1 billion, give or take $300 million. At the midpoint, AMD is projecting revenue growth of about 30% from a year ago.
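As a quick sanity check on the guidance math: a $7.1 billion midpoint growing roughly 30% year over year implies a year-ago quarter of about $5.5 billion. A minimal sketch using only the figures stated above (the year-ago base is derived here, not reported):

```python
# Sanity-check AMD's Q1 guidance: a $7.1B midpoint at ~30% growth
# implies a year-ago quarter of roughly $5.5B (derived, not reported).
midpoint = 7.1                                # $B, guidance midpoint
low, high = midpoint - 0.3, midpoint + 0.3    # "give or take $300 million"
growth = 0.30                                 # ~30% year over year at the midpoint

implied_year_ago = midpoint / (1 + growth)
print(f"Guidance range: ${low:.1f}B to ${high:.1f}B")
print(f"Implied year-ago revenue: ${implied_year_ago:.2f}B")
```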

AMD CEO Lisa Su said 2024 was transformative and the data center business is strong. "Data Center segment annual revenue nearly doubled as EPYC processor adoption accelerated and we delivered more than $5 billion of AMD Instinct accelerator revenue. Looking into 2025, we see clear opportunities for continued growth," said Su.

"AMD is growing in its transition phase, where datacenter is the star," said Constellation Research analyst Holger Mueller. "Overall the numbers work for Lisa Su and team, but investors expect AI turbocharged growth coming from the data center segment. AMD needs to pull off a few EPYC (pun intended) moves in 2025 before investors will be really happy."

By the numbers:

  • AMD reported fourth quarter data center revenue of $3.9 billion due to "the strong ramp of AMD Instinct GPU shipments and growth in AMD EPYC CPU sales." For 2024, AMD reported data center revenue of $12.6 billion, up 94%. The data center unit drove AMD's operating income gains for the fourth quarter and year.
  • Client revenue was $2.3 billion, up 58% due to strong demand for Ryzen processors. For 2024, AMD's PC chip business revenue was $7.1 billion, up 52% from a year ago.
  • Gaming revenue was $563 million, down 59% from a year ago. Annual revenue for gaming was $2.6 billion.
  • Embedded revenue was down 13% from a year ago in the fourth quarter at $923 million. For 2024, embedded revenue was $3.6 billion, down 33% from a year ago.


Google Cloud revenue up 30% in Q4, Alphabet results mixed

Google Cloud revenue in the fourth quarter was $12 billion, up 30% from a year ago, amid mixed results from parent Alphabet, which said it will spend $75 billion in capital expenditures in 2025. The company also said its Google Cloud business was capacity constrained.

The company reported net income of $26.54 billion, or $2.15 a share, on revenue of $96.47 billion, up 12%. Wall Street was expecting Alphabet to report fourth quarter earnings of $2.13 a share on revenue of $96.67 billion.

For 2024, Alphabet reported earnings of $100.1 billion, or $8.04 a share, on revenue of $350.02 billion, up 14%.

Sundar Pichai, CEO of Alphabet, said:

"We are building, testing, and launching products and models faster than ever, and making significant progress in compute and driving efficiencies. In Search, advances like AI Overviews and Circle to Search are increasing user engagement. Our AI-powered Google Cloud portfolio is seeing stronger customer demand."

By the numbers:

  • Google Cloud revenue in the fourth quarter was $11.95 billion, up 30% from $9.19 billion a year ago. Microsoft Azure put up quarterly revenue growth of 31%. Operating income for Google Cloud was $2.09 billion, up from $864 million in the fourth quarter a year ago.
  • Google search revenue in the fourth quarter was $54.03 billion.
  • Google advertising revenue was $72.46 billion.
  • Google services operating income was $32.84 billion and that unit includes the Gemini team.
  • Alphabet ended the quarter with 183,323 employees.

Anat Ashkenazi, Alphabet CFO, said: 

"We do see and have been seeing very strong demand for AI products in the fourth quarter in 2024. We exited the year with more demand than we had available capacity. So we are in a tight supply demand situation, working very hard to bring more capacity online. As I mentioned, we've increased investment in CapEx in 2024, continue to increase in 2025, and will bring more capacity throughout the year."

Pichai said the following on the company's earnings call:

  • "Cloud customers consume more than eight times the compute capacity for training and inferencing compared to 18 months ago."
  • "We are working on even better thinking models and look forward to sharing those with the developer community soon. We're also excited by the progress of our video and image generation models."
  • 4.4 million developers are using Gemini models, double from six months ago.
  • AI Overviews are now available in more than 100 countries. Circle to Search is now available on over 200 million Android devices.
  • "Those who have tried Circle to Search before now use it to start more than 10% of their searches."
  • The number of first-time commitments to Google Cloud doubled in 2024, and the company closed more strategic deals over $1 billion and doubled the number of deals over $250 million.
  • Vertex AI saw a 5x increase in customers in the fourth quarter compared to a year ago.
  • Waymo is averaging more than 150,000 trips each week.


Constellation Research's take

Constellation Research analyst Holger Mueller said:

"Alphabet narrowly missed revenue targets. The surprise though is that the contribution of the advertisement business did better than its cloud business, not the expectation investors had for 2024 results. The good news is that the Alphabet core business is proving itself in the AI era – even before Google has started to infuse advertisement into its Gemini offering.

The argument can be made that the hopes of its advertisement / search business competitors (Microsoft) are not materializing, which is good for Alphabet. Sundar Pichai and team know this and are doubling down on investment – with $75 billion committed to capital expenditures in 2025, a record commitment. And despite all the investment and raising EPS by almost 40%, the question for Q1 will be: How can Thomas Kurian and team get the Google Cloud growth back to expectations? Google Cloud capacity limitations were cited as the core reason for the slowdown. The first half will be critical to see how well Google Cloud can help carry Alphabet into high teens / maybe even low twenties revenue growth."


Dynatrace moves to bridge AIOps, observability, preventative operations

Dynatrace updated its platform as it aimed to expand AIOps into preventive operations, add to its security roster and enhance developer workflows.

The launches were outlined at Dynatrace's Perform conference in Las Vegas. Steve Tack, Chief Product Officer, said observability is set up to expand to multiple teams as AI and cloud native workloads converge along with logs and security.

"While cloud modernization is continuous and we're capturing that, it's really everything else that surrounds that, whether it's the different roles that are engaged, spanning across development teams, SREs, platform engineering and more to the types of projects and the end to end, observability and security they're looking to achieve, whether it be for new AI native workloads or more," said Tack. "Given the dynamism of the cloud, the scale, the hyper scale, or workloads, it's really changing the way teams are leveraging observability and the way they're driving it forward."

Here's a look at the announcements from Perform:

Dynatrace expanded its AIOps reach into preventive operations with enhancements to its Davis AI engine. Davis AI will be expanded into operations with the goal to prevent outages and drive returns.

Bernd Greifeneder, Dynatrace CTO and Founder, said Dynatrace with its stack has been able to combine observability, security and business level applications.

Greifeneder said Dynatrace's platform will leverage AI to combine automated root cause analysis, an automation engine and abilities to remediate automatically.

According to Dynatrace, Davis AI will get the ability to provide root cause analysis and automate remediation workflows. Davis AI will also get natural language explanations with context.
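The detect-explain-remediate loop Dynatrace describes can be sketched conceptually as follows. Every name here (`Incident`, `root_cause`, `PLAYBOOKS`) is hypothetical and illustrative of the general AIOps pattern, not Dynatrace's or Davis AI's actual API:

```python
# Illustrative AIOps loop: detect an incident, explain its root cause,
# then map the cause to an automated remediation workflow.
# All names are hypothetical; Dynatrace's real APIs differ.
from dataclasses import dataclass

@dataclass
class Incident:
    service: str
    symptom: str

# Automation engine: a cause maps to a remediation playbook.
PLAYBOOKS = {
    "memory_leak": "restart_pods",
    "config_drift": "rollback_config",
}

def root_cause(incident: Incident) -> str:
    # Stand-in for a Davis-AI-style causal analysis step.
    return "memory_leak" if "OOM" in incident.symptom else "config_drift"

def remediate(cause: str) -> str:
    # Unknown causes fall back to a human-in-the-loop ticket.
    return PLAYBOOKS.get(cause, "open_ticket")

incident = Incident(service="checkout", symptom="OOM kills rising")
cause = root_cause(incident)
action = remediate(cause)
print(f"{incident.service}: cause={cause}, action={action}")
```

The point of the sketch is the shape of the pipeline: analysis output feeds an automation engine, with a safe fallback when no playbook matches.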

Constellation Research analyst Andy Thurai said:

"Natural language processing (NLP) interface has been a gamechanger for AIOps, enabling any incident responder to converse with observability data and try to get to the root cause of the incident, versus waiting for an experienced observability practitioner who fully understands the system to step in. When a large language model (LLM) is trained with ITOps-specific data and enhanced with enterprise-specific observability telemetry, the GenAI immediately understands the telemetry data that is fed into it and leaps into action without needing to wake up someone with the tribal knowledge for help."

Dynatrace launched Continuous Security with Cloud Security Posture Management (CSPM) for enterprises managing multi-cloud and hybrid cloud environments.

The company said Dynatrace CSPM extends existing Kubernetes Security Posture Management (KSPM) capabilities so enterprises can manage security via one dashboard. Dynatrace also enhanced its security investigator with multiple angles of analysis as well as attack vectors.

In the big picture, Dynatrace is aiming to use CSPM to provide continuous compliance and auditing. Greifeneder said Dynatrace will be able to automate about 80% of the compliance tasks.

Dynatrace launched tools to give cloud application development teams more insights.

The company added new dashboards with advanced log, metrics and trace analytics via Davis AI. The new tools will make it easy to optimize apps, monitor health and analyze interactions.

Dynatrace also launched Live Debugger, which extracts debugging information without performance impact. The company also added self-service tools for enterprise developers.

Dynatrace's outlook

The company recently reported its fiscal third quarter results with earnings of $1.19 a share on revenue of $436 million, up 19% from a year ago. Non-GAAP earnings were 37 cents a share.

Dynatrace also upped its outlook for the fourth quarter and projected revenue of $432 million to $437 million, up 13% to 15% with non-GAAP earnings of 29 cents a share to 31 cents a share.

CEO Rick McConnell said on an earnings conference call that the company is expanding the use cases for observability.

"Our conviction in the observability market continues to strengthen," said McConnell, who added that cloud adoption and AI are making observability a must have. The problem is that there are dozens of observability tools and enterprises face sprawl.

"We believe that an AI-powered observability platform with sophisticated analytic and automation capabilities is vital in providing the visibility needed for software to work perfectly," said McConnell.


Quantinuum launches generative AI quantum framework, sees quantum computing as synthetic data generator

Quantinuum launched its Generative Quantum AI framework that aims to combine AI, quantum computing and supercomputers to address problems classical computing can't solve.

The launch of "Gen QAI" is designed to move quantum computing use cases toward business needs today. Many of these approaches involve hybrid strategies that blend quantum computing and supercomputing.

In January, these quantum computing use cases (and the pure play stocks) came under scrutiny after Nvidia CEO Jensen Huang said very useful quantum computers were likely 15 to 30 years away.

According to Quantinuum, Gen QAI will leverage unique data generated by quantum computing to enable commercial applications ranging from drug discovery to financial markets and supply chain optimization.

Specifically, Quantinuum's H2 quantum computer will generate data that can be used to train AI systems to enhance models. In other words, quantum computing could become the real AI factory.

In mid-2025, Quantinuum is on schedule to launch its Helios system, a more advanced quantum computer that can expand use cases.

Dr. Raj Hazra, President and CEO of Quantinuum, said:

"We are at one of those moments where the hypothetical is becoming real and the breakthroughs made possible by the precision of this quantum-generated data will create transformative commercial value across countless sectors."

Merck KGaA was cited as a customer using Quantinuum's framework to generate synthetic data.

From here, Quantinuum said it will work with industry partners to expand use cases beyond GPUs. The company said it is working with HPE to utilize quantum computing in the automotive sector.

