Results

Microsoft Q2: Azure revenue growth of 31%, AI revenue run rate of $13 billion

Microsoft reported strong second quarter results with revenue growth of 12%, Azure revenue growth of 31% and an AI business annual revenue run rate of $13 billion.

The company reported fiscal second quarter earnings of $24.1 billion, or $3.23 a share, on revenue of $69.6 billion. Wall Street was looking for earnings of $3.11 a share on revenue of $68.78 billion.

Intelligent cloud second quarter revenue was $25.5 billion, up 19% from a year ago. Productivity and business process revenue was $39.4 billion, up 14% from a year ago. Microsoft Cloud revenue was $40.9 billion, up 21% from a year ago.

In a statement, Microsoft CEO Satya Nadella said "we are innovating across our tech stack and helping customers unlock the full ROI of AI."

Nadella addressed multiple topics on the earnings call. Here's a look:

  • He said Microsoft is allocating capital to AI compute as it is seeing "significant efficiency gains in both training and inference for years now." Nadella said: "On inference, we have typically seen more than 2x price performance gain for every hardware generation and more than 10x for every model generation due to software optimization."
  • These efficiency gains for AI workloads will lead to more demand; a rough illustration of how the quoted gains compound follows this list.
  • "We have more than doubled our overall data center capacity in the last three years, and we have added more capacity last year than any other year in our history. Our data centers, networks, racks and silicon are all coming together as a complete system to drive new efficiencies to power both the cloud workloads of today and the next generation AI workloads."
  • Fabric is the fastest-growing analytics product in Microsoft's history, and Power BI has more than 30 million monthly active users, up 40% from a year ago.
  • "We are seeing accelerated customer adoption across all deal sizes as we win new Microsoft 365 Copilot customers, and see the majority of existing enterprise customers come back to purchase more seats. When you look at customers who purchase copilot during the first quarter of availability, they have expanded their seat collectively by more than 10x over the past 18 months."
  • 160,000 organizations have used Copilot Studio to collectively create more than 400,000 custom agents in 3 months.
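For a rough sense of how the gains Nadella cited compound, here is a minimal sketch. The multiplicative compounding, the baseline cost and the generation counts are assumptions for illustration, not Microsoft figures.

```python
# Rough illustration of the quoted gains, assuming they compound multiplicatively.
# The baseline cost and generation counts are hypothetical.
baseline_cost_per_million_tokens = 10.00   # hypothetical starting cost, in dollars

hw_gain_per_generation = 2      # ">2x price performance per hardware generation" (quoted)
model_gain_per_generation = 10  # ">10x per model generation" (quoted)

def cost_after(hw_generations: int, model_generations: int) -> float:
    """Cost per million tokens after the given number of generational steps."""
    improvement = (hw_gain_per_generation ** hw_generations) * (model_gain_per_generation ** model_generations)
    return baseline_cost_per_million_tokens / improvement

# One hardware generation plus one model generation -> 20x cheaper per token.
print(f"${cost_after(1, 1):.2f} per million tokens")   # $0.50
print(f"${cost_after(2, 2):.4f} per million tokens")   # $0.0250 after two of each
```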

Amy Hood, CFO of Microsoft, added that the company will continue to balance operational discipline with investments in AI and cloud. As for the outlook, Hood projected the following for the third quarter. 

Hood said a stronger US dollar will weigh on revenue growth by about 2%, but growth will still be in double digits. She said demand for cloud AI offerings should remain strong. Intelligent Cloud revenue should grow 19% to 20% with Azure delivering growth of 31% to 32%.

She added that by the end of the year Azure capacity should be in line with near-term demand.

By the numbers for the second quarter:

  • Microsoft 365 Commercial cloud revenue was up 16% from a year ago.
  • LinkedIn revenue was up 9%.
  • Dynamics 365 revenue was up 19%.
  • Microsoft 365 had 86.3 million consumer subscribers.

Constellation Research analyst Holger Mueller said:

"Microsoft is in full transfer of product to services revenue, with product revenue down $2.5 billion year over year, but services (and other) revenue up by more than $10 billion. Services come with a higher cost and cost or revenue is up by more than $2 billion. The result is 30c ents higher EPS. The question is how many quarters can Microsoft repeat the feat – especially as Satya Nadella and Amy Hood for the first time acknowledged capacity challenges for Azure. The next quarter will tell."

Here's the breakdown of Microsoft by product line.


ServiceNow launches AI orchestrator, partners with Google Cloud, Oracle, reports Q4 results

ServiceNow launched AI Agent Orchestrator, part of a set of tools designed to put the company's Now Platform at the center of agentic AI.

The company's move is designed to position the ServiceNow platform as a central place to manage and govern AI agents. To ServiceNow, AI agents are an autonomous extension of its focus on workflows and process automation.

ServiceNow has been building out its AI agent capabilities, including the recent acquisition of Cuein, which manages AI and human chat interactions. Constellation Research analyst Liz Miller said Cuein shows ServiceNow is focused on agentic AI "improvement and optimization rather than reporting and postmortems."

Starting with its Xanadu release, the company began scaling AI agents built into its platform. ServiceNow AI agents are built to leverage data across multiple systems. ServiceNow CEO Bill McDermott said the ServiceNow Platform is designed to be an "AI agent control tower to unlock exponential productivity and seamlessly orchestrate end‑to‑end business transformation."

Speaking on ServiceNow's earnings conference call, McDermott said that the innovation is moving to the business value layer. "With the precipitous drop in LLM compute costs, there is much more capital allocation available for the business impact layer," said McDermott, who noted that falling model costs will be a win for the company. "Our position at the center of data AI agents, workflow, orchestration and enterprise governance is the nexus of AI's massive value creation opportunity," he said. 

Here's a look at what ServiceNow announced on the agentic AI front:

ServiceNow AI Agent Orchestrator. The company said AI Agent Orchestrator is designed to enable inter-agent communication and centralized coordination. The focus is on ensuring AI agents can share information and hand off tasks at multiple parts of the process. AI Agent Orchestrator can manage custom AI agents.

AI Agent Studio. ServiceNow said AI Agent Studio can help enterprises create and deploy custom AI agents that are integrated with workflows and the Now Platform. Teams built by AI Agent Studio can be managed by AI Agent Orchestrator.

AI Agent updates for Pro Plus and Enterprise Plus customers. AI Agent Orchestrator and AI Agent Studio will be included in Pro Plus and Enterprise Plus plans with no additional charge. AI agents will be priced on consumption.
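ServiceNow hasn't detailed the Orchestrator's internals here, but the coordination pattern described above (agents that share context and hand off tasks to one another under a central coordinator) can be sketched roughly as follows. This is a generic, hypothetical Python illustration, not ServiceNow's API; the Orchestrator class, Task structure and agent names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    description: str
    context: dict = field(default_factory=dict)   # shared information carried across handoffs

# An "agent" here is just a function that works on a task and names the next agent (or None when done).
Agent = Callable[[Task], str | None]

class Orchestrator:
    """Central coordinator: routes a task through agents, preserving context at each handoff."""
    def __init__(self) -> None:
        self.agents: dict[str, Agent] = {}

    def register(self, name: str, agent: Agent) -> None:
        self.agents[name] = agent

    def run(self, task: Task, start: str) -> Task:
        current = start
        while current is not None:
            current = self.agents[current](task)   # agent updates task.context, returns next agent
        return task

# Hypothetical agents for an incident-style workflow.
def triage_agent(task: Task) -> str | None:
    task.context["priority"] = "high" if "outage" in task.description else "low"
    return "resolution_agent"

def resolution_agent(task: Task) -> str | None:
    task.context["resolution"] = f"Runbook applied for {task.context['priority']}-priority issue"
    return None

orchestrator = Orchestrator()
orchestrator.register("triage_agent", triage_agent)
orchestrator.register("resolution_agent", resolution_agent)
result = orchestrator.run(Task("Database outage reported in EU region"), start="triage_agent")
print(result.context)
```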

Partnerships with Google Cloud, Oracle, Visa

ServiceNow also expanded relationships with Google Cloud, Oracle and Visa.

ServiceNow said it's expanding its partnership with Google Cloud to add the Now Platform to Google Cloud Marketplace. Select ServiceNow applications will be available for regulated industries on Google Distributed Cloud.

In addition, ServiceNow will integrate with Vertex AI, Google Workspace and BigQuery to connect enterprise workflows with Google Cloud end-user applications. The combination of Google Cloud BigQuery and Workflow Data Fabric will make it easier to create and manage AI agents.

For ServiceNow, the Google Cloud Marketplace will add additional distribution for its customer relationship management, IT service management and security incident response applications.

The companies said they will combine go-to-market efforts with ServiceNow CRM and Customer Engagement Suite with Google AI, and make ServiceNow data easier to access within Google Cloud's Workspace.

ServiceNow said the Google Cloud Marketplace integrations will roll out through the second quarter and third quarter. ServiceNow CRM and Customer Engagement Suite with Google AI will launch later this year as will ServiceNow CRM, ITSM and SIR on Google Distributed Cloud.

With Oracle, ServiceNow will integrate its Workflow Data Fabric with Oracle data sources to enable zero-copy, bi-directional data exchange. Oracle customers will be able to retrieve data from ServiceNow and vice versa.

Specifically, ServiceNow said Workflow Data Fabric will integrate seamlessly with Oracle Autonomous Database and Oracle Database 23ai. ServiceNow customers will be able to access structured and unstructured data directly from Oracle sources.

The Oracle integration with Workflow Data Fabric will be available in the second half of 2025.

ServiceNow said it has also expanded a partnership with Visa to streamline payment dispute workflows for financial institutions via ServiceNow Disputes Management, Built with Visa.

The companies said they will use genAI tools to automate dispute resolution. ServiceNow and Visa announced their partnership last year.

Consumption model pivot, Q4 results

ServiceNow reported in-line fourth quarter results and noted that it would pivot to more of a consumption-based business model.

The company reported fourth quarter earnings of $384 million, or $1.83 a share, on revenue of $2.96 billion, up 21% from a year ago. Non-GAAP earnings were $3.67 a share.

For 2024, ServiceNow reported net income of $1.425 billion, or $6.84 a share, on revenue of $10.98 billion, up 22%.

As for the outlook, ServiceNow said it was getting hit by a strong US dollar that will ding subscription revenue by about $175 million in 2025. ServiceNow also said its US federal business will be more back-end loaded due to a new presidential administration.

In addition, ServiceNow said it will pivot to more of a consumption-based model, which usually requires a transition period and revenue disruption. ServiceNow projected 2025 revenue of $12.63 billion to $12.67 billion, up 18.5% to 19%. Non-GAAP revenue growth for 2025 will be 19.5% to 20%.

The company said:

"In 2025, we will begin shifting more of our business model to include elements of consumption-based monetization across our AI and data solutions. For instance, we will include our new AI Agents in our Pro Plus and Enterprise Plus SKUs, forgoing upfront incremental new subscriptions to instead drive accelerated adoption and monetize increasing usage over time. We are also optimizing certain aspects of our go-to-market approach and creating more integrated solutions that we will announce at Knowledge 2025. Our guidance prudently reflects the flexibility to make these moves while delivering further free cash flow generation."

McDermott explained the role of consumption-based models. 

"We have predicted and protected that our seat based subscription staying as is. It's a foundation you can feel secure in. We are also included a massive upgrade path to Pro Plus, RaptorDB and Workflow Data Fabric. Seat-based, subscription is still there and then we have these upgrade paths to the new innovation. Customers still like the predictability of this approach, and they're committed to long-term transformation on our platform so we are also enabling elements of consumption based pricing as AI agents become a potent value driver for the enterprise. While we could have launched an additional SKU and offered AI agents as an add-on to drive more immediate revenue growth, our strategy prioritizes accelerating adoption."


DeepSeek: What CxOs and Enterprises Need to Know

#DeepSeek: What CxOs and enterprises need to know ⬇️💡

DeepSeek has become an overnight sensation, rattled the US #AI sector, and may have single-handedly focused CxOs on the cost of #genAI. We convened a call of Constellation Research analysts to outline the issues CxOs need to know about when it comes to DeepSeek.

Watch the full conversation and read the article summary by Larry Dignan here ➡️ https://www.constellationr.com/blog-news/insights/deepseek-what-cxos-and-enterprises-need-know

On ConstellationTV: https://www.youtube.com/embed/ztkiR_cod44

Starbucks aims for 4-minute barista-to-customer handoff to boost CX

Starbucks is leaning in on process improvement for mobile orders, optimization and technology to get wait times down to 4 minutes in most of its cafes, said CEO Brian Niccol.

Niccol, who joined the company from Chipotle and outlined a Back to Starbucks plan to reinvigorate the brand and sales, talked process on the company's first quarter earnings call. The efforts at Starbucks are worth watching given that they reside at the intersection of process, customer and employee experience, and omnichannel retail.

Starbucks said the company is investing in labor, marketing, technology and stores to stabilize the business and is revamping support teams to execute on its Back to Starbucks plan.

Speaking on a conference call, Niccol said:

"The handoff from our barista to the customer is our brand moment of truth, and we've been working hard to get that moment right. Through the quarter, we've continued to test and learn as we position the business to achieve our four-minute throughput goal with a moment of connection."

Niccol added that order sequencing is everything and has created more of a bottleneck than capacity. "Investments in staffing and deployment, processes and algorithm technology demonstrate the greatest opportunity to deliver a four-minute wait time in most of our cafes," he said.

To improve the process, Starbucks is:

  • Optimizing labor with precision scheduling and adding coverage hours.
  • Simplifying beverage builds with new brewed coffee and tea routines.
  • Improving processes in-store and via mobile ordering, including reducing its menu selections by 30% across both food and beverages.
  • Optimizing its supply chain to fund further investments.
  • Creating a Chief Store Officer role to "be all about driving excellence in our stores."
  • Betting that improvements in the partner experience will boost customer experience.

Niccol said:

"Looking forward, we're beginning to pilot a new in-store prioritization algorithm and are exploring other technology investments to improve order sequencing and our efficiency behind the counter. We're also progressing efforts that build on the strength and popularity of the Starbucks app. This includes development of a capacity-based time slot model that allows customers to schedule mobile orders and a midyear update that will simplify customization options, improve upfront pricing, and provide real-time price changes as customers customize beverages.

Lastly, we're planning to fully deploy digital menu boards in cafes across our US company-owned stores over the next 18 months to make our offerings more easily understood and to better show customization add-ons."

The working theory is that Starbucks can improve the customer experience, simplify and drive repeat business.

Niccol added that it's still early in the process. Starbucks’ earnings in the first quarter met expectations, but indicate the company has a lot more work to do.

Starbucks first quarter revenue was $9.4 billion, flat compared to a year ago. US same store sales fell 4% in the first quarter, but showed improvement through the quarter. Ticket growth in the US was up 4% and Starbucks curbed discounting.

 


DeepSeek: What CxOs and enterprises need to know

DeepSeek has become an overnight sensation, rattled the US AI sector and may have single-handedly focused CxOs on cost of genAI.

We convened a call of Constellation Research analysts to outline the issues CxOs need to know about when it comes to DeepSeek.

Here's what you need to know about DeepSeek.

What is DeepSeek?

DeepSeek is a Chinese AI company that develops open-source large language models. It has launched a series of models that can compete with the likes of OpenAI's ChatGPT, Anthropic's Claude family of models and Meta's Llama. Constellation Research CEO Ray Wang said DeepSeek has "democratized the access to AI." Wang noted that the other benefit is that the model can run in private environments without top-of-the-line hardware. DeepSeek is censored, as anyone who has asked the service about Winnie the Pooh or other topics sensitive in China can attest.

What's the big deal about DeepSeek?

The hubbub surrounding DeepSeek in a nutshell is that the company "proved a point that you don't need gazillion dollars to train AI model," said Constellation Research analyst Andy Thurai.

DeepSeek has "proven not only that you can find cheap but also the fact that you can open source the entire thing, which means others can start using it or building it, which is going to challenge all those big guys," said Thurai.

DeepSeek also garnered a lot of attention because Wall Street decided a week after its latest model release that perhaps Nvidia customers didn't need the latest and greatest GPUs.

What did DeepSeek do that was different?

Holger Mueller, analyst at Constellation Research, said:

"Not having the best computing resource always makes for better models and software. China doesn't have availability of so many GPUs and people get creative. The distillation really worked. The second really important thing is that DeepSeek has been training about human intervention."

What's unclear is how much DeepSeek piggybacked off of larger models and IP from around the globe. "It's going to be interesting to see what kind of IP battle is going to unfold," said Thurai.

What should CXOs do?

For now, it's best to monitor DeepSeek, think through use cases, and, if you experiment, make sure it's air-gapped and sandboxed. Don't ignore the DeepSeek developments, though. Constellation Research analyst Chirag Mehta said:

"If you're a CxO, the best analogy is what open source did to the industry. That's what this model is now doing to its competitors.

If you're a CxO, you have two options: Buy the Ferrari in the high-end platform as a service model or do smaller, specialized, narrow models that are cheaper to run, and almost free. Open source is not quite free. You still have to manage it, and you still have to run it, and you have to maintain it."

Mehta said CxOs need to keep their model options open and stay focused on the problem they're trying to solve with genAI.

Wang said to focus on the cost curve:

"At this point, we know that it's possible to do reasoning at a lower cost and lighter models are going to be available. We know that people are going to want to do this outside of the cloud and back on premises. The cost curve is coming down on AI, and I think you're going to see more of that. And I think those monetization models are important."

Will DeepSeek mean on-premises AI?

The jury is decidedly mixed on this one. Holger Mueller said AI workloads will reside in the cloud for the most part. "I still see larger models winning and cloud winning. You might see a dip in revenue. That's totally possible," said Mueller.

Mehta noted that on-prem vs. cloud AI isn't zero sum, but the majority of workloads will go to the cloud with the exception of edge computing use cases.

Should Wall Street be this concerned about DeepSeek?

Thurai said concerns are overblown. If you are building an LLM or using one for inferencing, you're still likely to use an Nvidia stack. Where it gets interesting is if DeepSeek used AMD GPUs. "This is a knee-jerk reaction and it's going to continue for a while," said Thurai.

Wang said the concerns are more about big spending tech giants and whether the capex will be questioned. He said:

"We have to figure out if it makes sense for a Microsoft to spend $80 billion a year on capex to build out data centers. The short answer is that Microsoft is has to do it. It's really about the payback period that that's going to actually hit them. The second question is whether we need to pay this much for token economics.

"We're living in a world we call exponential efficiency. If you're not 10 times better one tenth the cost, nobody cares. And we're at this point where our existing software vendors have made life so expensive to hold their stock price. So this reset is a good thing in general, because it's going to lower the cost of technology for customers. It's a bad thing for stock investors, because we're going to see valuations at the top plummet if you're not one of the winners."


What are the security concerns with DeepSeek?

Mehta said there are concerns about prompt injection and jailbreaking DeepSeek. "AI security is one of the biggest topics for CxOs," said Mehta. "If you don't know how the model has been trained, what data has been used, and how easy or difficult it's going to be to actually break it, do you really want to use that model for your most sensitive data and use cases? Are you really going to do that?"


Transformative Power of AI Agents in the Enterprise | With Workday CTO Jim Stratton

Don't miss another #Davos2025 conversation, this time between R "Ray" Wang and Workday CTO Jim Stratton. They discuss role-based #AI agents becoming full-fledged members of the #digital workforce and driving real ROI 📈 for Workday customers by automating #business workflows in HR, finance, and procurement.

Stratton emphasizes the importance of balancing human and machine decision-making -- agents handling repetitive tasks so employees can focus on higher-level, strategic work. Both parties agree that companies must evolve workforce management practices to govern this new digital workforce effectively.

Watch the full conversation and let us know your thoughts on the future of AI agents! #WEF25

On YouTube: https://www.youtube.com/embed/m-dlLjx2Nqc

SAP preps AI agent move, extends on-prem maintenance, sets 2025 outlook

SAP CEO Christian Klein said the company will significantly increase its AI investments, launch AI agent innovations Feb. 13 and give customers a three-year reprieve on migrating from on-premises ERP to the cloud.

Klein teased the AI agent orchestration launch as well as licensing changes during SAP's fourth quarter earnings call. He said that customers are spending half of their IT budgets on data and analytics and falling short of leveraging data.

SAP "will harmonize structure and unstructured data" across SAP and non-SAP data with relevant semantics. "We will make AI agents much more powerful," said Klein. "Joule will become the super orchestrator of these agents, carrying out complete tasks autonomously and end to end, taking over significant workload from humans."

On the licensing front, Klein said "we will introduce licensing options that allow customers to upgrade and switch easily to our newest cloud solutions across the whole SAP Business Suite without additional negotiations." In short, SAP will roll out a maintenance offering that will give customers the ability to extend maintenance through 2033 from 2030 if they can't migrate to the cloud completely.


Although SAP's fourth quarter earnings and outlook were solid, Klein's comments about AI strategy and licensing were more notable. Klein, referring to US AI industry concerns about DeepSeek and cheaper models, noted that SAP can roll with multiple models as needed. The value is in business data, he added.

Klein said SAP is flexible with large language model choices and is betting its AI strategy on contextual data. SAP said half of its cloud deals included AI in the fourth quarter and 2025 cloud revenue will be up 26% to 28%.

"We are flexible when it comes to AI infrastructure and large language modules. We benefit from cost reductions and progress in the LLM space, because we are truly differentiating an element in AI today. However, it is deep process and industry knowhow, combined with access to unique context, rich business data, so value equation is more and more moving up the application layer and to building one semantical data layer, this is exactly what SAP has been focusing on."

On a conference call, Klein touted the "new SAP," which shows strong cloud growth and the ability to leverage Business AI through its suite. "Land and expand is clearly working," said Klein, who said a fifth of customers are using more than one of SAP's solutions. "We are doubling down on AI in 2025."

SAP said it is positioning Joule as the new UI for its software.

Klein said SAP brought more than 130 genAI use cases to customers in 2024. Internally, Klein said SAP is using AI to be more efficient, with 20,000 SAP developers using AI tools, average efficiency gains of 20% on go-to-market activities and contract booking time improvements of 75%. SAP also saw a 20x productivity gain due to AI-assisted go-to-cash process automation and has saved €300 million due to AI implementations.

Fourth quarter results and outlook

SAP said fourth quarter revenue was €9.48 billion, up 11% from a year ago, with net profit of €1.62 billion, or €1.37 a share. Adjusted earnings were €1.40 a share.

For 2024, SAP said revenue was €34.18 billion, up 10% from a year ago, with earnings of €3.15 billion, or €2.68 a share. Earnings were down due to restructuring charges. Adjusted earnings were €4.53 a share.

Klein said that SAP finished the year strong and showed the ability to migrate customers to the cloud.

In the fourth quarter, SAP said cloud backlog grew 32% to €18.08 billion. Backlog gained due to Cloud ERP Suite revenue.

However, SAP did note that it expects current cloud backlog growth to "slightly decrease in 2025."

SAP said it projected cloud revenue of €21.6 billion to €21.9 billion. Cloud and software revenue for 2025 will be up 11% and non-IFRS operating profit will be up 26%. The company sees about €8 billion in free cash flow, nearly double from 2024.

For non-financial metrics, SAP plans to improve its Net Promoter Score to 16, up from 12 in 2024.

Don't leave customers behind

Klein said SAP can navigate macro-economic concerns in key industries such as automotive, which is struggling. Klein said SAP customers are navigating many economic and geopolitical concerns and that industries are moving to the cloud, but need time.

The upshot is that Klein said SAP is willing to play the long game with customers and extend maintenance for those that can't get to the cloud to 2033, a three-year reprieve.

SAP's new maintenance program was reported in Handelsblatt and outlined by a consultant.

"We want to be reasonable in our outlook, how we also reflect this facing of these wise deals, because customers need time. I mean, changing business modules is not only a technological move, but also about change management sometimes," said Klein.


Klein also acknowledged that some customers aren't going to make its deadline to move to SAP S/4HANA.

He said:

"The end of maintenance by 2027 will not be changed. We will stick to that. But you also have to consider, in some parts of the stack, there are third party components included, and they are running out of maintenance as well. We don't want to leave the customers behind. As we moved all of our cloud solutions already on SAP/4HANA Cloud, we do now the same with these on premise customers. We move them to the cloud, we replace the third party components, and with that, there are 100 ERPs supported in a complete, sustainable and supported way. And that is about this offering.

It's actually for a very few large customers who will not make the timeline. To transform and consolidate ERP and business processes in over 100 countries is sometimes not that easy. It's not about the extension of on-premise maintenance, but it's really reaching out with helping hands to a very few large customers."

Executive changes

SAP also made executive changes.

The company said that Sebastian Steinhaeuser has been appointed to the Executive Board to lead Strategy & Operations, a new board area. Steinhaeuser will be focused on simplifying SAP operations.

SAP also extended the contract of Thomas Saueressig, head of Customer Services & Delivery, for another three years until 2028.

According to SAP, it will also form an Extended Board that will include Philipp Herzig, who takes over as global CTO in addition to Chief AI Officer, as well as two co-chief revenue officers.

Jan Gilg and Emmanuel (Manos) Raptopoulos will co-lead SAP's Customer Success organization as new CROs.

 


FODN Podcast ep.4 - Unlocking AI’s ROI Potential: Insights from Ray Wang

 

On this episode of the Future of Decisions Now podcast, R “Ray” Wang — Principal Analyst, Founder, and Chairman of Constellation Research — discusses agentic-powered decision intelligence and its exponential advantage in the marketplace, revealing the one factor that will define the leaders who will master AI and unlock its value.

On YouTube: https://www.youtube.com/embed/KKUpHNq9YNc

Zoho CEO Sridhar Vembu steps back, becomes Chief Scientist

Zoho CEO Sridhar Vembu said he will step down as CEO and become Chief Scientist at the company "responsible for deep R&D initiatives."

Vembu announced the move on X. He said he will also focus on rural development. Vembu said:

"The future of our company entirely depends on how well we navigate the R&D challenge and I am looking forward to my new assignment with energy and vigor. I am also very happy to get back to hands on technical work."

Zoho co-founder Shailesh Kumar Davey will become CEO. Co-founder Tony Thomas will lead Zoho's US business. Rajesh Ganesan will lead the ManageEngine division and Mani Vembu will lead the Zoho.com division.

Under Vembu, Zoho has cultivated a unique culture that has created disruptive technology in the SaaS market, a model that generates value against larger rivals, and offices in smaller communities where the company can have a larger impact.

In an interview last year, Vembu outlined the approach.

"It's really about getting closer to the customers at one level, and just talking about business first. But beyond that, it's also putting back into communities that income needed from technology, create good jobs, in smaller communities, which people are not doing enough."

In that interview, Vembu also noted that Zoho has a strong bench. Vembu said Zoho can't run out and hire and have a productive employee in two weeks. "We believe that first we have to nurture the talent and that automatically brings in good people to us over time," he said.

 


GenAI prices to tank: Here’s why

Prices for access to generative AI models and features could tank in 2025 amid cheaper foundational models, open source advances and vendors becoming realistic about what customers will actually pay.

Welcome to the only thing in your life that'll be deflationary--generative AI. We're only a month into 2025, but there have been a series of moves indicating that the generative AI gravy train is unlikely to last because customers will soon be focused on cost per query.

Here are a few reasons why enterprise AI is likely to become less expensive.

Whither add-ons?

Rewind to a year ago and Wall Street was busy modeling revenue growth due to $30 per month per user add-ons for access to generative AI features. Later, those add-on prices became more like $20, but the working theory from vendors was that enterprises will pay for AI that drives productivity.

During this time, the vendors that went for genAI usage and raised prices on their overall SKUs were mocked. Wall Street analysts were almost indignant. When are you monetizing genAI? Zoom, Adobe and Workday all got flak for noting what was kind of obvious--genAI is a feature, not the end game.

Fast forward to January 2025 and copilot add-ons are toast. Oracle just launched AI agents for its sales applications without charging extra. Google Workspace dropped Gemini add-on charges, but raised business and enterprise plan prices. Microsoft launched Copilot Chat and now includes copilots in Microsoft 365 in many cases. There are charges for AI agents, but many genAI features will be bundled into enterprise plans.


Microsoft did something similar with its consumer Microsoft 365 plans and now copilot is everywhere (even if you didn't want it). For what it's worth, I'm more of a Notepad person. Like my appliances, I often like my tools to be on the dumb side. Don't bog me down with features I don't need or want.

Now that Google and Microsoft have ditched the add-on game, rest assured that other software vendors will follow. The argument that genAI is all just part of the software has won out.

Sure, you'll be hit with other AI charges and consumption models, but the add-on game is over.

LLM pricing is going to collapse

The big news this week is that a Chinese AI company called DeepSeek set out to blow up OpenAI's business model. DeepSeek, combined with other open source large language models, is going to be a real threat to model pricing, which revolves around tokens and API calls. ByteDance, owner of TikTok, also released Doubao 1.5 Pro, a model with strong performance.

Yes, there are concerns about censorship on DeepSeek and Doubao 1.5 Pro, but the idea that you can get API access to DeepSeek's R1 model for 14 cents per million tokens compared to OpenAI's $7.50 is disruptive. OpenAI is clearly positioned as a premium LLM provider, but that pricing is disruptive for a company that can't make money on a $200-a-month subscription plan.
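To put the gap in concrete terms, here is a back-of-the-envelope comparison using the per-million-token prices cited above; the monthly query volume and tokens per query are hypothetical assumptions, not reported figures.

```python
# Back-of-the-envelope API cost comparison using the prices cited above.
# Query volume and tokens-per-query are hypothetical assumptions.
price_per_million_tokens = {"DeepSeek R1": 0.14, "OpenAI": 7.50}

queries_per_month = 1_000_000   # hypothetical workload
tokens_per_query = 2_000        # hypothetical average (prompt + response)

monthly_tokens = queries_per_month * tokens_per_query
for model, price in price_per_million_tokens.items():
    cost = monthly_tokens / 1_000_000 * price
    print(f"{model}: ${cost:,.0f} per month")

# DeepSeek R1: $280 per month
# OpenAI: $15,000 per month
```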

This DeepSeek news landed on Monday and was overshadowed by an inauguration in the US featuring technology bigwigs, AI infrastructure plans and chatter out of Davos and earnings season.

Nevertheless, it's worth taking DeepSeek for a spin. Many of these models have caught up to what OpenAI can do and are likely good enough for most use cases.


Simply put, LLM margin compression is here. The scary part is the LLM giants didn't have profit margins to begin with. LLMs are going to commoditize in a hurry.

AI agents will require price transparency

Although vendors are wrapping in genAI as a bundle, the agentic AI as labor replacement/augmentation rap is just starting.

Enter the consumption model. Enterprises (allegedly) will pay for agentic AI conversations, problems solved and value created by the simple fact companies won't need to add a human.

The problem: The SaaS vendors pushing this agentic AI consumption model aren't used to the level of transparency needed yet.

Cloud providers have consumption dashboards, more transparent pricing and ways to manage costs. SaaS vendors simply don't.

Once consumption becomes part of the mix, SaaS vendors will have no choice but to be more transparent.

Today, SaaS packaging and contracts are complicated, and the sales cadence is almost engineered so enterprises make tactical errors along the way. Companies will need to know exact costs by AI use case, especially with agentic AI.

We'll have a hybrid seat, subscription and consumption model for the foreseeable future, but ultimately SaaS vendors are going to have to show the math behind the pricing. Value would be nice too.
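As a rough illustration of what showing the math could look like, here is a minimal sketch of a hybrid seat-plus-consumption cost model; every rate and volume in it is a hypothetical placeholder, not any vendor's actual pricing.

```python
# Hypothetical hybrid pricing model: per-seat subscription plus metered agent conversations.
# All rates and volumes are made-up placeholders for illustration.
seats = 500
price_per_seat_per_month = 40.00          # subscription component
agent_conversations_per_month = 20_000
price_per_conversation = 0.25             # consumption component

subscription_cost = seats * price_per_seat_per_month
consumption_cost = agent_conversations_per_month * price_per_conversation
total = subscription_cost + consumption_cost

print(f"Subscription: ${subscription_cost:,.2f}")
print(f"Consumption:  ${consumption_cost:,.2f}")
print(f"Total:        ${total:,.2f}")
print(f"Effective cost per conversation: ${total / agent_conversations_per_month:.2f}")
```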

What's next?

As early genAI pricing models are disrupted, you'll see most SaaS vendors aiming to become platforms, with LLMs bundled into broader software suites and a more holistic sales pitch.

SaaS vendors will want to be more like platforms that enterprises use to leverage AI. ServiceNow is already seen this way, but look for many vendors to follow the same path. What's unclear is whether CxOs who have been cross-sold to oblivion will suddenly think their vendors are platforms.

Meanwhile, LLM visionaries are going to attempt to look more like SaaS companies.

One thing that caught my eye out of Davos was Mistral's take that enterprises will move away from models and to systems. Mistral CEO Arthur Mensch told CNBC that models are merely part of systems that include data and tools that act as agents.

Cohere launched North, an AI platform that combines LLMs, search and agents in one collaboration platform. Anthropic is adding collaboration features as it expands use cases for its Claude models. OpenAI is also broadening its offerings but has largely focused on expanding into search to compete with Google.

If you follow the LLM players, it's clear that they think foundational models alone aren't going to pave the way to profitability.

The other thread to watch is how the hyperscalers fare in the agentic AI age. Consumption pricing is already there and AI makes it a lot easier for cloud giants to go vertical as well as horizontal.

In the end, 2025 is going to be a year where enterprise vendors attempt multiple models. And margin compression may be inevitable.
