ServiceNow aims for 'Goldilocks' software model, SaaS industry likely to follow

ServiceNow is taking the plunge, pivoting its business model to a hybrid approach that blends seats, subscriptions and consumption.

The move, outlined on ServiceNow's fourth quarter earnings call, isn't surprising: agentic AI needs to be priced into plans somehow, yet enterprises need budget predictability. ServiceNow isn't the first mover here, but it is likely to trigger a rush to a hybrid software model. Salesforce has also signaled a move to a consumption-based business model with Agentforce, and Microsoft launched pay-as-you-go agents.

For enterprises, the big question is whether this hybrid business model is going to be a win. At the very least, enterprises will have to manage their agentic AI consumption to keep costs in line. As with the move to cloud computing, enterprises can plan on getting hit with a few zingers. SaaS providers will need to add more transparency into consumption, just as the hyperscale cloud providers do.

Speaking on ServiceNow's earnings conference call, CEO Bill McDermott portrayed the seat-subscription-consumption model as a Goldilocks scenario for the vendor and customers.

McDermott said:

"Our goal is to combine both subscription and consumption pricing. Customers can start with a base subscription, which they like. They want that flag in the ground, so they can predict their spend and their current ROI schemes. But then, they obviously want to take advantage of agentic AI and yet at the same time, the industry is early in its formation. We're actually innovating faster than they have deployed it. So they want to scale with us in harmony and in partnership.

With our Pro Plus version, they'll get access to our agentic AI agents and will give them a meter based pricing methodology where they will take out the soul crushing business process work that is tedious and complex that people actually don't even want to do. Agentic AI agents will do that for them. They will see a very nice ROI on that. And by definition, if the meter is running up, that means they're using it and deriving financial gain from it, and they're happy to pay and share with us the profits.

It's the Goldilocks model where you get it both ways."

Amit Zavery, ServiceNow's product chief, elaborated on the consumption model. "It's not completely like pay as you go per meet per individual assist. It's really packs of assist in a way. It's subscription pricing and we are giving them some flexibility and the ability for customers to see value instantly," he said.

The AI agent pack approach rhymes with how Adobe prices Firefly: you get tiers of credits. Salesforce has floated the idea of $2 per resolved conversation, but it's unclear whether that's a trial balloon. If you buy that an AI agent is a human replacement, $2 per resolved issue makes sense. Over time, it's likely that agentic AI is more of a feature and process automation play than a human replacement.
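
As a back-of-the-envelope illustration of how a hybrid seat-plus-pack model might bill out, here is a minimal Python sketch. All figures (seat price, pack size, pack price) are hypothetical assumptions, not vendor list prices; the $2-per-resolution rate is only the floated Salesforce number.

```python
import math

# Minimal sketch of a hybrid seat-plus-consumption bill. All prices
# and pack sizes are hypothetical, not vendor list prices.
def hybrid_bill(seats, seat_price, resolutions, pack_size, pack_price):
    """Base subscription for seats, plus prepaid packs of AI agent
    resolutions, rounded up to whole packs like credit tiers."""
    subscription = seats * seat_price
    packs = math.ceil(resolutions / pack_size)  # a partial pack still costs a full pack
    return subscription + packs * pack_price

# 500 seats at $100/month, plus 12,000 resolved conversations sold in
# packs of 5,000 resolutions at $10,000 a pack (i.e., $2 per resolution).
print(hybrid_bill(500, 100, 12_000, 5_000, 10_000))  # 80000
```

The rounding up to whole packs is what makes the meter predictable for the vendor and slightly lumpy for the buyer, which is why pack sizing will matter in negotiations.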

Salesforce President and Chief Operating Officer Brian Millham outlined the consumption model in December at a Barclays investor conference. He said:

"As we think about the consumption world, it's very different than going out and selling a customer 500 licenses of Sales Cloud or Service Cloud. We're convincing them that Agentforce is the future. They're buying Agentforce from us, but we'll monetize it through a consumption model going forward.

New capabilities that we have on pay as you go, giving people insights into how they're using the product, term commits like AWS where they make a commitment to usage over time, but you've got to burn through that during the term of the agreement. We think this is additive to a model that we've had forever, which is name license plus this consumption model will really drive some growth going forward."

For ServiceNow, and any SaaS vendor, the trick will be getting agentic AI adoption, use cases and value that can be shared. ServiceNow isn't forgoing subscription revenue with a hard pivot to consumption, but it will take time to build up the additional revenue stream.

A few observations:

  • This hybrid approach makes sense for the vendor and the buyer, but it will be an adjustment. Enterprises will want more visibility and transparency, since SaaS vendor deals have become murky.
  • Enterprises won't be totally new to consumption pricing, since AWS, Google Cloud and Microsoft Azure have trained them on the model. Databricks and Snowflake are also consumption based.
  • The consumption bookkeeping will be challenging if a company takes a multi-vendor approach to AI agents.
  • There will be tension with customers since SaaS vendors have already gobbled up too much of the operating expense budget.
  • To track this consumption, it's likely that SaaS vendor deals will be procured through cloud marketplaces. For instance, enterprises may choose to monitor consumption through one dashboard via AWS or another hyperscaler.
  • This model won't be Goldilocks for every enterprise, but it's the approach that'll become the norm for the foreseeable future.
  • It's unclear what the agentic AI value equation turns out to be. I'm not sure the digital labor argument will hold up, especially if consumption surges to the point where AI agent costs are comparable to human per-hour costs.
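
The multi-vendor bookkeeping point above can be made concrete: someone has to roll per-vendor agent usage into one ledger. A minimal sketch, with hypothetical vendors, record fields and rates:

```python
from collections import defaultdict

# Hypothetical per-vendor AI agent usage records; fields and rates
# are illustrative, not real billing exports.
records = [
    {"vendor": "servicenow", "units": 4_000, "unit_cost": 2.00},
    {"vendor": "salesforce", "units": 1_500, "unit_cost": 2.00},
    {"vendor": "servicenow", "units": 1_000, "unit_cost": 2.00},
]

# Roll up spend per vendor, then total it for the month.
ledger = defaultdict(float)
for r in records:
    ledger[r["vendor"]] += r["units"] * r["unit_cost"]

total = sum(ledger.values())
print(dict(ledger))  # {'servicenow': 10000.0, 'salesforce': 3000.0}
print(total)         # 13000.0
```

In practice each vendor will meter different units (resolutions, credits, assists), so normalizing the "units" column is where the real bookkeeping pain will live.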

AWS, Microsoft Azure, IBM watsonx.ai add DeepSeek models via custom import

Amazon Web Services said enterprises and developers can take DeepSeek's R1 model for a spin on Amazon Bedrock via its Custom Model Import feature. IBM also said it will add DeepSeek R1 models to watsonx.ai via its Custom Foundations Model feature and Microsoft Azure made a similar move. 

DeepSeek, a Chinese AI startup that has torched the valuations of US AI stocks such as Nvidia, has released models that can perform as well as pricier foundation models for a fraction of the cost.

That price compression has spurred a flurry of opinions about how DeepSeek may affect the broader market. Price compression is highly likely.

For AWS, which started with an LLM-agnostic strategy, adding something like DeepSeek to Amazon Bedrock isn't a concern. In a community article, AWS said Bedrock's Custom Model Import feature can be used to leverage DeepSeek. AWS is also holding a webinar on deploying DeepSeek models on Bedrock.

Key items in the walkthrough include:

  • The Custom Model Import feature allows you to use externally fine-tuned models on Bedrock's infrastructure.
  • Your DeepSeek R1 model should be based on supported architecture, such as Llama 2, Llama 3, Llama 3.1, Llama 3.2, or Llama 3.3.
  • Prepare your model files in the Hugging Face format and store them in Amazon S3.
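
For orientation, the steps above roughly map to a Custom Model Import request like the sketch below. The bucket, role ARN and names are placeholders, and the real submission (not executed here) would be a boto3 call along the lines of `boto3.client("bedrock").create_model_import_job(**params)`; this sketch only builds the payload.

```python
# Sketch of the payload a Bedrock Custom Model Import job might take.
# All identifiers below are placeholders for illustration only.
params = {
    "jobName": "deepseek-r1-import",             # hypothetical job name
    "importedModelName": "deepseek-r1-distill",  # hypothetical model name
    "roleArn": "arn:aws:iam::123456789012:role/BedrockImportRole",
    "modelDataSource": {
        "s3DataSource": {
            # Hugging Face-format model files staged in S3, per the walkthrough
            "s3Uri": "s3://example-model-bucket/deepseek-r1-distill/"
        }
    },
}
print(params["modelDataSource"]["s3DataSource"]["s3Uri"])
```

The import job runs on Bedrock's side; once it completes, the model is invoked like any other Bedrock model rather than through DeepSeek's own endpoints.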

Holger Mueller, Constellation Research analyst, said:

"AWS wastes no time to keep its 'Switzerland' status when it comes to being home for all LLMs - large and small - as it supports DeepSeek in AWS Cloud. With CISOs probably concerned about any enterprise access - there is likely interest in the AI / Data Science community." 

IBM followed AWS with a similar custom import approach, saying it will add DeepSeek R1 models to watsonx.ai via its Custom Foundations Model feature. The feature is similar to what AWS has in Bedrock, but IBM said DeepSeek R1 models can be based on Llama or Qwen architectures. Qwen models are created by Alibaba.

The watsonx.ai workflow is similar to the custom import on Bedrock. 

Developers need to prepare the DeepSeek files and bring the model into IBM Cloud Object Storage. From there, the model needs a config.json file and must be in safetensors format before deployment.
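
A simple pre-flight check along those lines, assuming (per the description above) that the model folder needs a config.json plus safetensors weights before upload; the directory layout and file names are illustrative:

```python
import pathlib
import tempfile

def ready_for_upload(model_dir) -> bool:
    """Check a model folder for the config.json and safetensors files
    described above before pushing it to object storage."""
    d = pathlib.Path(model_dir)
    return (d / "config.json").is_file() and any(d.glob("*.safetensors"))

# Demo with a throwaway directory standing in for a model folder.
demo = pathlib.Path(tempfile.mkdtemp())
(demo / "config.json").write_text("{}")
(demo / "model-00001-of-00002.safetensors").write_bytes(b"")
print(ready_for_upload(demo))  # True
```

Catching a missing config or wrong weight format locally is cheaper than waiting for a failed deployment after a large upload to object storage.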

Microsoft said DeepSeek R1 is available in the Azure AI Foundry catalog and on GitHub. In a blog post, Microsoft emphasized that DeepSeek models were put through their paces.

"DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks. With Azure AI Content Safety, built-in content filtering is available by default, with opt-out options for flexibility. Additionally, the Safety Evaluation System allows customers to efficiently test their applications before deployment. These safeguards help Azure AI Foundry provide a secure, compliant, and responsible environment for enterprises to confidently deploy AI solutions."

 


Meta on DeepSeek, custom silicon, AI optimizing engineering and business

Meta reported strong fourth quarter results, but the earnings call was much more interesting as CEO Mark Zuckerberg and CFO Susan Li riffed on custom silicon, developing Llama 4 and why building AI infrastructure matters.

The company reported fourth quarter revenue of $48.4 billion, up 21% from a year ago, with net income of $20.84 billion. For 2024, Meta raked in net income of $62.36 billion on revenue of $164.5 billion.

Holger Mueller, analyst at Constellation Research, said Meta is set up financially to invest heavily in AI. 

"Things are going well for Meta, as its business is fundamentally healthy. Despite all the investments, Zuckerberg’s enterprise was able to grow revenue year over year by over $30 billion, but grew profit at the same time $23 billion. Meta earns three quarters on an additional dollar of revenue today and that KPI did not look that favorable in the past. Zuckerberg can keep investing into AI, the metaverse, which could be accelerated by AI, and content creation."

And Meta will invest.

Here's a look at the key takeaways on Meta's investment strategy for AI:

Meta looks at AI as a personalization tool that will have different use cases for each individual. "We believe that people don't all want to use the same AI. People want their AI to be personalized to their context, their interests, their personality, their culture, and how they think about the world," said Zuckerberg. 

Open-source models will win starting with Llama 4. Zuckerberg said: "I think this will very well be the year when Llama and open source become the most advanced and widely used AI models. Llama 4 is making great progress in training. Llama 4 Mini is done and looking good too. It's going to be novel, and it's going to unlock a lot of new use cases."

DeepSeek helps the open-source cause and will bring costs down, but Zuckerberg expects Llama to win. "As Llama becomes more used it's more likely that silicon providers and other APIs and developer platforms will optimize their work more for that and basically drive down the costs of using it," said Zuckerberg. "The new competitor, DeepSeek from China, makes it clear there's going to be an open source standard globally. I think for our kind of own national advantage, it's important that it's an American Standard. We want to build the AI system that people around the world are using. If anything, some of the recent news has only strengthened our conviction that this is the right thing for us to be focused on."

It's too early to know the DeepSeek impact on demand for AI infrastructure. "It's probably too early to really have a strong opinion on what this means for the trajectory around infrastructure and capex and things like that. There are a bunch of trends that are happening here all at once," said Zuckerberg. "I continue to think that investing very heavily in capex and infra is going to be a strategic advantage over time. It's possible that we'll learn otherwise at some point, but I just think it's way too early to call that."

Meta wants AI that will replicate a mid-level engineer. "This is going to be a profound milestone," said Zuckerberg. "Our goal is to advance AI research and advance our own development internally. And I think it's just going to be a very profound thing."

Llama will provide engineering throughput. Li said:

"We expect that the continuous advancements in Llama's coding capabilities will provide even greater leverage to our engineers, and we are focused on expanding its capabilities to not only assist our engineers in writing and reviewing our code, but to also begin generating code changes to automate tool updates and improve the quality of our code base."

The monetization plan for models has nothing to do with licensing or consumption. Zuckerberg noted Meta's plan for AI glasses and investments in AI infrastructure that will improve ads and apps. He said this year will see more growth in Reels on Facebook and Instagram regardless of what happens to TikTok.

Meta AI has more than 700 million active monthly users and updates are planned to deliver more personalized content and monetization efficiency. Meta CFO Susan Li said:

"In the second half of 2024 we introduced an innovative new machine learning system in partnership with Nvidia called Andromeda. This more efficient system enabled a 10,000x increase in the complexity of models we use for ads retrieval, which is the part of the ranking process where we narrow down a pool of 10s of millions of ads to the few 1,000 we consider showing someone. The increase in model complexity is enabling us to run far more sophisticated prediction models to better personalize which ads we show someone. This has driven an 8% increase in the quality of ads that people see."

Meta's capital spending is focused on scaling the footprint and increasing efficiency of workloads. "We're pursuing efficiencies by extending the useful lives of our servers and associated networking equipment. Our expectation going forward is that we'll be able to use both our non AI and AI servers for a longer period of time before replacing them, which we estimate will be approximately five and a half years. This will deliver savings in annual capex and resulting depreciation expense, which is already included in our guidance," said Li. "We're pursuing cost efficiencies by deploying our custom silicon MTIA in areas where we can achieve a lower cost of compute by optimizing the chip to our unique workloads."

Custom silicon is being used for ranking and recommendation inference workloads for ads and organic content. "We expect to further ramp adoption of MTIA for these use cases throughout 2025 before extending our custom silicon efforts to training workloads for ranking and recommendations next year," said Li. "We're also very invested in developing our own custom silicon for unique workloads where off-the-shelf silicon isn't necessarily optimal, and specifically because we're able to optimize the full stack to achieve greater compute efficiency, and performance per cost and power."

Over time, MTIA is going to take on GPU workloads and training. "Next year, we're hoping to expand MTIA to support some of our core AI training workloads, and over time, some of our Gen AI use cases," said Li. 


IBM Q4 better than expected, genAI business surges

IBM delivered better-than-expected fourth quarter results and said its generative AI business including consulting and software is now a $5 billion business, up from $3 billion in the third quarter.

The company reported fourth quarter earnings of $2.98 billion, or $3.11 a share, on revenue of $17.6 billion, up 1% from a year ago. Non-GAAP earnings were $3.92 a share, 15 cents better than Wall Street estimates.

IBM CEO Arvind Krishna said the company is "well-positioned for 2025 and beyond" with annual revenue growth of at least 5%.

For 2024, IBM reported net income of $6 billion, or $6.42 a share, on revenue of $62.8 billion.

By the numbers for the fourth quarter:

  • IBM software revenue of $7.9 billion was up 10% from a year ago with Red Hat revenue up 16%. IBM said that automation revenue was up 15% and data and AI up 4%.
  • Consulting revenue was down 2% to $5.2 billion with the business transformation unit faring the best, but still down 1% in the fourth quarter. Technology consulting revenue was down 7%.
  • Infrastructure revenue was down 7.6% to $4.3 billion.

Krishna made the following points on an earnings conference call:

  • "Our AI portfolio is tailored to meet the diverse needs of enterprise clients, enabling them to leverage a mix of models, IBM's, their own, open models from Hugging Face, Meta and Mistral. IBM's Granite models designed for specific purposes are 90% more cost-efficient than larger alternatives."
  • "We are looking forward to a regulatory environment that is a bit more rational and a bit more pro-competition. So I think what that implies for us is that we think reasonable deals have a very good chance of getting through in a reasonable amount of time and not being held up for years. With that context, we are going to lean in more."
  • "DeepSeek was a point of validation. We have been very vocal for about a year that smaller models and more reasonable training times are going to be essential for enterprise deployment of large language models. We have been down that journey ourselves for more than a year. We see as much as 30 times reduction in inference costs using these approaches. As other people begin to follow that route, we think that this is incredibly good for our enterprise clients."


Microsoft Q2: Azure revenue growth of 31%, AI revenue run rate of $13 billion

Microsoft reported strong second quarter results with revenue growth of 12%, Azure revenue growth of 31% and an AI business annual revenue run rate of $13 billion.

The company reported fiscal second quarter earnings of $24.1 billion, or $3.23 a share, on revenue of $69.6 billion. Wall Street was looking for earnings of $3.11 a share on revenue of $68.78 billion.

Intelligent cloud second quarter revenue was $25.5 billion, up 19% from a year ago. Productivity and business process revenue was $39.4 billion, up 14% from a year ago. Microsoft Cloud revenue was $40.9 billion, up 21% from a year ago.

In a statement, Microsoft CEO Satya Nadella said "we are innovating across our tech stack and helping customers unlock the full ROI of AI."

Nadella addressed multiple topics on the earnings call. Here's a look:

  • He said Microsoft is allocating capital to AI compute as it is seeing "significant efficiency gains in both training and inference for years now." Nadella said: "On inference, we have typically seen more than 2x price performance gain for every hardware generation and more than 10x for every model generation due to software optimization."
  • These efficiency gains for AI workloads will lead to more demand.
  • "We have more than doubled our overall data center capacity in the last three years, and we have added more capacity last year than any other year in our history. Our data centers, networks, racks and silicon are all coming together as a complete system to drive new efficiencies to power both the cloud workloads of today and the next generation AI workloads."
  • Fabric is the fastest-growing analytics product in Microsoft's history, and Power BI has more than 30 million monthly active users, up 40% from a year ago.
  • "We are seeing accelerated customer adoption across all deal sizes as we win new Microsoft 365 Copilot customers, and see the majority of existing enterprise customers come back to purchase more seats. When you look at customers who purchase copilot during the first quarter of availability, they have expanded their seat collectively by more than 10x over the past 18 months."
  • 160,000 organizations have used Copilot Studio to collectively create more than 400,000 custom agents in 3 months.
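
The price-performance figures Nadella cited compound quickly. A quick sketch using the roughly 2x-per-hardware-generation and 10x-per-model-generation gains he described (the generation counts below are hypothetical):

```python
# Compound the gains Nadella cited: roughly 2x price performance per
# hardware generation and 10x per model generation (software optimization).
def price_performance_gain(hw_gens, model_gens, hw_factor=2, model_factor=10):
    return hw_factor ** hw_gens * model_factor ** model_gens

# Two hardware generations plus two model generations:
print(price_performance_gain(2, 2))  # 400
```

That compounding is the basis for Nadella's bet that cheaper inference expands demand rather than shrinking the infrastructure opportunity.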

Amy Hood, CFO of Microsoft, added that the company will continue to balance operational discipline with investments in AI and cloud. As for the outlook, Hood projected the following for the third quarter.

Hood said a stronger US dollar will trim revenue growth by 2 percentage points, but growth should still be in double digits. She said demand for cloud AI offerings should remain strong. Intelligent Cloud revenue should grow between 19% and 20%, with Azure delivering growth of 31% to 32%.

She added that by the end of the year Azure capacity should be in line with near-term demand.

By the numbers for the second quarter:

  • Microsoft 365 Commercial cloud revenue growth was up 16% from a year ago.
  • LinkedIn revenue was up 9%.
  • Dynamics 365 revenue was up 19%.
  • Microsoft 365 had 86.3 million consumer subscribers.

Constellation Research analyst Holger Mueller said:

"Microsoft is in full transfer of product to services revenue, with product revenue down $2.5 billion year over year, but services (and other) revenue up by more than $10 billion. Services come with a higher cost, and cost of revenue is up by more than $2 billion. The result is 30 cents higher EPS. The question is how many quarters Microsoft can repeat the feat – especially as Satya Nadella and Amy Hood for the first time acknowledged capacity challenges for Azure. The next quarter will tell."



ServiceNow launches AI orchestrator, partners with Google Cloud, Oracle, reports Q4 results

ServiceNow launched AI Agent Orchestrator, part of a set of tools designed to put the company's Now Platform in the middle of agentic AI.

The company's move is designed to position the ServiceNow platform as a central place to manage and govern AI agents. To ServiceNow, AI agents are an autonomous extension of its focus on workflows and process automation.

ServiceNow has been building out its AI agents capabilities including the most recent acquisition of Cuein, which manages AI and human chat interactions. Constellation Research analyst Liz Miller said Cuein shows ServiceNow is focused on agentic AI “improvement and optimization rather than reporting and postmortems."

Starting with its Xanadu release, the company began scaling AI agents built into its platform. ServiceNow AI agents are built to leverage data across multiple systems. ServiceNow CEO Bill McDermott said the platform is designed to be an "AI agent control tower to unlock exponential productivity and seamlessly orchestrate end‑to‑end business transformation."

Speaking on ServiceNow's earnings conference call, McDermott said that the innovation is moving to the business value layer. "With the precipitous drop in LLM compute costs, there is much more capital allocation available for the business impact layer," said McDermott, who noted that falling model costs will be a win for the company. "Our position at the center of data AI agents, workflow, orchestration and enterprise governance is the nexus of AI's massive value creation opportunity," he said. 

Here's a look at what ServiceNow announced on the agentic AI front:

ServiceNow AI Agent Orchestrator. The company said AI Agent Orchestrator is designed to enable inter-agent communication and centralized coordination. The focus is on ensuring AI agents can share information and hand off tasks at multiple parts of the process. AI Agent Orchestrator can manage custom AI agents.

AI Agent Studio. ServiceNow said AI Agent Studio can help enterprises create and deploy custom AI agents that are integrated with workflows and the Now Platform. Teams built by AI Agent Studio can be managed by AI Agent Orchestrator.

AI Agent updates for Pro Plus and Enterprise Plus customers. AI Agent Orchestrator and AI Agent Studio will be included in Pro Plus and Enterprise Plus plans with no additional charge. AI agents will be priced on consumption.

Partnerships with Google Cloud, Oracle, Visa

ServiceNow also expanded relationships with Google Cloud, Oracle and Visa.

ServiceNow said it's expanding its partnership with Google Cloud to add the Now Platform to Google Cloud Marketplace. Select ServiceNow applications will be available for regulated industries on Google Distributed Cloud.

In addition, ServiceNow will integrate with Vertex AI, Google Workspace and BigQuery to connect enterprise workflows with Google Cloud end-user applications. The combination of Google Cloud BigQuery and Workflow Data Fabric will make it easier to create and manage AI agents.

For ServiceNow, the Google Cloud Marketplace will add additional distribution for its customer relationship management, IT service management and security incident response applications.

The companies said they will combine go-to-market efforts with ServiceNow CRM and Customer Engagement Suite with Google AI, and make ServiceNow data easier to access within Google Cloud's Workspace.

ServiceNow said the Google Cloud Marketplace integrations will roll out through the second quarter and third quarter. ServiceNow CRM and Customer Engagement Suite with Google AI will launch later this year as will ServiceNow CRM, ITSM and SIR on Google Distributed Cloud.

With Oracle, ServiceNow will integrate its Workflow Data Fabric with Oracle data sources for zero-copy, bi-directional data exchange. Oracle customers will be able to retrieve data from ServiceNow and vice versa.

Specifically, ServiceNow said Workflow Data Fabric will integrate seamlessly with Oracle Autonomous Database and Oracle Database 23ai. ServiceNow customers will be able to access structured and unstructured data directly from Oracle sources.

The Oracle integration with Workflow Data Fabric will be available in the second half of 2025.

ServiceNow said it has also expanded a partnership with Visa to streamline payment dispute workflows for financial institutions via ServiceNow Disputes Management, Built with Visa.

The companies said they will use genAI tools to automate dispute resolution. ServiceNow and Visa announced their partnership last year.

Consumption model pivot, Q4 results

ServiceNow reported in-line fourth quarter results and noted that it would pivot to more of a consumption business model.

The company reported fourth quarter earnings of $384 million, or $1.83 a share, on revenue of $2.96 billion, up 21% from a year ago. Non-GAAP earnings were $3.67 a share.

For 2024, ServiceNow reported net income of $1.425 billion, or $6.84 a share, on revenue of $10.98 billion, up 22%.

As for the outlook, ServiceNow said it was getting hit by a strong US dollar that will ding subscription revenue by about $175 million in 2025. ServiceNow also said its US federal business will be more back-end loaded due to a new presidential administration.

In addition, ServiceNow said it will pivot to more of a consumption-based model, which usually requires a transition period and brings revenue disruption. ServiceNow projected 2025 revenue between $12.63 billion and $12.67 billion, up 18.5% to 19%. Non-GAAP revenue growth for 2025 will be 19.5% to 20%.

The company said:

"In 2025, we will begin shifting more of our business model to include elements of consumption-based monetization across our AI and data solutions. For instance, we will include our new AI Agents in our Pro Plus and Enterprise Plus SKUs, forgoing upfront incremental new subscriptions to instead drive accelerated adoption and monetize increasing usage over time. We are also optimizing certain aspects of our go-to-market approach and creating more integrated solutions that we will announce at Knowledge 2025. Our guidance prudently reflects the flexibility to make these moves while delivering further free cash flow generation."

McDermott explained the role of consumption-based models. 

"We have predicted and protected our seat-based subscription staying as is. It's a foundation you can feel secure in. We have also included a massive upgrade path to Pro Plus, RaptorDB and Workflow Data Fabric. Seat-based subscription is still there, and then we have these upgrade paths to the new innovation. Customers still like the predictability of this approach, and they're committed to long-term transformation on our platform, so we are also enabling elements of consumption-based pricing as AI agents become a potent value driver for the enterprise. While we could have launched an additional SKU and offered AI agents as an add-on to drive more immediate revenue growth, our strategy prioritizes accelerating adoption."


DeepSeek: What CxOs and Enterprises Need to Know

DeepSeek has become an overnight sensation, rattled the US #AI sector, and may have single-handedly focused CxOs on the cost of #genAI. We convened a call of Constellation Research analysts to outline the issues CxOs need to know about when it comes to DeepSeek.

Watch the full conversation and read the article summary by Larry Dignan here: https://www.constellationr.com/blog-news/insights/deepseek-what-cxos-and-enterprises-need-know

On ConstellationTV: https://www.youtube.com/embed/ztkiR_cod44?si=q-edlD6vC-Si-pEf

Starbucks aims for 4 minute barista to customer handoff process to boost CX


Starbucks is leaning in on process improvement for mobile orders, optimization and technology to get wait times down to 4 minutes in most of its cafes, said CEO Brian Niccol.

Niccol, who joined the company from Chipotle and outlined the Back to Starbucks plan to reinvigorate the brand and sales, talked process on the company's first quarter earnings call. The efforts at Starbucks are worth watching given that they reside at the intersection of process, customer and employee experience, and omnichannel retail.

Starbucks said the company is investing in labor, marketing, technology and stores to stabilize the business, and is revamping support teams to execute on its Back to Starbucks plan.

Speaking on a conference call, Niccol said:

"The handoff from our barista to the customer is our brand moment of truth, and we've been working hard to get that moment right. Through the quarter, we've continued to test and learn as we position the business to achieve our four-minute throughput goal with a moment of connection."

Niccol added that order sequencing is everything and has created more of a bottleneck than capacity. "Investments in staffing and deployment, processes and algorithm technology demonstrate the greatest opportunity to deliver a four-minute wait time in most of our cafes," he said.

To improve the process, Starbucks is:

  • Optimizing labor with precision scheduling and adding coverage hours.
  • Simplifying beverage builds with new brewed coffee and tea routines.
  • Improving processes in-store and via mobile ordering, including reducing menu selections by 30% in both food and beverages.
  • Optimizing its supply chain to fund further investments.
  • Creating a Chief Store Officer role to "be all about driving excellence in our stores."
  • Betting that improvements in the partner experience boost customer experience.

Niccol said:

"Looking forward, we're beginning to pilot a new in-store prioritization algorithm and are exploring other technology investments to improve order sequencing and our efficiency behind the counter. We're also progressing efforts that build on the strength and popularity of the Starbucks app. This includes development of a capacity-based time slot model that allows customers to schedule mobile orders and a midyear update that will simplify customization options, improve upfront pricing, and provide real-time price changes as customers customize beverages.

Lastly, we're planning to fully deploy digital menu boards in cafes across our US company-owned stores over the next 18 months to make our offerings more easily understood and to better show customization add-ons."

The working theory is that Starbucks can improve the customer experience, simplify operations and drive repeat business.

Niccol added that it's still early in the process. Starbucks' earnings in the first quarter met expectations, but indicated the company has a lot more work to do.

Starbucks first quarter revenue was $9.4 billion, flat compared to a year ago. US same store sales fell 4% in the first quarter, but showed improvement through the quarter. Ticket growth in the US was up 4% and Starbucks curbed discounting.

 


DeepSeek: What CxOs and enterprises need to know


DeepSeek has become an overnight sensation, rattled the US AI sector and may have single-handedly focused CxOs on the cost of genAI.

We convened a call of Constellation Research analysts to outline the issues CxOs need to know about when it comes to DeepSeek.

Here's what you need to know about DeepSeek.

What is DeepSeek?

DeepSeek is a Chinese AI company that develops open-source large language models. It has launched a series of models that can compete with the likes of OpenAI's ChatGPT, Anthropic's Claude family of models and Meta's Llama. Constellation Research CEO Ray Wang said DeepSeek has "democratized the access to AI." Wang noted that the other benefit is that the model can run in private environments without top-of-the-line hardware. DeepSeek is also censored, as anyone who has asked the service about Winnie the Pooh or other topics that are sensitive in China can attest.

What's the big deal about DeepSeek?

The hubbub surrounding DeepSeek in a nutshell is that the company "proved a point that you don't need a gazillion dollars to train an AI model," said Constellation Research analyst Andy Thurai.

DeepSeek has "proven not only that you can find cheap but also the fact that you can open source the entire thing, which means others can start using it or building it, which is going to challenge all those big guys," said Thurai.

DeepSeek also garnered a lot of attention because Wall Street decided a week after its latest model release that perhaps Nvidia customers didn't need the latest and greatest GPUs.

What did DeepSeek do that was different?

Holger Mueller, analyst at Constellation Research, said:

"Not having the best computing resources always makes for better models and software. China doesn't have availability of so many GPUs, so people get creative. The distillation really worked. The second really important thing is that DeepSeek has been training without human intervention."

What's unclear is how much DeepSeek piggybacked off of larger models and IP from around the globe. "It's going to be interesting to see what kind of IP battle is going to unfold," said Thurai.

What should CXOs do?

For now, it's best to monitor DeepSeek, think through use cases and, if you experiment, make sure it's air-gapped and sandboxed. Don't ignore the DeepSeek developments, though. Constellation Research analyst Chirag Mehta said:

"If you're a CxO, the best analogy is what open source did to the industry. That's what this model is now doing to its competitors.

You have two options: Buy the Ferrari, the high-end platform-as-a-service model, or go with smaller, specialized, narrow models that are cheaper to run and almost free. Open source is not quite free. You still have to manage it, you still have to run it, and you have to maintain it."

Mehta said CxOs need to keep their model options open and stay focused on the problem they're trying to solve with genAI.

Wang said to focus on the cost curve:

"At this point, we know that it's possible to do reasoning at a lower cost and lighter models are going to be available. We know that people are going to want to do this outside of the cloud and back on premises. The cost curve is coming down on AI, and I think you're going to see more of that. And I think those monetization models are important."

Will DeepSeek mean on-premises AI?

The jury is out on this one, with opinions mixed. Holger Mueller said AI workloads will reside in the cloud for the most part. "I still see larger models winning and cloud winning. You might see a dip in revenue. That's totally possible," said Mueller.

Mehta noted that on-prem vs. cloud AI isn't zero sum, but the majority of workloads will go to the cloud with the exception of edge computing use cases.

Should Wall Street be this concerned about DeepSeek?

Thurai said concerns are overblown. If you are building an LLM or using one for inferencing, you're still likely to use an Nvidia stack. Where it gets interesting is if DeepSeek used AMD GPUs. "This is a knee-jerk reaction and it's going to continue for a while," said Thurai.

Wang said the concerns are more about big spending tech giants and whether the capex will be questioned. He said:

"We have to figure out if it makes sense for a Microsoft to spend $80 billion a year on capex to build out data centers. The short answer is that Microsoft has to do it. It's really about the payback period that's going to actually hit them. The second question is whether we need to pay this much for token economics.

"We're living in a world we call exponential efficiency. If you're not 10 times better at one-tenth the cost, nobody cares. And we're at this point where our existing software vendors have made life so expensive to hold their stock price. So this reset is a good thing in general, because it's going to lower the cost of technology for customers. It's a bad thing for stock investors, because we're going to see valuations at the top plummet if you're not one of the winners."


What are the security concerns with DeepSeek?

Mehta said there are concerns about prompt injection and jailbreaking DeepSeek. "AI security is one of the biggest topics for CxOs," said Mehta. "If you don't know how the model has been trained, what data has been used, and how easy or difficult it's going to be to actually break it, do you really want to use that model for your most sensitive data and use cases? Are you really going to do that?"


Transformative Power of AI Agents in the Enterprise | With Workday CTO Jim Stratton


Don't miss another #Davos2025 conversation, this time between R "Ray" Wang and Workday CTO Jim Stratton. They discuss role-based #AI agents becoming full-fledged members of the #digital workforce and driving real ROI 📈 for Workday customers by automating #business workflows in HR, finance, and procurement.

Stratton emphasizes the importance of balancing human and machine decision-making: agents handle repetitive tasks so employees can focus on higher-level, strategic work. Both parties agree that companies must evolve workforce management practices to govern this new digital workforce effectively.

Watch the full conversation and let us know your thoughts on the future of AI agents! #WEF25

On ConstellationTV <iframe width="560" height="315" src="https://www.youtube.com/embed/m-dlLjx2Nqc?si=_aizxWC50L-Or9Z7" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>