
Moderna uses OpenAI's ChatGPT Enterprise to scale 750 GPTs

Moderna said it is using OpenAI's ChatGPT Enterprise to scale custom GPTs across its business.

According to Moderna, the company first launched its own instance of ChatGPT, called mChat, built on OpenAI's API. After mChat reached 80% internal adoption, Moderna deployed ChatGPT Enterprise with analytics, image generation and GPTs.

Moderna said that it has deployed more than 750 GPTs across the company and multiple functions including legal, research, manufacturing and commercial. These assistants augment employees and offer personalized support.

One example of these GPTs is Dose ID GPT, which uses ChatGPT Enterprise to evaluate the optimal vaccine dose. Dose ID provides rationale, references to sources and generates charts of key findings.

The Moderna example illustrates how OpenAI is scaling its enterprise efforts beyond the reach it has with Microsoft. For instance, OpenAI's Moderna case study quotes Brice Challamel, Head of AI Products and Platforms at Moderna, saying that Moderna evaluated mChat, Copilot and ChatGPT Enterprise before making a decision.

Other takeaways on the Moderna-OpenAI partnership:

  • Moderna's 750 GPTs took about two months to create.
  • Each user has 120 ChatGPT Enterprise conversations per week on average.
  • 40% of weekly active users created GPTs.
  • The legal team has 100% adoption of ChatGPT Enterprise.

Nvidia acquires Run.ai for GPU workload orchestration

Nvidia said it has acquired Run.ai, a startup focused on GPU workload management and orchestration.

Run.ai's platform is Kubernetes-based and will help Nvidia customers distribute workloads across cloud, edge and data center infrastructure.

Terms of the deal weren't disclosed, but CTech put the purchase at about $700 million.

Nvidia said Run.ai's open platform will serve as an orchestration layer for GPU clusters. AI deployments will need the ability to orchestrate and optimize generative AI training and inference for speed and cost.

Run.ai's platform provides a centralized interface to manage compute infrastructure, access to cluster resources and the ability to pool resources.
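To make the resource-pooling idea concrete, here is a toy sketch of the concept. This is not Run.ai's API; the class, quota logic and numbers are invented purely to illustrate how an orchestration layer can share a fixed set of GPUs across teams under per-team quotas:

```python
# Toy illustration of GPU pooling with quotas -- a simplified model of
# what an orchestration layer like Run.ai provides, not its actual API.
class GpuPool:
    def __init__(self, total_gpus: int):
        self.total = total_gpus
        self.allocated: dict[str, int] = {}

    def request(self, team: str, gpus: int, quota: int) -> bool:
        """Grant GPUs only if the pool has capacity and the team stays under quota."""
        used = self.allocated.get(team, 0)
        in_use = sum(self.allocated.values())
        if in_use + gpus > self.total or used + gpus > quota:
            return False
        self.allocated[team] = used + gpus
        return True

pool = GpuPool(total_gpus=8)
assert pool.request("research", 4, quota=6)      # granted
assert pool.request("inference", 4, quota=4)     # granted, pool now full
assert not pool.request("research", 1, quota=6)  # denied: pool exhausted
```

A real scheduler adds fractional GPUs, preemption and priority classes, but the core trade-off is the same: centralized allocation against cluster capacity and team quotas.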

According to Nvidia, Run.ai's products will keep their current business model for the immediate future. Run.ai's product roadmap will be integrated into Nvidia DGX Cloud. Nvidia DGX and DGX Cloud customers will get access to Run.ai for AI workloads and large language model deployments.

Run.ai is already integrated with much of Nvidia's software including Nvidia AI Enterprise and DGX.

 


Snowflake launches Arctic LLM, to release details under Apache 2.0 license

Snowflake launched Arctic, an open-source large language model (LLM) optimized for enterprise workloads and efficiency. The move highlights how data platform vendors are increasingly launching LLMs to pair with their platforms.

For Snowflake, Arctic is also among the first launches under new CEO Sridhar Ramaswamy's tenure. Arctic will be part of a larger LLM family built by Snowflake.

The Arctic launch also lands shortly after Databricks released DBRX, an LLM that has been well-received. Ramaswamy said Arctic represents a "watershed moment for Snowflake" and highlights what open-source AI can do. For good measure, Meta launched its Llama 3 LLM last week. Ramaswamy's mission is to speed up Snowflake's product cycles and innovation.

Snowflake said it will release Arctic's weights under an Apache 2.0 license and detail how the LLM was trained. Snowflake is pitching Arctic as an LLM that balances intelligence and compute efficiency. Snowflake's plan is clear: Scale Arctic usage to the 9,400 companies on its data platform. These companies will then consume more of Snowflake's platform.

Key points to note about Snowflake Arctic:

  • Snowflake is ensuring it has open-source credibility. Snowflake said it will provide code templates, flexible inference and training options and the ability to customize Arctic via multiple frameworks.
  • Frameworks for Arctic will include Nvidia NIM with Nvidia TensorRT-LLM, vLLM, and Hugging Face.
  • Arctic will be available for serverless inference in Snowflake Cortex, which offers machine learning and AI in the Data Cloud along with model gardens and catalogs from Nvidia, Hugging Face, Lamini, Microsoft Azure and Together.
  • Snowflake said Arctic's mixture of experts (MoE) architecture is designed to activate 17 billion of its 480 billion parameters at a time for token efficiency. Snowflake claims it activates roughly 50% fewer parameters than DBRX.
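Snowflake's efficiency claim can be sanity-checked with quick arithmetic. Arctic's figures come from the announcement; DBRX's publicly reported sizes (roughly 132 billion total parameters, 36 billion active per token) are an assumption here:

```python
# Back-of-the-envelope check on Snowflake's efficiency claim.
arctic_total, arctic_active = 480e9, 17e9  # from the Arctic announcement
dbrx_active = 36e9                         # assumed from DBRX's reported specs

share_active = arctic_active / arctic_total   # share of weights used per token
reduction = 1 - arctic_active / dbrx_active   # vs. DBRX's active weights

print(f"Arctic activates {share_active:.1%} of its weights per token")
print(f"~{reduction:.0%} fewer active parameters than DBRX")
```

With those inputs, Arctic uses about 3.5% of its weights per token and roughly half the active parameters of DBRX, which lines up with Snowflake's "roughly 50% fewer" framing.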

Although Snowflake launched Arctic, the company said it will still give access to multiple LLMs in its Data Cloud.

Constellation Research's take

Constellation Research analyst Doug Henschen said:

"It's good to see Snowflake moving quickly, under new CEO Sridhar Ramaswamy, to catch up in the GenAI race. Snowflake rival Databricks started introducing LLMs last year with its release of Dolly and it followed up this March with the release of DBRX, an open-source model that seems to be getting lots of traction. Snowflake is clearly responding to the competitive threat, given the press release's comparisons between the new Arctic LLM and DBRX. I'd like to know more about the breadth of intended use cases. Snowflake says Arctic outperforms DBRX, Llama 2 70B and Mixtral-8x7B on coding and SQL generation while providing "leading performance" on general language understanding. I'd like to see independent tests, but the breadth of customer adoption will be the ultimate gauge of success. It's important to note that Snowflake Cortex, the vendor's platform for AI, ML and GenAI development and deployment, is still in preview at this point. As a customer I would want to look beyond the performance claims and know more about vendor indemnification and risks when using LLMs in conjunction with RAG techniques."

Constellation Research analyst Andy Thurai said:

The war between open-source and closed-source LLMs is heating up with multiple competitors. Massive LLMs are available for free, allowing enterprise users to fine-tune them with their enterprise data. On that note, a few items from this release are notable:

  • Arctic is licensed under Apache 2.0, which permits ungated personal, research and commercial use. This differs from many other open-source LLMs, such as Meta's Llama series, which allows personal and research use but places limitations on commercial use.
  • Databricks, which is gaining market share and momentum fast, got a massive leg up from its acquisition of MosaicML in knowledge, skilled resources, a stockpile of GPUs, and the expertise to train massive LLMs. Every big cloud and data vendor is going after this market with its own variations, including Google, AWS, Databricks, Microsoft, IBM, Anthropic, Cohere, Salesforce, Twitter/X, and now Snowflake.
  • Snowflake aims to make this process easier by providing code templates and flexible inference and training options to deploy in existing AI and ML frameworks.
  • Snowflake provides options for serverless inference which could help with expanding massively distributed inference networks to operate on demand.
  • The company is trying to go after two specific markets--search and code generation.
  • Snowflake has an advantage over other LLM providers from a data lake standpoint. If Snowflake can convince users to keep their data in its data lake to train custom models, or to fine-tune them or apply RAG, it can compete easily. That said, Snowflake is late; many enterprises are already experimenting with other LLM providers. By providing hosting options for many other open-source LLMs in Cortex alongside Arctic, Snowflake is hoping to catch up.

Microsoft expands at Coca-Cola as part of multi-cloud strategy that includes AWS

Microsoft and The Coca-Cola Co. announced a five-year $1.1 billion deal that includes Azure OpenAI, Microsoft 365 and apps including Power BI and Dynamics 365 as the beverage company rounds out its multi-cloud strategy.

The announcement is part of an emerging multi-cloud strategy at Coca-Cola. Amazon Web Services is also a provider at the company and its various partners and units. Coca-Cola is sprawling and has a bevy of publicly traded bottlers in its orbit (Coca-Cola European Partners, Coca-Cola Femsa and Coca-Cola Bottling Co.). In other words, Coca-Cola likely has more than one technology vendor in every category.

Coca-Cola's contract with Amazon is a global one that includes AWS, Prime and Amazon Ads, sources say. The Coca-Cola Co. has been an AWS customer since 2013 and continues to use multiple services.

Under the Microsoft partnership, Coca-Cola has made a $1.1 billion commitment that includes the following:

  • Experiments with Azure OpenAI Service for generative AI use cases and testing of Copilot for Microsoft 365. Microsoft said Coca-Cola has been using Azure OpenAI Service for a year.
  • Migration of applications to Microsoft Azure.
  • Use of multiple Microsoft cloud applications and platforms.

Coca-Cola's initial partnership with Microsoft was worth $250 million in 2020.

Coca-Cola doesn't disclose its annual technology budget in its regulatory filings or the breakdown of its cloud spending. However, companies are starting to disclose the splits between cloud vendors. For instance, Equifax recently disclosed how it has split its spending between Google Cloud and AWS in recent years. We detailed the breakdown in our Equifax customer story and how it is approaching AI and data products.


CR CX Convos: Is Customer Success Successful?

Customer success can be a valuable, meaningful and profitable endeavor for technology buyers and vendors. So why does customer success seem to miss the mark so often? Is it a misalignment of goals? Teams coming into the process too late? Perhaps a combination of a multitude of factors? When Liz Miller first wrote about customer success in a blog post, she didn't think she'd hit a nerve...but she did. So, the conversation continues! In this CR CX Convo, Liz dives back into the customer success conversation and shares a use case story of a complex brand that leveraged its field service management partner, IFS, to define value and turn customer success into a means of rapid, on-demand scale that delivered even greater value.

 

On cx_convos: Watch on YouTube: https://www.youtube.com/embed/tXDU9aaZMLI

Analytical Data Platforms 101: Data Lakes, Data Warehouses and 'Lakehouses' Explained

Constellation Research explores how analytical data platforms are evolving and what to expect in a modern platform.

On Insights: Watch on YouTube: https://www.youtube.com/embed/Sdc8idvK9ds

Amazon Bedrock gets custom model import, evaluation tools, new Titan models

Enterprises will be able to import their own large language models (LLMs) into Amazon Bedrock and evaluate models based on use cases, said Amazon Web Services. AWS also launched two Amazon Titan models.

The ability to import custom models and evaluate them plays into the broader themes of generative AI choice and orchestration. AWS' Bedrock bet is that enterprises will use multiple models and need a neutral platform to orchestrate them.

In addition, enterprises are showing broad interest in using open-source models and then customizing with their own data.

According to AWS, Amazon Bedrock Custom Model Import will give companies the ability to import and access their custom models via API in Bedrock. These custom models could be added to the various model choices in Bedrock.

AWS' Matt Wood on model choice, orchestration, Q and evaluating LLMs

Enterprises will be able to add models to Amazon Bedrock that were customized in Amazon SageMaker or a third-party tool, with automated validation. AWS said imported custom models and Amazon Bedrock models will use the same API. Custom Model Import is in private preview and supports the Flan-T5, Llama and Mistral open model architectures, with more planned.
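The "same API" point can be illustrated with a sketch. This is not AWS sample code: the helper function, imported-model ARN, account ID and request body below are hypothetical, and actual payload schemas vary by model family, but the call shape mirrors Bedrock's runtime `invoke_model` operation:

```python
import json

def build_invoke_request(model_id: str, prompt: str) -> dict:
    """Build kwargs for a bedrock_runtime.invoke_model() call. The request
    shape is the same whether model_id names a Bedrock-hosted model or an
    imported custom model's ARN."""
    return {
        "modelId": model_id,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"prompt": prompt, "max_tokens": 256}),
    }

# Same call shape, two models -- only the identifier changes.
hosted = build_invoke_request("amazon.titan-text-express-v1", "Summarize Q1.")
imported = build_invoke_request(
    # Hypothetical ARN for an imported custom model.
    "arn:aws:bedrock:us-east-1:123456789012:imported-model/my-model",
    "Summarize Q1.",
)
```

The practical upshot for enterprises is that application code written against Bedrock's runtime API should not need rewriting when a first-party model is swapped for an imported one.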

AWS also said Model Evaluation in Amazon Bedrock is now available. Model Evaluation analyzes and compares models for various use cases, workflows and other evaluation criteria. Bedrock is also getting Guardrails for Amazon Bedrock, which gives enterprises control over model responses.

As for the Titan models, AWS said Amazon Titan Image Generator, which has invisible watermarking, and the latest Amazon Titan Text Embeddings are generally available exclusively in Bedrock. Amazon Titan Text Embeddings V2 is optimized for retrieval-augmented generation (RAG) use cases.

 


Google Unveils Ambitious AI-Driven Security Strategy at Google Cloud Next'24

In eight years, the Google Cloud Next conference has come a long way: from a few hundred attendees at a nondescript location in San Francisco to 30,000-plus attendees in one of the largest convention centers in Las Vegas, with a private concert in a football stadium. And so has Google Cloud, which grew its revenue 5x in the last five years.
 
As conference attendees patiently waited in a long line at an overpriced Starbucks for their caffeine fix on the way to a keynote, then rushed between keynotes, sessions and the expo, with a pit stop at a crummy food court before retiring to their hotel rooms, you could not go even a minute without hearing the word “AI.” The energy was evident and the enthusiasm palpable. In one of the most striking moments, during a developer keynote, one of the Google Cloud speakers referred to 2023 as “legacy.” That’s how much Google Cloud has changed.

Standing tall on a keynote stage in front of thousands of attendees, Thomas Kurian, the CEO of Google Cloud, announced their latest AI model, Gemini 1.5 Pro. A much taller screen behind him flashed a slide in large Google Sans font. An audible gasp from the audience followed. This AI model has a context window of 1 million tokens, the largest in the industry, which translates to analyzing data across modalities such as one hour of video, 30,000 lines of code, 11 hours of audio or 700,000 words. Gemini now powers various Google Cloud offerings, including its security portfolio.

Source: Google Cloud

Google Cloud's Differentiation Strategy: AI and Security Take Center Stage

When questioned about competition in a Q&A session with analysts, Kurian, in a measured and deliberate answer, highlighted AI and security as Google Cloud's key differentiators. He also touted Google Cloud's open approach of giving customers options at each layer, whether they prefer first-party, third-party or open-source AI models.

 
AI and ML are a natural fit for security, as they can help process billions of signals to identify attack patterns, profile adversaries, and respond to threats in real-time. Generative AI copilots have become increasingly prevalent in enterprise software, providing a conversational interface for end users to analyze extensive data and gain insights through simple English queries. Google Cloud has also embraced this trend, announcing Gemini to integrate AI capabilities across its product portfolio, including security offerings.

The Evolution of Google Cloud's Security Strategy

Google Cloud's security narrative traces back to Google's early days as a native cloud company, where scaling infrastructure to support billions of users securely was paramount. Google developed its security stack from hardware to applications, encompassing nearly every system layer. Google then brought this native security to Google Cloud to differentiate and compete against other public cloud providers, Amazon and Microsoft. It also paved the way for externalizing internal services as products.
 
In 2017, Google introduced security services like Identity Aware Proxy (IAP) and Data Loss Prevention (DLP), but they gained little traction. The 2018 announcement of Security Command Center (SCC) in beta marked a turning point, serving as a hub for protecting Google Cloud assets and integrating ISV partner solutions. While Google Cloud wasn't yet seen as a security solutions company, this marked the beginning of its SecOps journey.
 
In 2019, Google integrated Chronicle (with Backstory and VirusTotal) from Alphabet's X moonshot factory into Google Cloud, followed by the 2022 acquisition of Mandiant. These additions expanded Google Cloud's security portfolio but introduced disparate products with different purposes, target customers, and underlying cloud platforms.
Since then, Google Cloud has focused on unifying the experiences, expanding Chronicle into a SecOps platform with SIEM and SOAR functionality, and competing as a security vendor beyond just a cloud provider. The announcement of Security Command Center Enterprise at Next'24 marked a significant milestone, converging cloud security with enterprise security and supporting AWS and Azure on the same "security fabric" as Google's SecOps functionality. Additionally, Google consolidated all security teams under a single organization with a general manager overseeing security as a business beyond Google Cloud, making Next'24 a coming-out party for Google Cloud's security offerings.

Unifying Threat Intelligence and Security Operations

To better understand cybersecurity, let's draw a parallel with investment banking. Investment banks have equity research teams that analyze stocks and provide recommendations, while sales and trading teams execute transactions based on this research. They collaborate, sharing real-time market insights to inform trading strategies.
 
Similarly, in cybersecurity, threat intelligence analyzes adversaries, incidents, and malware by scanning vast data volumes. This intelligence is crucial for operations teams managing digital asset security. However, these teams often lack shared tools and unified underlying information.
 
Google Cloud aims to change this with a new risk management approach, unifying CNAPP and TDIR (threat intelligence, detection, and response) on a single "security fabric" platform. By combining multiple categories, Google is attempting to define a new category on its own, a feat no other security vendor has successfully achieved. While categories can be limiting and primarily benefit vendors, Google does have an opportunity to differentiate, but it’s not going to be easy.

Source: Google Cloud

Securing Chrome as an Endpoint

As customers access applications through web browsers, Google recognizes the importance of securing Chrome as an endpoint to provide robust protection. With Chrome's widespread adoption in the enterprise sector, customers seek enhanced security features without requiring a separate browser. By treating Chrome as a secured endpoint, Google can significantly enhance endpoint security, even if operating system vulnerabilities remain. This approach also opens up a new cybersecurity category for Google to explore.
 
At Next'24, Google announced the general availability of Chrome Enterprise Premium, featuring Data Loss Prevention (DLP) and context-aware proxy capabilities. Notably, Google Cloud introduced Identity Aware Proxy (IAP) and DLP in 2017. While many customers have initiated their Secure Access Service Edge (SASE) journey, most won't complete it in the near future. Instead, they're seeking incremental solutions like proxies and DLP to supplement their existing tools. Google Cloud can capitalize on this demand with its offerings and those from its ecosystem partners, providing a comprehensive security suite.

Balancing AI Innovation with Core Cloud Security Investments

Google has acknowledged that certain areas, such as NGFW (Next-Generation Firewall), require significant investment without clear differentiation, and has opted to partner with Palo Alto Networks instead of building its own software firewall. This approach allows Google to serve its customers while avoiding unnecessary development efforts.
 
Humans are the weakest link in security. Google's previous lack of focus on Identity and Access Management (IAM) has been surprising, given its critical role in cybersecurity. However, Google is now addressing this underinvested area, introducing Privileged Access Manager in preview at Next'24. Leveraging AI, Google has the opportunity to revolutionize identity management and automate manual workflows, reducing the risk of severe breaches often caused by human error.
 
In addition to IAM, Google is investing in other critical security areas, such as Cloud Armor for DDoS protection and Autokey for customer-managed encryption keys. It's crucial for Google to continue differentiating its core cloud security offerings, ensuring the core platform receives the necessary investment, rather than disproportionately focusing on AI.

Challenges ahead 

Google Cloud has laid a strong foundation with its security portfolio, and its intentions are commendable. Nevertheless, there are challenges that need to be addressed to fully realize its vision.

Crafting a Unified and Compelling Security Story

Google Cloud's security portfolio messaging is overly product-focused, rather than solution-centric. Customers are looking for solutions; they don’t want to solve a puzzle of mapping problems to products. On top of that, by combining CNAPP and TDIR, Google is creating a unique category, but this complexity can make it difficult for customers to understand how Google Cloud security offerings align with their specific needs, especially if they don't primarily use Google Cloud. This is particularly challenging for organizations that don't consider Google a security vendor, despite Google's multi-cloud security offerings. 
 
Google will have a decision to make: Does it see its security offerings as a differentiator that gets customers to consider Google Cloud, or does it see them differentiating on their own merits regardless of the underlying cloud platform they run on? Ideally, Google can achieve both, being the most secure cloud and a highly differentiated multi-cloud security vendor, but it needs to articulate a unified and compelling story that supports that vision.

Hybrid Environments and The IoT and OT Security Gap

Despite growing adoption of public cloud, 80% of IT workloads remain on-premise. While public cloud providers anticipate customers will modernize and migrate their on-premise workloads, the reality is that the majority of legacy workloads will remain on-premise in the near future. Moreover, the vast array of connected Operational Technology (OT) and Internet of Things (IoT) devices, running various legacy and non-standard operating systems, poses significant security challenges. These systems are often exploited by attackers, who leverage them as gateways to propagate malware to other systems.
 
As a native cloud company, Google is not well-positioned to secure these on-premise assets, nor should it attempt to build on-premise solutions. Instead, Google needs to foster strong partnerships to address this critical gap. Most organizations will operate in a hybrid environment for the foreseeable future, and industries with a high concentration of vulnerable OT and IoT devices—such as healthcare, retail, oil and gas, manufacturing, mining, and utilities—require comprehensive security solutions. Google Cloud must develop a compelling and comprehensive strategy to support these organizations and ensure the security of their on-premise workloads and devices.

Balancing Competition and Partnership, a Delicate Dance

Google is navigating a delicate balance with its security partners, many of whom have made significant investments in Google Cloud to serve their shared customers. While Google will inevitably compete with some of these partners, not just on Google Cloud but across public cloud platforms, it has committed to protecting their interests and has contractual obligations in place. These partners generate substantial revenue for Google Cloud, making it crucial for Google to tread carefully and avoid disrupting this revenue stream.
 
To strike the right balance, Google must compete and differentiate its offerings with clarity and integrity, driven by a genuine vision to innovate and improve products, rather than simply seeking to boost growth. By doing so, Google can maintain trust and collaboration with its partners while advancing its own goals.

Recommendations for customers

Assess and Align Your Security Tools Landscape

As a Google Cloud prospect or customer, take a comprehensive inventory of your current security tools landscape, encompassing Google Cloud and its partner ecosystem. Engage with Google Cloud and security tool vendors to discuss their roadmaps for Google Cloud, with a specific focus on how they plan to leverage AI to address your unique requirements. Additionally, consider exploring tools that offer multi-cloud support, regardless of your primary cloud provider, to future-proof your security infrastructure.

Go Beyond Categories to Solve Specific Problems

Instead of shopping by category, focus on solving specific problems. Shift your mindset from being a vendor manager to a problem-solver, targeting the outcomes you want to achieve. Remember, your unique starting point and system maturity may differ from others, so avoid a one-size-fits-all approach. Tailor your solutions to your distinct needs and goals.

Advocate for Your Needs and Shape the Security Conversation

Clearly articulate and communicate your expectations to vendors, as security is a complex and vast domain. Recognize that vendors prioritize their product roadmaps, just as you prioritize your needs. Engage in open discussions with fellow security and technology leaders to share strategies and learn from their experiences. Be an active and vocal participant in the community, shaping the conversation and influencing the solutions that meet your needs.

Stay Ahead of Cybersecurity Threats with Expert Trends Report

As you craft your security strategy and execution plan, check out our "11 Top Cybersecurity Trends of 2024 and Beyond." (If you're a vendor and don't have access to the report please contact me for a courtesy copy.) Drawing insights from numerous conversations with security, technology, and business leaders as well as extensive market research, this cybersecurity trends report offers a holistic view into the broader cybersecurity landscape. It also offers tangible recommendations for CxOs who are frantically navigating the cybersecurity maze to design and operationalize their cybersecurity strategy, with the objective to improve their defenses against increasingly sophisticated attacks.


SAP's Q1: Net loss on restructuring, cloud revenue growth of 24%

SAP reported a first quarter loss due to a restructuring charge but said its cloud revenue was up 24% with cloud ERP revenue growth of 32%. SAP said its cloud backlog was €14.2 billion, up 27% from a year ago.

The company's first quarter results come amid a restructuring effort that cut 8,000 jobs. SAP is also moving customers to S/4HANA through its RISE with SAP program and looking to layer business intelligence throughout its applications via generative AI.

SAP reported first quarter revenue of €8.04 billion, up 8% from a year ago with a net loss of €824 million. That sum included a €2.2 billion restructuring charge. Adjusted earnings were €944 million, down from €1.01 billion a year ago.

CEO Christian Klein said SAP was "off to a great start in 2024 and we’re confident we’ll achieve our goals for the year." SAP reiterated its outlook for the year. Klein said that Business AI, cross-selling and winning midmarket customers would drive growth.

Dominik Asam, SAP CFO, said the company's restructuring effort is designed to allow the company "to focus our investments on the Business AI opportunity while decoupling expenses from revenue growth."

SAP’s quarterly report lands as Constellation Research’s BT150 CXOs have gripes about SAP’s RISE program. Some of the feedback:

  • One CIO asked the group for opinions on SAP's RISE program and being forced from on-premises to the cloud. The goal was to have a strategy for SAP in place by the end of the year.
  • CXOs weren't thrilled about SAP RISE and items like licensing credits for legacy environments. A CIO wondered what would prevent a customer from moving away from SAP--especially since the enterprise operates in a space that doesn't garner investment from the enterprise software giant.
  • SAP's RISE program is viewed as an exercise in financial engineering more than something that benefits customers.

Our BT150 CXOs aren't alone. Related coverage: SAP's German-speaking user group takes aim at cloud contracts, BTP and more; SAP user group DSAG rips S/4HANA innovation plans, maintenance increases; and SAP retools for generative AI, cuts 8,000 jobs, sets 2024, 2025 ambition.

The company said SAP RISE has potential to convert customers to the cloud and enable SAP to increase wallet share.

The company is also betting on its Joule copilot experience to drive demand in the future, but first it needs to convert customers to S/4HANA.

Key items in the quarter include:

  • Software license revenue fell 26% in the quarter.
  • Service revenue of €1.08 billion was flat from a year ago.
  • SAP expects to exit 2024 with a headcount total on par with 2023. The company eliminated 8,000 positions but plans to fill new ones focused on future business needs via new hiring and internal reskilling.
  • SAP’s restructuring depends on the uptake of voluntary leave programs, but the company doesn’t have visibility into its German workforce.
  • SAP reiterated its previous 2024 outlook of €17.0 billion to €17.3 billion cloud revenue at constant currencies, up 24% to 27%. Full year cloud and software revenue will be €29 billion to €29.5 billion at constant currencies, up 8% to 10% at constant currencies.
  • The company is expecting a Net Promoter Score of 9 to 13 for 2024. Scores above 50 are considered excellent, with scores above 80 considered world class.
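For context on those figures, Net Promoter Score is the percentage of promoters (ratings of 9-10 on a 0-10 scale) minus the percentage of detractors (ratings of 0-6), so it can range from -100 to 100. A minimal sketch of the calculation, using made-up survey responses rather than any actual SAP data:

```python
# Hypothetical NPS calculation; the sample ratings below are invented for illustration.
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) - % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

sample = [10, 9, 8, 7, 7, 6, 5, 9, 10, 3]  # ten hypothetical survey responses
print(net_promoter_score(sample))  # 4 promoters, 3 detractors -> 10
```

A score in the 9-to-13 range SAP is guiding to means promoters only barely outnumber detractors.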

Klein on RISE progress

On a conference call with analysts, Klein said there's a lot of runway with RISE with SAP and said companies will need to upgrade to transform. SAP is also offering incentives to migrate to S/4HANA across its portfolio. He said:

"Our installed base is large with over EUR11 billion remaining support revenue to be converted to the cloud. Typically, by a factor of around two to two. On top, the EUR700 billion Cloud ERP market offers significant cross selling opportunities, and I have no doubt that SAP's integrated best of suite capabilities will win in the core business of our customers. As part of RISE and via the clean core journey, SAP and our ecosystem will help our customers to remove the ERP custom code and instead develop integrated ERP extensions on BTP. This gives us an immense additional revenue potential considering that customers in the on premise world spend up to EUR7 on custom code for every euro they invest in ERP software."

He also said GROW with SAP has potential for smaller enterprises.

"As SAP's greenfield cloud ERP offering for net new customers or new business units of large enterprises, GROW delivers go lives in weeks for every business model in every industry in every country. With our ERP solution, SME customers can grow and scale their business without migrating to a new ERP. Ultimately, RISE and GROW offer customers similar advantages, innovation, modularity, scalability, and integration."

SAP Business AI is another growth driver for the company and Klein said the plan is to infuse Joule and Business AI across the portfolio. "Joule will be our new user experience via natural language, our one front end. We have based our Joule roadmap on an analysis of the most frequent business and analytical transactions of our end users. This way, we make sure that the most heavily used transactions will be fully AI enabled by the end of this year," said Klein.

He added that SAP is embedding genAI in its cloud portfolio and has released more than 30 new AI scenarios across the cloud portfolio and has more than 100 in the pipeline. 

Klein was asked about SAP customers migrating to S/4HANA. The CEO said SAP is taking a modular approach to megadeals to deliver faster time to value, noting that custom code has hampered migrations.

"When customers decide to move to RISE, they're not just doing a move of their current environments and replicating the same capability. In fact, far from it. They're trying to transform, operationally process all of their data, all of their capability to serve now and into the future," said Klein. "They're setting their business up. And so when they look at these, which is why, to answer your question, the larger deals are because they look at a multi-year roadmap of capability transitioning from an older state, including non-SAP."

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"SAP has its work cut out this financial year. It needs to get going on its new go-to-market setup for S/4 HANA upgrades, aka the Thomas Saueressig organization, shuffle 10%+ of its employee base (8,000 as part of the reshuffle and then attrition), make S/4 HANA more interesting, show the value of AI, and grow. These are all necessary steps for Christian Klein and team – but a lot of things to master at the same time. All eyes will be on the 2027 ECC upgrade deadline, which at this point is no longer plausible for large-scale SAP customers. Will these core SAP customers (many in Germany) move because of the AI promise anyway – or will they wait until 2029 or even 2030, which would be the new deadline? Time will tell."



How Verizon buckets its AI, genAI use cases

Verizon CEO Hans Vestberg is splitting the telecom giant's AI use cases into three buckets: optimizing processes, enhancing product experiences and driving revenue growth.

Vestberg's thinking on AI, including generative AI, highlights how CEOs are maturing in their generative AI approaches. First, there was a wave of demand for shiny new objects. Then there was the wave of efficiency for AI use cases. And revenue growth is percolating as a goal but is far less developed so far.

Related: Middle managers and genAI | Why you'll need a chief AI officer | Enterprise generative AI use cases, applications about to surge | CEOs aim genAI at efficiency, automation, says Fortune/Deloitte survey | 77% of CxOs see competitive advantage from AI, says survey | Google Cloud Next: The role of genAI agents, enterprise use cases

Here's what Vestberg said on Verizon's first quarter earnings call:

"Our AI strategy focuses on three priorities. First, optimizing internal processes and operations through machine learning, such as creating efficiencies in fuel consumption. AI is already centered to our cost transformation program and will become even more important over time. Secondly, enhancing product experiences with AI capabilities like the personalized plan recommendation on myPlan, which is producing good early results. And thirdly, establishing an AI-based revenue stream by commercializing our network's unique low latency, high bandwidth, and robust mobile edge compute capabilities. Generative AI workloads represent a great long-term opportunity for us. As we expand our network and increase our performance advantage, we're also making Verizon a more efficient organization."

If you unpack those comments, Verizon is seeing tangible results across the spectrum. "We already had several generative AI projects going live," he said.

  • Vestberg said Verizon has outsourced much of its customer service infrastructure without interruption, but also sees opportunities for AI to improve service.
  • On the experience side, Vestberg said Verizon's myPlan effort, which offers consumers more customization options, is personalized with the help of AI.
  • For efficiency results, AI has been in the mix for driving network performance but is used now for capacity deployment and power consumption. "We are using AI and generative AI already now commercially. So this is not the playing ground for us. We just see more opportunities," said Vestberg.

Naturally, Verizon also sees edge networks as well as private networks being a big driver of AI workloads. Vestberg said:

"On the flip side, of course we also see revenues. Our network was built for AI. That was my thought when I built Verizon Intelligent Edge Network five years ago or six years ago, that we're going to have compute and storage at the Edge. AI is sort of built for that with the low latency we have on the 5G network. And as we are deploying our 5G right now, with the mobile edge compute and AI, this is a great long-term opportunity for us using AI."
