Results

SAP ups 2024 outlook as Q3 better than expected

SAP raised its cloud and software outlook for fiscal 2024 as the company's backlog continued to surge.

The company projected 2024 cloud and software revenue of €29.5 billion to €29.8 billion, up from the €29 billion to €29.5 billion previously projected. The company also said its free cash flow will be €3.5 billion to €4 billion.

SAP held its 2024 cloud revenue projection steady at €17.0 billion to €17.3 billion.

In the third quarter, SAP reported earnings of €1.44 billion, or €1.25 a share, on revenue of €8.47 billion, up 9% from a year ago. Cloud revenue was €4.35 billion, up 25% from a year ago. Cloud ERP revenue in the third quarter was €3.64 billion, up 34% from a year ago.

Wall Street was expecting SAP to report third quarter earnings of €1.21 a share on revenue of €8.45 billion.

Christian Klein, CEO of SAP, said the third quarter showed strength for cloud ERP and "a significant part of our cloud deals in Q3 included AI use cases."

Speaking on SAP's earnings conference call, Klein talked up Joule and said it has the best chance to be a premier AI agent. 

"While many in the software industry talk about AI agents these days, I can assure you, Joule will be the champion of them all. So far, we have added over 500 skills to Joule and we are well on track to cover 80% of the most frequent business and analytical transactions by the end of this year. And in Q3 alone, several hundred customers licensed Joule."

Klein said that Joule's power will be the ability to perform tasks across finance, HR, sales, supply chain and other functions. "Joule will soon be able to orchestrate several AI agents to carry out complex processes end-to-end," he said. 

SAP CFO Dominik Asam said the company is seeing efficiency gains from its restructuring in 2024.

By the numbers:

  • SAP's cloud backlog was up 25% in the third quarter compared to a year ago and the acquisition of WalkMe contributed 1% to that growth rate.
  • Software licenses revenue in the third quarter fell 15% from a year ago.
  • Restructuring expenses for the first nine months of 2024 were €2.8 billion.
  • By region, SAP said it saw cloud revenue strength in Asia Pacific Japan and EMEA. Americas growth was "robust."

Key points from SAP's earnings conference call include:

  • Klein said about 30% of SAP's cloud orders included AI use cases. 
  • SAP cited numerous RISE with SAP wins, including grocers Schwarz Group and Sainsbury's, as well as Nvidia, which implemented RISE with SAP in six months, and Mercado Libre. 
  • "Our investment in Business AI are also starting to show positive results, creating new opportunities and deepening customer engagement. Now with the added capabilities of WalkMe, we are able to further improve work flow execution and user experience," said Asam.
  • Klein said that SAP's move to centralize its cloud operations is paying dividends. "We are rolling out the cloud version of HANA, much more scale, better TCO, better resiliency. And of course, we're also working with the hyperscalers. I mean, we have with RISE and on the cloud infrastructure, we have really some really strong measures we are driving to further optimize not only performance, but again, also the scalability of HANA Cloud running on the hyperscaler infrastructure," he said. 

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"SAP had a good quarter, as expected. AI is the break Christian Klein and team as AI needs to live in the cloud, and that forces before skeptical CxOs to bite the bullet and move to S/4 HANA. SAP keeps struggling with the value for SAP Grow and Rise – as only 1/3 of cloud revenue comes from these initiatives, but this does not matter anymore as AI is the pull. With favorable announcements from the recent SAP TechED conferencebto help customers with the ABAP code assets as well as the announcement of an SAP DataLake, SAP is helping its existing customers more and better than before. All of this leads to a key milestone for the vendor: Cloud revenue for the first time is over 50% of SAP revenue. What is remarkable is that SAP is more profitable. Traditionally, the (now shrinking) perpetual license revenue is more profitable than cloud revenue (where SaaS vendors pay IaaS vendors). But with SAP charging more customers directly for their IaaS costs (and then paying the AWS, Google and Microsoft etc.), it is making margin from the pass through."

 


Honeywell, Google Cloud team up on industrial IoT, genAI use cases

Honeywell said it will integrate Google Cloud's AI into Honeywell Forge, an Internet of things platform designed for industrial use cases.

The two companies said they will create joint applications for industrial use cases in 2025 that combine Google Cloud's Gemini on Vertex AI with Honeywell's applications.

For Google Cloud, the Honeywell partnership highlights how it is leveraging AI, use cases and a focus on verticals to enter accounts, often via generative AI. Thomas Kurian, CEO of Google Cloud, has focused Vertex AI on industry use cases, automation and process optimization. Kurian has also touted AI agents.

Specifically, Google Cloud and Honeywell will build industrial AI agents using Google Cloud Vertex AI Search and Gemini multimodal large language models (see the sketch after this list). Use cases include:

  • Industrial AI agents focused on automating project design cycles and preventative maintenance.
  • Cybersecurity applications that couple Google Threat Intelligence with Honeywell's Global Analysis, Research and Defense Threat Intelligence and Secure Media Exchange.
  • Edge device AI that will put Google's Gemini Nano model on Honeywell edge devices across multiple industries. The two companies said they plan to offer a series of edge devices.
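To make the Vertex AI piece concrete, here is a minimal, hypothetical sketch of sending an industrial telemetry prompt to Gemini on Vertex AI using Google's Python SDK. The project ID, region, model name and sensor values are placeholders; this is not Honeywell's or Google Cloud's actual integration code.

```python
# Minimal sketch: querying Gemini on Vertex AI for a preventative-maintenance
# summary. Project, region, model name and telemetry are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")  # hypothetical project

model = GenerativeModel("gemini-1.5-pro")  # model name is an assumption

sensor_readings = {
    "compressor_vibration_mm_s": 7.2,   # made-up telemetry values
    "bearing_temp_c": 96,
    "hours_since_service": 4100,
}

prompt = (
    "You are an industrial maintenance assistant. Given these readings, "
    f"flag likely failure modes and suggest next maintenance actions:\n{sensor_readings}"
)

response = model.generate_content(prompt)
print(response.text)
```

In practice the agent pattern Honeywell describes would ground a prompt like this in Forge data via retrieval rather than inlining raw readings, but the call pattern to Gemini on Vertex AI is the same.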

Honeywell has aligned its business around three big trends: automation, the future of aviation and the energy transition. All three areas are ripe for AI optimization.


Microsoft launches AI agents for Dynamics 365, customization via Copilot Studio

Microsoft is adding 10 autonomous agents to Dynamics 365 and moving the ability to create custom agents in Copilot Studio into public preview.

With the move, Microsoft is extending its Copilot stack with AI agents that can complete tasks autonomously. Microsoft sees agents as the new apps for the generative AI ecosystem. Copilots are how you'll interact with the agents, which will work on behalf of an individual, team or function to execute processes.

Agentic AI has been a recurring theme of late as enterprise software vendors see agents as a way to automate work and processes. The agentic AI theme reached a crescendo with Salesforce's Agentforce debut, but had been picking up for months before that.

Microsoft argued that agents will easily outnumber employees and be effective on its platform because they can tap into work data in the Microsoft 365 Graph and add context via its platform and systems of record.

Microsoft's plan for AI agents is to deploy them in common processes, starting with enterprise resource planning. The bet is that Microsoft can take its Copilot and agent infrastructure and log time sheets, close books, prep ledgers and file expense reports autonomously. Over time, Microsoft argues, Copilots will effectively become the user interface for enterprise software and its various silos. AI agents will automate and execute business processes and be triggered by Copilots. There will be as many agents as there are business processes.

Early adopter customers included Clifford Chance, McKinsey & Company and Pets at Home. Microsoft said that it will release pricing details as its AI agents near general availability. Software vendors have been examining new pricing models for agents given that per-seat plans don't work well.

The new autonomous agents in Dynamics 365 cover sales, service, finance and supply chain categories to start. Microsoft said the plan is to create more agents throughout the year. "Our goal is to drive more value for our customers across their biggest areas of pain in their processes," said Stephanie Dart, Senior Director of Product Marketing for Microsoft Dynamics 365.

Microsoft's first batch of agents include:

  • Sales Qualification Agent, which will research leads, prioritize opportunities and guide outreach with personalized emails and responses.
  • Supplier Communications Agent, which tracks supplier performance, detects delays and responds to free up procurement teams.

  • Customer Intent and Customer Knowledge Management Agents, which will help call centers with high-volume requests and talent shortages to resolve problems autonomously.
  • Sales Order Agent for Dynamics 365 Business Central will automate the order intake process from entry to confirmation.
  • Financial Reconciliation Agent for Copilot for Finance aims to reduce the time spent closing the books.
  • Account Reconciliation Agent for Dynamics 365 Finance is designed for accounts and controllers and automates the matching and clearing of transactions.
  • Time and Expense Agent for Dynamics 365 Project Operations manages time entry, expense tracking and approval workflows autonomously.
  • Case Management Agent for Dynamics 365 Customer Service automates the creation of a case, resolution, follow up and closure.
  • Scheduling Operations Agent for Dynamics 365 Field Service gives dispatchers the ability to optimize schedules for technicians and accounts for changing conditions throughout the workday.

These out-of-the-box agents will be complemented by the custom ones created in Copilot Studio with guardrails, best practices and controls in place. Richard Riley, General Manager of Power Platform Marketing at Microsoft, said that Copilot Studio has the same compliance and security capabilities as the company's Power Platform, specifically Power Virtual Agents.

In addition, Microsoft's AI agents are designed with human-in-the-loop processes in mind.
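As an illustration of that design principle, the following is a generic sketch of a human-in-the-loop gate around an agent-proposed action. It is not Copilot Studio or Dynamics 365 code; the risk policy and action names are hypothetical.

```python
# Generic sketch of a human-in-the-loop gate around an agent-proposed action.
# Illustrates the pattern only; this is not Microsoft's implementation.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str      # e.g. "Clear 42 matched bank transactions"
    risk: str             # "low" | "high" -- classification is up to the agent builder

AUTO_APPROVE_RISK = {"low"}  # policy threshold is an assumption

def execute(action: ProposedAction) -> None:
    print(f"Executing: {action.description}")

def run_with_review(action: ProposedAction) -> None:
    """Auto-execute low-risk actions; route everything else to a person."""
    if action.risk in AUTO_APPROVE_RISK:
        execute(action)
        return
    answer = input(f"Approve '{action.description}'? [y/N] ")
    if answer.strip().lower() == "y":
        execute(action)
    else:
        print("Action rejected; logged for audit.")

if __name__ == "__main__":
    run_with_review(ProposedAction("Post period-end journal entry", risk="high"))
```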

Constellation Research's take

Martin Schneider, analyst at Constellation Research, said:

"The agent building tools in Copilot Studio and the 10 out-of-the-box agents are a great way for technical and non-technical users to begin exploring the use of autonomous agents inside their Dynamics environments. But I think the really interesting bit inside these announcements is the fact that Microsoft has added agentic AI to its ERP offerings.

This is a smart move for two reasons. One, the data inside ERP systems is typically more complete and of higher quality than in CRM systems (where we see the bulk of AI agents being put forth). That means the agents have more reliable and accurate insights on which to act. Second, the use cases for agentic AI inside ERP provide more immediate and measurable value. Many of the tasks agents will be performing are common, repeatable and specific - so taking them off a human’s plate drives immediate productivity. But also, by doing these tasks incredibly quickly, and at scale, larger companies can invoice and bill clients faster, close books faster, take payments, etc. This creates an immediate benevolent cycle of shortening sales and revenue collection and recognition cycles, which has the potential to increase cash flow and bottom-line metrics in significant ways."


IBM rolls out Granite 3 models, makes default for Consulting Advantage

IBM launched its Granite 3.0 8B and 2B models under the Apache 2.0 license, new models designed for CPU-based deployments and edge computing, and the next generation of its watsonx Code Assistant. In addition, IBM said Granite models will be the default for Consulting Advantage, an AI delivery platform used by the company's consultants.

Big Blue announced the latest Granite large language models (LLMs) at its TechXchange event. IBM said the Granite family of models is under the fully permissive Apache 2.0 license for enterprise use cases.

"IBM keeps advancing its Granite model family, alleviating the concerns that it was simply being tossed over to open source in move that CxOs have seen too often. IBM has the knowhow and data to maintain these models. The future use of Granite models in next-gen apps looks brighter than ever," said Constellation Research analyst Holger Mueller.

IBM's Granite 3.0 family includes the following:

  • Granite 3.0 8B Instruct, Granite 3.0 2B Instruct, Granite 3.0 8B Base and Granite 3.0 2B Base for general purpose and language use cases.
  • Granite Guardian 3.0 8B and Granite Guardian 3.0 2B, focused on guardrails and safety.
  • Granite 3.0 3B-A800M Instruct, Granite 3.0 1B-A400M Instruct, Granite 3.0 3B-A800M Base and Granite 3.0 1B-A400M Base as mixture-of-experts models.

The Granite 8B and 2B models are designed to be workhorses that deliver strong performance and cost efficiency for RAG, summarization and classification. IBM expects these models to be adopted and then fine-tuned by businesses looking to avoid the costs associated with larger models. IBM discloses the data sets used to train Granite and provides IP indemnity on watsonx.ai.
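For teams that want to try the workhorse models, a minimal sketch of running a Granite 3.0 Instruct model with Hugging Face Transformers might look like the following. The model ID follows IBM's published naming on Hugging Face but should be verified, and the hardware settings are assumptions.

```python
# Minimal sketch: running a Granite 3.0 Instruct model locally with Hugging Face
# Transformers for a summarization prompt. Model ID and dtype/device settings
# are assumptions to verify against IBM's Hugging Face listings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-8b-instruct"  # assumed Hugging Face ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

text = (
    "Cloud revenue grew 25% year over year while license revenue declined, "
    "and restructuring costs weighed on margins."
)
messages = [{"role": "user", "content": f"Summarize in one sentence: {text}"}]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```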

IBM also released benchmarks for Granite 8B.

According to IBM, the Granite mixture-of-experts models (A800M) are designed for low-latency environments, edge use cases and CPU-based inference deployments.

As for the Granite Guardian 3.0 models, IBM said the family is designed to check user prompts and LLM responses for various risks including bias, toxicity and jailbreaking.

Going forward, IBM said it will extend the Granite models with AI agent capabilities for autonomy. Granite 8B already features agentic capabilities for workflows, and these capabilities will be rolled out in 2025 with prebuilt agents for specific use cases.


On-premises AI enterprise workloads? Infrastructure, budgets starting to align

On-premises enterprise AI workloads are getting more attention as technology giants bet that enterprise demand will take off in 2025 due to data privacy, competitive advantage and budgetary concerns.

The progression of these enterprise AI on-premises deployments remains to be seen, but the building blocks are now in place.

To be sure, the generative AI buildout so far has been focused on hyperscale cloud providers and companies building large language models (LLMs). These builders, many of them valued at more than a trillion dollars, are paying another trillion-dollar giant in Nvidia. That GPU reality is a nice gig if you can get it, but HPE, Dell Technologies and even the Open Compute Project (OCP) are thinking ahead toward on-prem enterprise AI.

During HPE's AI day, CEO Antonio Neri outlined the company's market segments including hyperscalers and model builders. "The hyperscaler and model builders are training large language AI models on their own infrastructure with the most complex bespoke systems. Service providers are providing the infrastructure for AI model training or fine-tuning to customers so they can place a premium on ease and time to deployment," said Neri.

Hyperscalers and model builders are a small subset of customers, but can have more than 1 million GPUs ready, added Neri. The third segment is sovereign AI clouds to support government and private AI initiatives within distinct borders. Think of these efforts as countrywide on-prem deployments.


The enterprise on-premises AI buildout is just starting, said Neri, with enterprises moving from "experimentation to adoption and ramping quickly." HPE expects the enterprise addressable market to grow at a 90% compound annual growth rate into a $42 billion opportunity over the next three years.

Neri said:

"Enterprises must maintain data governance, compliance, security, making private cloud an essential component of the hybrid IT mix. The enterprise customer AI needs are very different with a focus on driving business productivity and time to value. Enterprises put a premium on simplicity of the experience and ease of adoption. Very few enterprises will have their own large language AI models. A small number might build language AI models, but typically pick a large language model off the shelf that fits the needs and fine-tune these AI models using their unique data."

Neri added that these enterprise AI workloads are occurring on premises or in colocation facilities. HPE is targeting that market with an integrated private cloud system with Nvidia and now AMD.

HPE's Fidelma Russo, GM of Hybrid Cloud and CTO, said enterprises will look to buy AI systems that are essentially "an instance on-prem made up of carefully curated servers, networking and storage." She highlighted how HPE has brought LLMs on-premises for better accuracy and training on specific data.

These AI systems will have to look more like hyperconverged systems that are plug and play because enterprises won't have the bandwidth to run their own infrastructure and don't want to pay cloud providers so much. These systems are also likely to be liquid cooled.

Neil MacDonald, EVP and GM of HPE's server unit, outlined the enterprise on-prem AI challenges:

  • The technology stack is alien and doesn't resemble classic enterprise cloud deployments.
  • There's a learning curve on top of the infrastructure and software stack to master.
  • The connection from generative AI model to enterprise data requires business context and strategy. Enterprises will also struggle to get to all of that data.

Dell Technologies' recent launches of AI Factories with Nvidia and AMD highlight how enterprise vendors are looking to provide future-proof racks that can evolve with next-generation GPUs, networking and storage. These racks obviously appeal to hyperscalers and model builders, but they play a bigger role by giving enterprises faith that they aren't on a never-ending upgrade cycle.


To that end, the Open Compute Project (OCP) added designs from Nvidia and various vendors to standardize AI clusters and the data centers that host them. The general idea is that these designs will cascade down to enterprises looking toward on-premises options.

George Tchaparian, CEO at OCP, said the goal of creating a standardized "multi-vendor open AI cluster supply chain" is that it "reduces the risk and costs for other market segments to follow."

Rest assured that the cloud giants will be talking about on-premises-ish deployments of their clouds. At the Google Public Sector Summit, the company spent time talking to agency leaders about being the "best on-premises cloud" for workloads that are air-gapped, separated from networks and can still run models. Oracle’s partnership with all the big cloud providers is fueled in part by being a bridge to workloads that can’t go to the public cloud.

Constellation Research analyst Holger Mueller said that he is a fan of on-premises AI deployments, with a twist: these deployments should be built on a cloud stack. He said:

"CxOs need to keep in mind that on-premises AI is pitched by vendors that have failed at providing a public cloud option - the most prominent being HPE and Dell. As there is merit for on premises - for speed, privacy and compliance - the bottleneck remains NVidia GPUs. As these are better utilized in the cloud, the cloud providers have a chance to pay more than any enterprise. And this is just compute - we have not even talked about storage / data. CxOs need to be aware of moving data and workloads every year or so - which also means extra cost, downtime and risk - something enterprises cannot afford. In short - the future of AI is in the cloud." 

The cynic in me would dismiss these on-premises AI workload mentions and think everything would go to the cloud. But there are two realities to consider that make me more upbeat about on-prem AI:

  1. Infrastructure at the data center and edge will have to move closer to the data.
  2. Accounting.

The first item is relatively obvious, but the accounting one is more important. On a Constellation Research client call about the third-quarter AI budget survey, there was a good bit of talk about the stress on enterprise operating expenses.


Simply put, the last two years of generative AI pilots have taken budget from other projects that can't necessarily be put off much longer. Given the amount of compute, storage and cloud services required for generative AI science projects, enterprises are longing for the old capital expenditure approach.

If an enterprise purchases AI infrastructure it can depreciate those assets, smooth out expenses and create more predictable costs going forward.
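A toy calculation shows why that matters. The figures below are invented purely to illustrate how straight-line depreciation smooths spending compared with lumpy cloud bills; they are not based on any vendor's pricing.

```python
# Toy illustration of the capex-vs-opex point: a purchased AI cluster is
# depreciated straight-line, while equivalent cloud usage hits opex as it is
# incurred. All figures are made up for illustration.
PURCHASE_PRICE = 12_000_000      # hypothetical on-prem cluster cost ($)
USEFUL_LIFE_YEARS = 4            # assumed straight-line schedule
CLOUD_SPEND_BY_YEAR = [2_000_000, 4_500_000, 6_000_000, 3_500_000]  # lumpy opex

annual_depreciation = PURCHASE_PRICE / USEFUL_LIFE_YEARS

for year, cloud in enumerate(CLOUD_SPEND_BY_YEAR, start=1):
    print(f"Year {year}: on-prem expense ${annual_depreciation:,.0f} "
          f"vs. cloud expense ${cloud:,.0f}")
# On-prem shows the same $3,000,000 every year; cloud swings with usage,
# which is the predictability argument CFOs are making.
```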

The problem right now is that genAI is evolving so fast that a capital expenditure won't have a typical depreciation schedule. That's why these future-proof AI racks and integrated systems from HPE and Dell start to matter.

With AI building blocks becoming more standardized, enterprises will be able to have real operating expense vs. capital expense conversations. CFOs are arguing that on-prem AI is simply cheaper. To date, generative AI has meant that enterprises can't manage operating expenses well and budgets aren't sustainable. The bet here is that capital budgets will make more sense once the hyperscale giants standardize a bit.

Bottom line: AI workloads may wind up being even more hybrid than cloud computing.


 


Blackstone's data center portfolio swells to $70 billion amid big AI buildout bet

Blackstone, best known as a massive asset manager with real estate, private equity and infrastructure holdings, is doubling down on the AI-fueled data center buildout and the energy that'll be needed to power those workloads.

On Blackstone's third quarter earnings conference call, the company said its data center portfolio now has $70 billion in facilities and more than $100 billion in pipeline development.

Those gaudy numbers come courtesy of the $16 billion purchase of AirTrunk, the largest data center operator in Asia Pacific.

Before the AirTrunk purchase Blackstone's data center portfolio was $55 billion with $70 billion in prospective pipeline development.

Steve Schwarzman, CEO of Blackstone, said:

“Blackstone is the largest data center provider in the world with holdings across the U.S., Europe, India, and Japan. Last month, we announced another major expansion by agreeing to acquire AirTrunk. We were uniquely positioned to execute on this investment, given our expertise in this sector, the scale of our capital, the global integration of our teams, and our connectivity to the world's largest data center customers.

Our ability to serve these customers represents a powerful illustration of how Blackstone has become a trusted solutions provider on a massive global scale to many of the largest and most valuable companies in the world."

Blackstone is doing what it does best: following the money. As Nvidia CEO Jensen Huang says repeatedly, more than $1 trillion will be spent on building new data centers for AI workloads. Blackstone has also invested in QTS, CoreWeave and Digital Realty.

In addition, Blackstone is investing in power and utility companies that'll supply power to the data centers in its portfolio.

Blackstone's data center buildout took about three years. In the third quarter, Blackstone said its data center business "was again the single largest driver of appreciation in our infrastructure and real estate businesses."


 


Google Public Sector 2024: US Space Force General Saltzman on innovation, scale, leadership

General Chance Saltzman, Chief of Space Operations for the United States Space Force, outlined the agency's increasing challenges, but noted that innovation at scale and pace is possible with public-private partnerships.

Speaking at the Google Public Sector Summit 2024 in Washington DC, Saltzman hit on multiple themes that apply to the public and private sectors: leadership and innovation within a large organization. A full recap of the summit appears below.

The Space Force recently outlined its pillars for change including developing science and technology processes and making them operational at pace and scale. Saltzman's talk was a few days after Space Force tested the X-37B Orbital Test Vehicle (OTV-7).

Here's a look at the key points from Saltzman's talk:

Scale issues and tracking threats. Saltzman said that around 2008 he was concerned that the database used to track space objects would struggle at 10,000 objects. Today, that database is handling more than 40,000 objects. US Space Force is working with the private sector, including Google Public Sector, to scale and track space traffic.

"The number of satellites launched has dramatically changed since I arrived in 2008. The cost per kilo to orbit has gone from $30,000 to $1,500 and these are game changing shifts in the space domain," said Salzman.


The US Department of Defense works with multiple cloud vendors directly under its Joint Warfighting Cloud Capabilities contract vehicle.

Space security. Saltzman said GPS jamming and interference with satellite communications are also becoming security threats. Threats to space infrastructure can also hamper operations on Earth. "It used to be that all we had to do was maintain access to space and then exploit it for its advantages," said Saltzman. "Today to control the domain and protect it, we have to deny an adversary. This control aspect is what has shifted."

Saltzman said space is now contested and that requires more technological prowess. The domain used to be about providing efficient services for navigation and communication, but it has transformed into a contested one.

Innovation within government. Saltzman said, "the government does not innovate well." He added that budget and funding for projects are set up years in advance. "We write our requirements several years before the funding lines up. That creates problems with working capital and being more agile with our resources," he said.

Budget issues aside, innovation is also a mindset. The government is good at creating what Saltzman calls "new-old." In a nutshell, new-old refers to developing new versions of old capabilities. "We take the F-16 and we build the F-22. The system is designed that way. The industrial support is available and we have our concept of operations," he said. "It's all based on standard operating procedure and it is seen as low risk to enhance what we already have."

The catch is that space requires a system that's "new-new" and breaks from tradition. "You really need to break from your own patterns. No one in government wants to do risky things with taxpayer dollars," he said. "The system is designed to give you new-old."

Leadership and innovation. Saltzman said he is aiming to develop new kinds of leaders that can be innovative. "I ask questions like 'are we following tried and true processes?'" he said. "It's a leading question because then we're probably not right. If we start building requirements on things we know we're already off base. We're already going to limit the options of what's possible."

Thinking horizontally. Saltzman said his goal is to rethink operations more horizontally so the organization can deliver vertically and adapt to multiple use cases.

Saltzman's take rhymes with how companies are thinking through digital transformation, AI and cloud computing. Start small, notch wins and build capabilities. "You have to test assumptions and recognize that new-old is not innovation. It is not going to get us where we need to go," he said. "We're trying to put it in terms where it doesn't sound like change is inherently risky. We are already experiencing the risk."

Public-private partnership. Saltzman said the private sector is critical to public sector innovation, but it will be frustrating at times. "Curb your frustration working with the government. At least you get to go home to a different office," he quipped. "Nobody is more frustrated sometimes with the way we do our business. But periodically, I see value in the slow delivery process. We shouldn't be entrepreneurial with taxpayer dollars. Just recognize that together we can operationalize these good ideas."


Zuora goes private in $1.7 billion deal with Silver Lake, GIC

Zuora said it will be acquired by Silver Lake and GIC in a $1.7 billion deal that will take the company private.

Under the terms of the deal, Zuora shareholders will get $10 a share.

The company will continue to be led by founder and CEO Tien Tzuo, who said that going private will help Zuora build out its monetization suite.

According to Zuora, the deal is expected to close in the first quarter of 2025. Tzuo will roll over the majority of his existing ownership.

Zuora recently acquired Togai and said it would offer a platform for usage-based pricing as well as subscriptions. Generative AI has led to new models for businesses that previously relied solely on subscriptions.

Enterprise software is a hotbed for private equity as New Relic, Alteryx and Smartsheet have been among the companies going private.

 


Google Public Sector Summit: 9 takeaways you need to know

The Google Public Sector Summit featured a packed lineup of AI leaders, panels on use cases and real-world government challenges.

The gist of the conference is that government generative AI customers can leverage commercial Google Cloud yet still be walled off. Google Public Sector is an independent entity that leverages Google Cloud technology but takes it the last mile (with isolated instances in some cases). In an interview with analysts, Google Public Sector CEO Karen Dahut said the company's goal is to bring commercial cloud capabilities to the public sector for government use.

"When we came into this market, what we found was traditional gov clouds. They're walled off and lack parity. It lacks the compute scale and doesn't have resiliency. What if we made our commercial cloud available to government by a software defined community cloud with all of the guardrails built in? OMB came to that same conclusion independent from us."

Here's a look at all the takeaways from the conference, lessons and best practices that emerged:

If you invested in data infrastructure, architecture and governance, you're able to drive value from generative AI projects. Lakshmi Raman, Director of AI at the Central Intelligence Agency (CIA), said the agency was able to drive value quickly "due to investments made in AI, data and tooling over the last decade." "That investment enabled us to evaluate generative AI capabilities quickly," said Raman.

Improving data quality may be your most important genAI use case. Dr. Ted Kaouk, Chief Data & AI Officer and Director of the Division of Data at the CFTC, said his agency is focused on the quality of data ingestion, with an emphasis on anomaly detection and "developing prototypes to detect bad actors."

Look at your data as a product. Zach Whitman, Chief Data Scientist & Chief AI Officer at GSA, said he's been focused on using generative AI for "better data productization and groundwork that maximizes value in the future."

Ron Robinette, Deputy Secretary, Innovation and Technology & AIO at CA GovOps, seconded Whitman's take. "We have five proofs of concept in the state of California and we need our data better prepared to take advantage of that opportunity," he said.

Mark Munsell, Director of Data and Digital Innovation and founder of Moonshot Labs at the National Geospatial-Intelligence Agency, said the agency is improving its data by making sure everything possible is entered into a database, giving it structure that can later help improve model training. That structure can then be combined with computer vision.

"We have 100s of petabytes of data from sensors and traditionally humans would look at the data and find signals, but now we need computer vision and model to cover places we can't," said Munsell.

Invest in metadata. Whitman said part of that data productization effort is to invest in metadata. "Overinvest in metadata so you can make the data explainable to the AI systems," he said. "Sometimes that's hard work and it's hard to get investment, but it's worth it."

"Metadata is critical," said Gulam Shakir CTO at National Archives & Records Administration (NARA). "We are leveraging several pilots."

Generative AI is breaking down silos. Whitman noted that conversations about generative AI use cases are going well beyond technology. Use case conversations involve risk and safety, technology and the business. "We are seeing this cross-pollination of great ideas," said Whitman. "It's a game changer that breaks down silos."

AI at the edge and hybrid use cases. At the Google Public Sector Summit, the company spent a lot of time talking to agency leaders about being the "best on-premises cloud" for workloads that are air-gapped, separated from networks and can still run models.

There's a reason for AI systems designed for the field: The public sector--especially the military--often has spotty connectivity. During a panel, Jane Overslaugh Rathbun, CIO of the US Navy, said sailors are "disconnected continuously." She added that the Navy is looking for edge AI capabilities that can process sensor data from ships in contested theaters and get sailors the data to make decisions.

Young J. Bang, Principal Deputy Assistant Secretary of the Army for Acquisition, Logistics & Technology, noted that the Army is rarely connected at the edge. Bang said a hybrid approach to genAI will emerge where models are trained centrally, fine-tuned and sent to the edge.

Smaller models are seen as key. Mark James, Director of Infrastructure and Support Services at the Department of Homeland Security, said AI at the edge is going to require smaller models. "We're exploring smaller language models to support AI at the edge," said James. For the DHS, ports are a key edge location where smaller models can have an impact augmenting officers' day-to-day activities by scanning documents.

Talent. Brig. Gen. Heather W. Blackwell of the US Air Force and JFHQ-DODIN said generative AI is critical to making sure limited talent resources are used on high-value projects. "We need AI to find those things that my analysts can't see so we can use our limited analytics assets on things only humans can do," said Blackwell.

Maj. Gen. Anthony Genatempo, Program Executive Officer for Cyber and Networks (C3I&N) at the Air Force Life Cycle Management Center, said you also need the talent to ensure generative AI use cases work out. "I want to tackle one aspect of our business to see if AI can help us out. Right now, I want to cut my contracting timeline from 18 months to 14 days," said Genatempo. "There are aspects of the workforce who think AI is about getting rid of them. I'm not getting rid of one person. People who know how to use these tools will replace people who don't."

Generative AI is a cultural opportunity. Raman noted that "culture eats strategy for breakfast" so AI leaders need to make sure "the AI journey is aligned with organizational beliefs."

Culture was a theme echoed by General Chance Saltzman, Chief of Space Operations for US Space Force. He said government needs a different type of leader who knows how to innovate within government. Critical thinking will be critical.

Urs Hölzle, Google Fellow, said cultures need to evolve with an eye on longer term projects and a tolerance for failure. Takeaways from Hölzle on culture include:

  • Cultural change is key to enabling transformative innovation within organizations.
  • Embracing failure as part of the innovation process is crucial. Different projects should be categorized (e.g., core, experimental) to manage risk appropriately.
  • It's tempting to rely on legacy methods in moments of pressure, but true progress requires focusing on new solutions and resisting this tendency.
  • Structured prioritization helps ensure that resources are allocated effectively, avoiding the pitfall of focusing only on short-term wins.
  • Effective leaders foster a culture that embraces learning from failures while being clear about project expectations.

Amazon invests in X-Energy Reactor, fuels small modular nuclear reactor run

Amazon is investing in X-Energy Reactor Company's $500 million venture round as it becomes clear that AI factories will be increasingly tethered to nuclear reactors.

The company's investment lands a day after Google made a similar move. Nuclear power is seeing a renaissance due to the energy needs of AI workloads.

Investors in the X-energy round include Amazon's Climate Pledge Fund, Citadel founder and CEO Ken Griffin, affiliates of Ares Management Corporation, NGP and the University of Michigan.

X-energy aims to bring more than 5 gigawatts online in the US by 2039. If X-energy hits its target, it will have the largest commercial deployment of small modular reactors (SMRs). As part of the deal, Amazon committed to support an initial 320-megawatt project with Energy Northwest.

The money will be used to fund X-energy's reactor design and licensing and the first phase of its fuel fabrication facility. X-energy and Amazon will also collaborate to standardize deployments and financing models.

These SMR companies are in the early stages, but they are raking in funding. Many of the timelines for commercial deployments extend into 2030 or later.

X-energy's key offerings are its Xe-100 SMR design and TRISO-X fuel. Each reactor unit is engineered to provide 80 MW of electricity and is optimized for multi-unit plants ranging from 320 MW to 960 MW (four to 12 units). These SMRs can be shipped by road, which should enable easier scaling.

 
