SAP's big plan at Sapphire 2025: Make its Joule AI agent omnipresent

SAP unveiled the next evolution of its Joule AI agent, with a plan to make it omnipresent across the ERP giant's platform and even follow business users into third-party systems.

The vision for Joule is to create seamless AI assistance within SAP and extend it to other business applications. The proactive approach is powered by SAP's WalkMe, which takes context from business applications and UI behaviors to suggest actions, automations and agents to use.
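
To make that mechanism concrete, here is a toy sketch of a context-to-suggestion rule table in Python; the app names, screens and rules are hypothetical, not SAP's or WalkMe's actual interfaces.

    # Illustrative only: a toy rule table in the spirit of context-aware
    # suggestions. All names are hypothetical, not SAP/WalkMe APIs.
    from dataclasses import dataclass

    @dataclass
    class UIContext:
        app: str                   # the business application in focus
        screen: str                # the form the user is working in
        recent_actions: list[str]  # observed UI behavior

    # Map (app, screen) pairs to a suggested action, automation or agent.
    SUGGESTION_RULES = {
        ("ariba", "purchase_requisition"): "Launch the spend-analysis agent",
        ("successfactors", "leave_request"): "Auto-fill from the HR calendar",
    }

    def suggest(ctx: UIContext) -> str:
        return SUGGESTION_RULES.get((ctx.app, ctx.screen),
                                    "No suggestion for this screen")

    print(suggest(UIContext("ariba", "purchase_requisition", ["open_form"])))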

To extend to external data and systems, SAP said Joule will combine AI platforms and SAP Knowledge Graph data to solve business problems.

CEO Christian Klein argued that Joule will bring "agenticness" to SAP's platform. SAP Business Suite will include a number of pre-built agents in customer experience, supply chain and spend management. Most of those will be delivered in the fourth quarter. SAP added that it has partnered with Google Cloud to make agents interoperable. "With the expansion of Joule, our partnerships with leading AI pioneers, and advancements in SAP Business Data Cloud, we’re delivering on the promise of Business AI as we drive digital transformations that help customers thrive in an increasingly unpredictable world," said Klein.

Joule was the headliner, but there was a bevy of smaller announcements in various categories at Sapphire.

SAP is also betting that Joule can make SAP implementations easier and move customers to S/4HANA Cloud. The company launched Joule for Consultants and a set of AI tools to accelerate cloud transformation.

Customers will also see some SAP Business AI pricing changes. The base AI package will include Joule Base, which has navigational and informational capabilities, and standard embedded features. Usage in Base AI is unlimited.


SAP is adding per user per month plans with Joule Premium, which will include variations of its agent for functions such as supply chain, human capital management, customer experience, and developers. These plans require a set amount of AI units.

Transactional capabilities such as Datasphere, document grounding, SAP Document AI and others will be on a consumption-based model where AI units are consumed per request or record.
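
As a purely hypothetical illustration of how a consumption-based AI-unit model gets budgeted (SAP has not published per-request unit rates here; the figures below are placeholders):

    # Placeholder arithmetic for a consumption-based AI-unit budget.
    # The unit rates are invented for illustration, not SAP's pricing.
    UNITS_PER_DOC = 5      # assumed AI units per Document AI request
    UNITS_PER_RECORD = 1   # assumed AI units per grounded record

    docs_per_month = 10_000
    records_per_month = 50_000

    monthly_units = (docs_per_month * UNITS_PER_DOC
                     + records_per_month * UNITS_PER_RECORD)
    print(f"Estimated AI units/month: {monthly_units:,}")  # 100,000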

There are also tools to build custom Joule agents via Joule Studio in SAP Build. Enterprises will be able to build custom skills and AI agents for business needs. Joule Studio will be generally available in the second quarter.

What remains to be seen is whether SAP's omnipresent Joule approach becomes the new operating system for enterprises or simply a natural language interface. It's a question a lot of enterprise software vendors, who are used to cross-selling applications, are trying to answer.

Constellation Research's take

Constellation Research has a handful of analysts at SAP Sapphire making sense of the barrage of developments. Here's what Holger Mueller had to say:

"It's Joule, Joule, Joule at Sapphire and SAP is right to push on its AI agent as it holds the biggest potential for its customer to change business outcomes. The re-platform of SAP AI on SAP Business Data Cloud on top of Databricks is the architecture area to pay attention to. As everybody knows - AI is only as good as the underlying data. Assuming the data is right, it'll be critical to see what SAP will do on the innovation side for Finance, HCM, SCM, Purchasing and more.

After data, it is the APIs that determine the capability of an agent. SAP needs to show some innovation and further capabilities here. The good news is that customers are moving to the cloud - less because SAP has gotten the upgrade value proposition right - but because CxOs know they need to be in the cloud in order to leverage AI."


Dell Technologies expands AI factory efforts with Nvidia, AMD, Intel, ecosystem

Dell Technologies expanded its AI factory portfolio and its ecosystem, deepening partnerships with Nvidia, AMD and large language model companies via a host of new servers, networking gear and integrated systems.

The upshot: Dell is prepping AI factories for more than just cloud deployments in a bet that on-premises and air-gapped implementations will be just as important.

The announcements, delivered at Dell Technologies World, are the next phase of the company's strategy to bring AI factories to enterprises with infrastructure, an open ecosystem and services as it also sells to hyperscalers. Dell said it has more than 3,000 AI factory customers following a big push in 2024.

Varun Chhabra, Senior Vice President of Infrastructure Solutions Group (ISG), said Dell's approach to its AI factory strategy reflects the need to increase compute performance while lowering energy costs. "Our industry is facing a big challenge. GPU demand is skyrocketing because energy capacity is struggling to keep pace," said Chhabra. "What we hear from customers most often as they talk about retrofitting their data centers is how do they get the latest GPUs and get help with cooling and energy bottlenecks."


Here's the lineup:

  • Dell PowerEdge XE9785 and XE9785L, servers that feature AMD Instinct MI350 Series AI GPUs, 8-way AMD Infinity Fabric interconnects, 288GB of HBM3E memory per GPU and liquid cooling as an option. Dell says the systems deliver up to 35 times better performance than their MI300X-based predecessors. Dell also said it is supporting AMD's AI stack.
  • Dell AI Factory built on Nvidia's stack, which couples Dell hardware, Nvidia AI Enterprise and managed services.
  • PowerEdge servers purpose-built for model training and fine-tuning. Dell PowerEdge XE9780/85/80L/85L servers can feature Intel or AMD CPUs and 8-way Nvidia HGX B300, with more throughput and options for liquid or air cooling.
  • Dell PowerEdge XE7745 with RTX Pro 6000, available in July, featuring up to 8 Nvidia RTX Pro 6000 Blackwell Server Edition PCIe GPUs. The servers are also optimized for inferencing, acceleration and cluster networking.
  • Dell PowerEdge XE9712, which features Nvidia GB300 NVL72, and will have support for Nvidia Vera Rubin NVL144 and NVL576. This rack system is aimed at hyperscalers.


  • PowerEdge servers will run Google Gemini models as part of Google Distributed Cloud.
  • Dell AI Platform with Intel will include Gaudi 3 AI accelerators coupled with an open-source software stack.
  • Dell PowerCool Enclosed Rear Door Heat Exchanger with Dell Integrated Rack Controller. The company said the system can lower cooling energy by 60% and enable customers to deploy 16% more racks with the same power. For maintenance operations, Dell has hot-swappable fans, centralized monitoring and real-time insights.
  • A data platform designed to speed up throughput with its Project Lightning parallel file system. The system is designed to automate Iceberg table management, use LLMs within SQL and streamline data products and managed services. Dell AI Data Platform has a version that rides on Nvidia's models and software.
  • Dell AI Networking with low powered transceivers optimized for PowerEdge and PowerSwitch hardware.
  • Dell NativeEdge, which couples Nvidia GPUs with Dell's NativeEdge operating system for servers and endpoints. Dell has low-power AI accelerators on its NativeEdge gateways and endpoints. The company also includes NativeEdge Blueprints for Nvidia, GE Digital and Palo Alto Networks and discounted Nvidia AI Enterprise licenses.
  • Dell PowerSwitch SN5600, SN2201 and Nvidia Quantum-3 switches for Ethernet and Nvidia InfiniBand.
  • Partnerships with software vendors and LLM players for on-premises AI factories. Models from Cohere, Google Cloud, Meta and Mistral AI are available, as is Red Hat's stack.
  • Dell is also positioning its PCs as edge inferencing devices. To that end, the company launched new Dell Pro Max AI PCs, which feature neural processors from Qualcomm. The highest end model can inference a 70B parameter model.

HOT TAKE: Microsoft Keeps Pace in Multi-Agent AI Race

Microsoft is making a number of announcements around its Copilot Studio and Copilot Agents during its Build event in Seattle this week. Not to be outdone by ServiceNow's recent announcement of its AI agent orchestration tools, the highlights of this week's releases focus on enabling developers and lines of business to better unite around building, maintaining and optimizing AI for both productivity and cost effectiveness.

Additional Copilot Studio capabilities announced this week, centered on enhanced AI agent support, include:

  • Multi-agent orchestration - Rather than relying on a single agent to do everything, or managing disconnected agents in silos, organizations can now build multi-agent systems in Copilot Studio (preview), where agents delegate tasks to one another (a minimal sketch of this pattern follows the list). This includes agents built with the Microsoft 365 agent builder, Microsoft Azure AI Agents Service, and Microsoft Fabric.
  • Computer Use in Copilot Studio agents - Agents can now interact with desktop apps and websites the way a person would: clicking buttons, navigating menus, typing in fields, and adapting automatically as the interface changes. This opens the door to automating complex, UI-based tasks.
  • Bring your own model and model fine-tuning - Makers can access more than 1,900 models in Azure AI Foundry, including the latest from OpenAI (GPT-4.5), Llama, DeepSeek, and custom models, and fine-tune them using enterprise data to generate domain-specific, high-value responses.
  • Model Context Protocol - Makes it easier to connect Copilot Studio to your enterprise knowledge systems.
  • Microsoft Entra Agent ID - Automatically assigns an identity to agents created through Copilot Studio or Azure AI Foundry, giving security admins visibility and control in the same tool they use to manage organizational identity and access.
  • Agent feed hub - Will allow end users to oversee teams of agents in Power Apps, showing task status and flagging where an agent is stuck and needs help.
  • Microsoft Purview Information Protection - Will be extended to Copilot Studio agents that use Microsoft Dataverse, allowing organizations to automatically classify and protect sensitive data at scale.
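
Here is a minimal, vendor-neutral sketch of the delegation pattern flagged in the multi-agent orchestration item above; the classes and routing logic are illustrative assumptions, not Copilot Studio's actual programming model.

    # Toy multi-agent delegation: an orchestrator routes each task to
    # whichever registered agent claims the matching skill.
    from typing import Callable

    class Agent:
        def __init__(self, name: str, skills: set[str],
                     handler: Callable[[str], str]):
            self.name, self.skills, self.handler = name, skills, handler

    class Orchestrator:
        def __init__(self) -> None:
            self.agents: list[Agent] = []

        def register(self, agent: Agent) -> None:
            self.agents.append(agent)

        def delegate(self, skill: str, task: str) -> str:
            for agent in self.agents:
                if skill in agent.skills:
                    return f"{agent.name}: {agent.handler(task)}"
            raise LookupError(f"no agent registered for skill {skill!r}")

    hub = Orchestrator()
    hub.register(Agent("crm-agent", {"lookup"}, lambda t: f"fetched {t}"))
    hub.register(Agent("doc-agent", {"draft"}, lambda t: f"drafted {t}"))
    print(hub.delegate("lookup", "the Contoso account"))
    print(hub.delegate("draft", "a proposal for Contoso"))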

In addition, the company is providing some interesting new tools that can help developers and project leaders on the line-of-business side better account for the costs of consumption-based tools like AI agents. These updates include:

  • Enhancements to Billing and Usage - To support flexible deployment, CCS introduces pay-as-you-go (PAYG) group-level billing for agents in M365 Copilot Chat. This model ensures that organizations only pay for what they use, while maintaining full visibility and control over agent usage expenses by departments and user groups.
  • Message consumption report - The new Message consumption report supports agent management decisions by enabling AI admins to monitor billed messages, identify high-usage scenarios, and gain visibility into message consumption by agent, user, billing policy, and user-agent pair (a rough sketch of this roll-up follows the list).
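
The kind of roll-up such a report implies can be sketched as follows; the field names are assumptions, not Microsoft's actual report schema.

    # Toy roll-up of billed messages by (agent, user) pair.
    from collections import Counter

    billed_messages = [
        {"agent": "hr-agent", "user": "alice", "messages": 12},
        {"agent": "hr-agent", "user": "bob", "messages": 3},
        {"agent": "it-agent", "user": "alice", "messages": 7},
    ]

    usage = Counter()
    for row in billed_messages:
        usage[(row["agent"], row["user"])] += row["messages"]

    for (agent, user), count in usage.most_common():
        print(f"{agent:<10} {user:<8} {count} billed messages")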

Microsoft is also looking to improve the effectiveness of agents by expanding the types of data and content they can consume out of the box. This is supported by two new capabilities:

  • New agent publishing channels - Copilot Studio can now publish agents to Microsoft 365 Copilot and, coming soon, to SharePoint and WhatsApp. Microsoft is also adding new categories to ground and tune agents, including "Responses," "Moderation," "User Feedback," "User Input" and "Knowledge."
  • Knowledge controls - Copilot Studio now supports more input sources including OneDrive files, SharePoint lists, Teams and external sources such as ServiceNow, Zendesk and SAP.

The interoperability with ServiceNow and SAP is notable. Most of Microsoft's applications customers can be considered mid-market in size and scope, but Microsoft's AI ambitions are clearly in the enterprise, so a strong multi-agent approach that incorporates common enterprise applications is table stakes.

All of these announcements point to one increasingly obvious fact: the future of business will be "multi-agent," both in terms of multiple agents inside single applications working together and in terms of orchestrating agentic flows between multiple systems to automate even the most common tasks. This is less a race for dominance and more a race for sensible interoperability among legitimate enterprise AI providers like Microsoft, Oracle, ServiceNow and Salesforce.

For growth leaders, these advancements continue to offer opportunities to reevaluate the go-to-market tech stack and find areas to "re-balance" between human and digital labor, both from an overall cost and budget perspective and from the perspective of reducing complexity and friction in GTM motions. Take time to evaluate these new announcements (even as they come with increasing rapidity) and strategize how they fit into your overarching AI plans. For Microsoft customers, these tools are easy to consume and test. But regardless of how reliant you are on Microsoft technology, or whether you've chosen (or are looking to choose) another vendor as your "anchor" agentic AI provider, the truth is that in this multi-agent future we are going to have to understand, consume and integrate with multiple agentic AI platforms on a constant basis.


Microsoft wants to be your agentic AI developer stack

Microsoft wants developers to build multi-agent systems and is laying the groundwork across its tools and supporting protocols that will make it happen.

At its Build conference, Microsoft CEO Satya Nadella laid out the plan to help build what he called the open agentic web, where AI agents make decisions and carry out tasks.

Microsoft outlined broad support for Model Context Protocol (MCP), Google Cloud's Agent2Agent (A2A) protocol and a new project called NLWeb. NLWeb is an open project that is akin to HTML for the agentic web: it makes it possible for developers to create a conversational interface with the model of their choice and their own data. Microsoft noted that every NLWeb endpoint is also an MCP server.
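
Because every NLWeb endpoint doubles as an MCP server, a minimal MCP server is a useful mental model here. The sketch below assumes the quickstart-style surface of the official MCP Python SDK (the mcp package, its FastMCP helper and stdio transport); check the SDK docs for the current API.

    # Minimal MCP server sketch using the MCP Python SDK's FastMCP helper.
    # The tool body is a stub; the SDK surface is assumed from its quickstart.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-knowledge-server")

    @mcp.tool()
    def lookup_order(order_id: str) -> str:
        """Return the status of an order (stubbed for illustration)."""
        return f"Order {order_id}: shipped"

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio by default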

The company also rolled out pre-built agents, custom agent building blocks and multi-agent tools. These tools are available through Azure AI Foundry Agent Service, Azure AI Foundry and Microsoft Entra Agent ID, which is in preview. Microsoft also rolled out Microsoft 365 Copilot Tuning and multi-agent orchestration in Copilot Studio.

Microsoft's vision is that an agent built in Copilot Studio can pull CRM data, hand it off to an agent that builds a proposal in Word, and schedule meetings in Outlook. Microsoft obviously sees its own applications playing a big role in these agentic AI workflows, but also envisions them working across multiple third-party systems.

If Microsoft's agentic AI vision sounds familiar, that's because there are multiple players looking to be the conductor of the AI agent orchestra. Hyperscale cloud providers (Google Cloud, AWS, Azure) want to help you build, deploy and manage agents, as do SaaS platforms (Salesforce, ServiceNow and SAP), integration platforms (Boomi) and systems integrators.

However, there's no doubt that Microsoft has a developer army behind it.

Microsoft outlined the following moves on the agentic AI front:

  • Microsoft is launching centralized agent identity management via Entra Agent ID (a toy illustration of the idea follows this list). Security was also a focus for agent deployments across the portfolio. Copilot Studio will have security controls for every stage of agent creation and operation as well as privacy controls, safeguards and protections for sensitive data. According to Microsoft, Entra Agent ID will "tackle the AI agent sprawl problem by assigning a unique identifier to every agent in an environment."


  • Copilot Studio will get support for multi-agent systems that delegate tasks to one another. That capability will cover agents built with the Microsoft 365 agent builder, Azure AI Agents Service and Microsoft Fabric. Microsoft rolled out a series of toolkits and software development kits for agents.
  • MCP will be broadly supported across Microsoft's developer stack. Developers will be able to build agents for Teams using the A2A protocol. Agents will be able to exchange messages, data and credentials without intermediaries. Teams will also be able to recall interactions to give agents more context.
  • The company launched Microsoft 365 Copilot Tuning, a low-code way to train models and create agents, along with multi-agent orchestration and capabilities to build agents across Microsoft applications. Microsoft also outlined the Microsoft 365 Agents Toolkit for creating and customizing agents using the AI stack of one's choice.
  • Microsoft introduced the concept of Agentic DevOps, which means agents automate and optimize software development at each step. This approach will be layered in GitHub Copilot and Microsoft Azure.
  • MCP will be supported on Windows 11 so agents and applications can use tools. Microsoft said MCP on Windows 11 will be in early preview to gather feedback from developers. Microsoft said it is building a security architecture for MCP capabilities.
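
The idea behind centralized agent identity, a unique and auditable ID for every agent in an environment, reduces to something like this toy registry; it is illustrative only, not the Entra Agent ID API.

    # Toy agent identity registry: every agent gets a unique, auditable ID.
    import uuid
    from datetime import datetime, timezone

    class AgentRegistry:
        def __init__(self) -> None:
            self._agents: dict[str, dict] = {}

        def register(self, name: str, owner: str) -> str:
            agent_id = str(uuid.uuid4())
            self._agents[agent_id] = {
                "name": name,
                "owner": owner,
                "created": datetime.now(timezone.utc).isoformat(),
            }
            return agent_id

        def audit(self) -> list[tuple[str, str]]:
            # One place for security admins to see every agent.
            return [(aid, meta["name"]) for aid, meta in self._agents.items()]

    registry = AgentRegistry()
    registry.register("expense-agent", owner="finance-team")
    registry.register("triage-agent", owner="it-ops")
    print(registry.audit())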

Other nuggets worth noting from Microsoft's barrage of Build news include:

  • Azure AI Foundry added Grok 3 and Grok 3 mini models from xAI.
  • Azure AI Foundry now has more than 1,900 models.
  • Microsoft said developers can create AI agents with connections to Azure Databricks for real-time enterprise data processing.
  • Microsoft Dynamics 365 has new MCP servers in preview to make Dynamics 365 data and actions accessible for AI agents.
  • Windows AI Foundry launched to provide a unified platform for local AI development. Developers can bring their own models and deploy them across various platforms.
  • Post-quantum cryptography algorithms are now in preview.

Constellation Research analyst Holger Mueller said:

"Microsoft pushes boardly across it's offerings. It is likely the less developer-centric Build conference on record, but Microsoft has to infuse AI across its platforms, of course in Azure, its data layer with fabric, and don't forget Windows and Edge. This Build conference is dual push deeper into the data foundation of AI on the one side and into the AI delivery platforms. Of note is also that Build is not only all about AI, but quantum keys arrive on the 800 million Windows machines."



AMD sells ZT Systems manufacturing business to Sanmina for $3 billion

AMD said it has sold ZT Systems' manufacturing business to Sanmina, a contract equipment manufacturer, in a deal valued at $3 billion in cash and stock.

The chipmaker acquired ZT Systems last year for $4.9 billion. When AMD acquired ZT Systems it said the deal was about acquiring talent and expertise in designing next-generation data centers and that it would sell the manufacturing business.

AMD retains ZT Systems' AI infrastructure design and customer enablement business, which will focus on cloud customers. As part of the deal, Sanmina will become a preferred new product introduction manufacturing partner for AMD's cloud rack and cluster-scale AI hardware.

Jure Sola, CEO of Sanmina, said that ZT Systems' liquid cooling hardware and AI infrastructure experience will complement its portfolio.

The deal is expected to close by the end of 2025.


Nvidia launches NVLink Fusion to connect any CPU with its GPU, AI stack

Nvidia will let you bring your own CPU and custom processors to connect to its GPUs and AI infrastructure via NVLink Fusion. The upshot is that Nvidia is happy to open up its AI factories so it can grow its ecosystem.

Speaking at his Computex 2025 keynote, Nvidia CEO Jensen Huang unveiled NVLink Fusion, which allows cloud providers, and presumably sovereign AI efforts and ultimately private infrastructure, to use any ASIC or CPU to scale out Nvidia GPUs. For cloud providers like AWS, Google Cloud and Microsoft, NVLink Fusion provides the option to couple their custom CPUs with Nvidia.

Initially, MediaTek, Marvell, Alchip Technologies, Astera Labs, Synopsys and Cadence are the early adopters of NVLink Fusion for their custom silicon. Qualcomm also announced its data center efforts and moves to integrate its CPUs into Nvidia infrastructure. Fujitsu and Qualcomm CPUs can also be paired with Nvidia GPUs via NVLink Fusion.

The move makes sense on many fronts, and Huang summarized the strategic importance of NVLink succinctly:

"Nothing gives me more joy than when you buy everything from Nvidia. I just want you guys to know that, but it gives me tremendous joy when you buy anything from Nvidia."

Bottom line: Nvidia will offer its fully integrated AI stack, but will also disaggregate it since in the long run the GPUs, platform and ecosystem plays are more important.


Constellation Research analyst Holger Mueller said:

"Nvidia once again acknowledges the importance of the network for AI. The speed and efficiency how data is served to the precious and inexpensive CPUs is what matters. With allowing  partners to work with the NVLink network Nvidia prioritizes the network over its own inhouse designs - which is true to its DNA as component vendor."

Two major players missing from the NVLink Fusion announcement are Broadcom and AMD. Qualcomm CEO Cristiano Amon said partnering with Nvidia advances its efforts into the data center.

NVLink Fusion can connect custom CPUs and ASICs to GPUs via Nvidia ConnectX-8 SuperNICs, Nvidia Spectrum-X Ethernet and Nvidia Quantum-X800 InfiniBand switches, with co-packaged optics in the near future.



Lessons from early AI agent efforts so far

AI agent projects are just starting and many are barely past proof of concept. What's clear is that your boardroom is about to ask for the agentic AI plan, if it hasn't already.

With that backdrop, we dropped in on Boomi World 2025 to get a feel for what enterprises are doing in agentic AI and emerging best practices. Here’s a look at some early lessons to ponder.

It’s early. If Boomi World 2024 was about plotting a course toward agentic AI platforms and enabling enterprises, Boomi World 2025 was about putting the tools in the hands of what Boomi CEO Steve Lucas called “digital alchemists.” Boomi put its Agentstudio in the hands of its customers to build things with a large well of free built-in messages and consumption tiers. It’s obvious that 2026 will be about agents that graduated to production. No matter what the vendor—ServiceNow, Google Cloud, Microsoft, Amazon Web Services, Salesforce and dozens of others—agentic AI projects are just being hatched.

“I don't know if we're a best practices state at this point,” said Boomi Innovation's Michael Bachman in an analyst session. “I think we're going to be at better practices as we iterate on that, but I don't see how we can do it without something like a control tower.”



Experiment, but don't rush in until you've sorted use cases and outcomes. Ken Maglio, Principal Architect at World Wide Technology, works in a division of the large IT services firm, which is spending heavily on centralized AI. As a result, Maglio's 200-person unit had to wait a bit before investing in its own AI use cases. "We have our own needs but we are not big enough," explained Maglio.

Maglio needed the company's go-ahead to invest in AI for his unit, and pilots started in the third quarter of 2024. That time frame worked out, since Maglio had time to avoid the lock-in mistakes made by other enterprises. As a result, Maglio looked to build the unit's agentic AI orchestration on Boomi with data and LLMs via Resolve.ai. "The initial questions were how can I use AI to improve experience, drive internal costs down and figure out what agentic looks like a year from now," said Maglio. "We're on our own journey. Basically, they gave us the blessing to go forth and forge our own path and that gave us the time to ask what agentic will look like."

AI agent projects are more about continuous improvement and options than a destination. Luke Hagstrand, Head of Enterprise AI at Boomi, walked through how Boomi was working through implementing AI agents. The biggest takeaway is that it's early in agentic deployments and that they are really a continuum from earlier genAI projects. Boomi began with its ChatB rollout a year ago, focused on sales and marketing for low-risk returns, and is now expanding use of AI agents throughout the company.

Lucas said: “I talk about the evolution of the self-driving enterprise. It's not going to happen overnight. This will happen process by process, piece by piece. That's what we want to give organizations. Find the low hanging fruit and deliver trust with that.”

You already have legacy AI technology to worry about. I caught up with two Boomi customers that were focused on data and system integration and already frustrated with early generative AI investments at their companies. These projects worked well enough, but were started more than a year ago. These enterprises, a consumer company and a pharmaceutical vendor, entered deals with OpenAI, which was the only game in town when the projects started. Now they can't easily go multi-model. Yes, the enterprises have trained OpenAI models on company data and sandboxed them internally. In a nutshell, OpenAI models are the front-end interface that taps into the model that has the internal data. "We did it to say we had AI and be on the bandwagon," said one of the customers.

Since you already have legacy AI technology (that’s basically any technology that’s older than 3 months), you’ll have to avoid being locked in. We covered what to look for in an agentic AI vendor last week and horizontal approaches were critical. There are architecture considerations that also matter. For instance, Hagstrand said Boomi took an API-centric approach to generative AI and now agents because it didn’t want to be tethered to any one model or vendor. Boomi uses models from OpenAI, Anthropic and Google Cloud, but can work in more models depending on the use case via Amazon Bedrock. Hagstrand said:

“If we think about this API first, we can try a lot of things and not get locked into large seat based contracts with frontier LLM companies just to try their tools and not even having a baseline for what AI adoption looks like.”
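
In code, the API-first pattern Hagstrand describes looks roughly like the sketch below; the backends are stubs rather than real SDK calls, and the model identifier is a placeholder. The point is that application code targets one narrow interface so models can be swapped per use case without rewrites.

    # Vendor-neutral sketch of an API-first model abstraction. Backends are
    # stubs standing in for real SDK calls; names are placeholders.
    from typing import Protocol

    class ChatModel(Protocol):
        def complete(self, prompt: str) -> str: ...

    class OpenAIBackend:
        def complete(self, prompt: str) -> str:
            return f"[openai stub] {prompt[:30]}..."

    class BedrockBackend:
        def __init__(self, model_id: str):
            self.model_id = model_id  # placeholder Bedrock model identifier

        def complete(self, prompt: str) -> str:
            return f"[bedrock:{self.model_id} stub] {prompt[:30]}..."

    def answer(model: ChatModel, question: str) -> str:
        # Application code never imports a vendor SDK directly.
        return model.complete(question)

    print(answer(OpenAIBackend(), "Summarize this support ticket"))
    print(answer(BedrockBackend("claude-3"), "Summarize this support ticket"))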

Control the user interface for additional learning and use-case knowhow. Boomi's approach to its internal agents revolves around its ChatB interface. While the models, data and APIs underneath are abstracted away, the data from the interface provides useful information on usage and use cases. "One of the benefits of controlling the user interface and having API connectivity is we can track and learn," said Hagstrand.

There's a broader point here: agentic AI means that enterprises can control their own interfaces, and more systems are likely to become headless. Boomi CEO Steve Lucas said: "I think SAP will always exist. Workday will always exist; Oracle will always exist. Here's the real question: how much of that exists in the future? I believe their UI will go away entirely."

Use cases evolve. Those generative AI use cases will come in handy because they create a foundation for agentic efforts. Start with agents that are focused on use cases that can drive measurable returns and then build up to multi-agent systems covering complex workflows. You’re not going to bite off a multi-agent system out of the gate no matter what the vendor tells you. Boomi said 78% of employees now rely on AI agents for daily tasks, productivity has improved 47% and the company has spent 75% less time on customer support requests. Here’s how Boomi’s internal AI agent use cases have evolved.

  1. First focus on sales and marketing use cases.
  2. Then expand to more deterministic use cases in customer success and support.
  3. Create multi-agent systems that handle complex workflows.
  4. And then build specialized agents for specific business functions.

Data quality matters. The companies launching genAI and now agentic AI projects thought that they had their data lakehouses in order. Once these enterprises scaled, they realized further data work was required to feed the models what they need to deliver accurate answers. Your data hygiene is even more important with multi-agent systems. Some companies are creating data quality agents.

Where data lives also matters. Maglio said where data lives remains a big issue and enterprises need to figure that out first. "The data has to live somewhere," said Maglio, who noted that his division decided to migrate data to Resolve.ai from multiple data repositories. "The data issue is why we slow rolled this," said Maglio. It’s not uncommon for enterprises to have more than a handful of data lakes.

Remember to dust off those old playbooks. A few folks at Boomi World 2025 made the case that agentic AI rhymes with microservices architecture. That take is on target, especially when you consider that standards are just being formed and a lot has to be worked out. Microservices took a while to gain traction because tooling needed to be built. In the end, AI agents, like various microservices, have to share data, communicate and ultimately create a modular system that can operate as one (yet be easier to maintain).

Don't be scared of consumption models (yet). Hagstrand said Boomi's internal use of agents is built around a consumption model. The argument here is that the company doesn't want to pay for seats when not all of them will leverage agents, and agentic AI is not mature enough to justify betting on seats and eating the cost of unused licenses.
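
The seat-versus-consumption argument is easy to sanity-check with placeholder numbers; the prices below are hypothetical, not Boomi's or any vendor's published rates.

    # Hypothetical seats-vs-consumption comparison. All rates are invented.
    SEAT_PRICE = 30.0         # assumed $/user/month for an agent seat
    PRICE_PER_REQUEST = 0.05  # assumed $ per agent request on consumption

    users = 1_000
    active_share = 0.25             # only a quarter of users touch agents
    requests_per_active_user = 200  # per month

    seat_cost = users * SEAT_PRICE
    consumption_cost = (users * active_share
                        * requests_per_active_user * PRICE_PER_REQUEST)

    print(f"Seats:       ${seat_cost:,.0f}/month")        # $30,000
    print(f"Consumption: ${consumption_cost:,.0f}/month") # $2,500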


Alibaba Cloud Q4 growth strong as Qwen, AI workloads extend reach

Alibaba Cloud is seeing more AI workloads, accelerating revenue growth and benefiting from its Qwen open source family of models.

Although Alibaba's overall fourth quarter results didn't impress given tariffs and economic concerns for its retail businesses, Alibaba Cloud saw fourth quarter revenue growth of 18% due to AI workloads. Alibaba Cloud revenue in the fourth quarter was $4.15 billion with operating income of $333 million.

For fiscal 2025, Alibaba Cloud delivered revenue of $16.26 billion, up 11%, with operating income of $1.45 billion.

Speaking on Alibaba's fourth quarter earnings call, CEO Eddie Wu devoted a lot of time to the company's cloud unit. "Revenue from AI related products has maintained triple digit year-over-year growth for the seventh consecutive quarter," said Wu. "We expect AI to remain a key driver of accelerated revenue growth for Alibaba Cloud."

Wu noted that while there are uncertainties in the global AI supply chain, customer demand remains strong. AI workloads are a long-term play compared with short-term supply chain fluctuations.

According to Wu, Alibaba will do the following:

  • Continue to invest in cloud and AI infrastructure.
  • Advance foundational research and innovation in large language models via its Qwen models.
  • Leverage open source distribution for Qwen. "By the end of April we had open sourced over 200 models under the Qwen family with more than 300 million downloads worldwide and over 100,000 derivative models, making it the world's largest open source model family," said Wu.

Wu also noted a few trends Alibaba Cloud is seeing. "Among large and mid-sized enterprises, AI applications are expanding from internal systems to more customer-facing use cases. At the same time, adoption of AI product is rapidly extending from large enterprises to a growing number of small and medium-sized businesses," said Wu.

In addition, workloads are broadening. Wu said Alibaba Cloud saw AI workloads in financial services, autonomous driving, internet and online services. Now AI workloads are broadening to more traditional industries, including farming and manufacturing.

"In terms of the trends that we're seeing across these different sectors with more and more companies adopting cloud based AI services, these are companies that had been using a traditional CPU based compute that are now turning to AI and AI compute," said Wu.

 

Alibaba Cloud's strategy is to leverage Qwen at the edge to drive workloads overall. Wu said:

"Our open source models have a lot of edge model applications and there are also applications that are suitable -- more suitable to be run on the cloud. There's a lot of different applications. They're not going to have much of an impact in terms of driving cloud business, but because those same customers are using the Qwen models, what that means is often they're also going to require additional usage of cloud based compute resources as well. I think that the edge models to a certain extent are complementary with our cloud based large parameter models. They work well together as a business model."

Constellation Research analyst Holger Mueller said Alibaba is in a good place with its cloud business:

"The four major cloud providers are dukjng it out in each market, but Alibaba has practically a monopoly for workloads in China and for Chinese companies - and doing well accordingly. When is that market saturated? It will be years before we know."


Salesforce revamps Agentforce pricing with Flex Credits: What you need to know

Salesforce is evolving its Agentforce pricing from a model based on $2 per conversation to a more flexible credit model. Ultimately, customers will be able to transition contracts to incorporate more credits.

The company is rolling out Flex Credits, where pricing adjusts based on usage and customers are only charged when an action occurs. Today, Salesforce charges $2 per conversation for all use cases. Going forward, Salesforce will charge by actions, which are defined as specific tasks performed by agents such as updating records and answering questions. There are multiple actions possible within a conversation.

Here's the pricing summary for Flex Credits (a quick cost check follows the list):

  • $500 USD per 100,000 Credits
  • One Agentforce action consumes 20 Flex Credits ($0.10 USD)
  • All customers with Enterprise Edition or above can get 100,000 Flex Credits for $0 with Salesforce Foundations.
  • In the summer, Salesforce will roll out Agentforce user licenses and add-ons for Agentforce for Sales, Service, and Industries, as well as Agentforce 1 Editions for Sales, Service, Field Service, and Industries. Pricing will be announced when generally available.
  • Flex Payment Models will be announced in the fall.
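
As a sanity check on the published numbers, the per-action math works out as follows; the five-action conversation is a hypothetical for comparison against the old $2-per-conversation flat rate.

    # Checking the published Flex Credit math: $500 buys 100,000 credits
    # and one action consumes 20 credits, so an action costs $0.10.
    CREDITS_PER_PACK = 100_000
    PRICE_PER_PACK = 500.0
    CREDITS_PER_ACTION = 20

    price_per_credit = PRICE_PER_PACK / CREDITS_PER_PACK      # $0.005
    price_per_action = price_per_credit * CREDITS_PER_ACTION  # $0.10

    # Hypothetical conversation that triggers 5 actions (update a record,
    # answer a question, ...), vs. the old $2 flat rate per conversation.
    actions = 5
    print(f"Per action: ${price_per_action:.2f}")
    print(f"5-action conversation: ${price_per_action * actions:.2f}")  # $0.50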

Salesforce's model arrives a few months after the launch of Agentforce, subsequent releases, developer and ecosystem follow-up, and customer feedback on a pricing model that revolved around $2 a conversation.

What Salesforce is trying to do is thread the needle: encourage Agentforce proof-of-concepts to move to production, provide one unit of measurement across use cases, and future-proof spending on its platform while aligning with the value created.

The pricing changes are part of a broader transformation to Salesforce models across its portfolio over the next two to three months. Multiple SaaS vendors are tweaking pricing models to account for AI agents. See: AI agents bring consumption models to SaaS: Goldilocks or headache?


As Salesforce customers evaluate the model, which will enable enterprises to convert seats to credits without early renewal or penalties, there will be multiple moving parts to consider:

  • Customers under the Flex Portfolio will need to model multiple plans to optimize. Flex Credits will scale up and down with usage. Editions will be based on seats, and Salesforce will launch new ones in June. Agreements will enable enterprises to convert seats to consumption models for users and agents. And there are pay-as-you-go plans available.


  • Flex Credits will be generally available later this month for Agentforce and cost $500 per 100,000 credits. This model will likely be expanded for other products in the future.
  • Flex Credits will include concurrent external and internal agent use cases. The conversation-based model works best for singular use cases.
  • Salesforce said Flex Credits will better scale for use cases that deliver value compared to static pricing.
  • Flex Credits will allow more transparency and granularity, but also be harder to predict and model initially.
  • Salesforce will feature an Agentforce Rate Card that determines how many Flex Credits a customer consumes. These actions will be a mix of included out-of-the-box actions as well as the ability to build custom ones.
  • The Einstein 1 Edition will become the Agentforce 1 Edition in the second quarter. For Agentforce, Flex Credits will be included, enterprises can swap seats for agents over time, and the edition adds enhanced features and Slack. Einstein 1 was Salesforce's effort to consolidate its products into one suite with Data Cloud and Einstein included.

Flex Credits will be sold under three consumption models:

  • Pre-purchased for the length of a contract and upfront payment with discounts available. This plan is available but doesn't fall under the Flex Credit model.
  • Pre-committed, which will be generally available in the third quarter. A pre-committed consumption model means a customer commits to a contractual amount billed monthly for usage, with a shortfall bill if the commitment goes unmet (see the sketch after this list).
  • Pay-as-you-Go, which is a Flex Credit plan with no commitment where you're billed on usage. That plan will be available in the third quarter too.
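
One plausible reading of the pre-committed model described above, sketched with a placeholder commitment figure (the commit size and the monthly shortfall cadence are assumptions, not Salesforce's published terms):

    # Sketch of pre-committed consumption with a shortfall charge.
    MONTHLY_COMMIT_CREDITS = 100_000  # placeholder contractual commitment
    PRICE_PER_CREDIT = 0.005          # $500 / 100,000 credits

    def monthly_bill(credits_used: int) -> float:
        # Pay for actual usage, topped up to the commit if under-consumed.
        billable = max(credits_used, MONTHLY_COMMIT_CREDITS)
        return billable * PRICE_PER_CREDIT

    print(monthly_bill(140_000))  # over commit: 700.0, purely usage-based
    print(monthly_bill(60_000))   # under commit: 500.0, includes shortfall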

Salesforce also shared examples of individual actions, each of which would be billed as one executed action.

To help customers track and optimize spending, Salesforce is providing a set of transparency and tracking tools. Salesforce is also developing calculators and simulations to help customers estimate costs under the new model compared to the previous structure.

The company has created a digital wallet for credits where enterprises can configure thresholds, set alerts for stakeholders and proactively monitor usage. Salesforce's Digital Wallet Usage Threshold Alerts are generally available.
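
The wallet's threshold alerts reduce to mechanics like the following sketch; the function and its parameters are hypothetical, not Salesforce's actual API.

    # Toy wallet monitor: alert stakeholders once usage crosses a threshold.
    def check_wallet(credits_used: int, allotment: int,
                     threshold_pct: float = 0.8) -> str | None:
        usage_pct = credits_used / allotment
        if usage_pct >= threshold_pct:
            return f"ALERT: {usage_pct:.0%} of {allotment:,} credits consumed"
        return None

    alert = check_wallet(credits_used=85_000, allotment=100_000)
    if alert:
        print(alert)  # ALERT: 85% of 100,000 credits consumed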

Salesforce plans to launch Digital Wallet Usage Tagging, which will provide insights to manage usage based on environment, agent and feature and help customers understand what use case is driving consumption and optimize spending. Usage tagging will be available for Flex Credits in June.

Perhaps the biggest takeaway is that Salesforce is willing to iterate on its pricing as enterprises and vendors work through new AI agent based models. Nevertheless, there will be an adjustment period for the vendor and the customer.

Constellation Research’s take

Salesforce moved to evolve its model after feedback from early customers and deserves props for iterating. However, there is a lot of work ahead for Salesforce.

The biggest mission for Salesforce in this new model is educating customers. It's unclear how this model will impact costs for more complex workflows and use cases.

In addition, transparency will be essential: tools to project and simulate use cases and associated costs will be critical.

Constellation Research analyst Holger Mueller said:

"Salesforce deserves a lot of credit for being the first vendor who put out a price for the utilization of its agents. But often, early pricing schemes don't stand the test of practicality. With the new flag space pricing, Salesforce moves more into the direction of both outcome and consumption-based pricing.

On the surface, this pricing approach is fairer for enterprises, as outcomes matter more to corporations than just the invocation of an agent. The pricing model is fair to Salesforce as well, as pricing needs to reflect the cloud computing resources consumed by agents and therefore needs to be resource-consumption based.

Changes to pricing are always a sensitive operation, and we will see how Salesforce will fare here. Only one thing is certain: This will not be the last change of agent pricing in the market."

Liz Miller, an analyst at Constellation Research, said:

“While this move will help address the cost of AI -- a cost that organizations are still struggling to justify and extract maximum value from -- it will also help organizations get started. Right now the reality is that there is a lot of experimentation that falls short of scale.

For its part, Salesforce has been focused on simplifying this ever-changing pricing as much as possible. But while Salesforce has looked to perfect a consumption model, others have chosen to build their costs into existing subscriptions, noting that AI should just be part of the solution and not an additional feature or cost. Which path is right for tech builders and their customers is still up for debate. But the one thing we know for sure: this won't be the last pricing change as AI continues to upend all norms.”


Cisco delivers strong Q3 amid AI infrastructure, security traction

Cisco reported better-than-expected third quarter earnings as the company saw a surge in AI infrastructure demand and strong results from Splunk.

The company reported third quarter earnings of 62 cents a share on revenue of $14.1 billion, up 11% from a year ago. Non-GAAP earnings were 96 cents a share.

Wall Street was looking for fiscal third quarter non-GAAP earnings of 92 cents a share on revenue of $14.06 billion.

Cisco said product orders were up 20% from a year ago and 9% excluding Splunk. AI infrastructure orders from hyperscale vendors topped $600 million in the quarter.

As for the outlook, Cisco projected fourth quarter revenue of $14.5 billion to $14.7 billion with non-GAAP earnings of 96 cents a share to 98 cents a share. Fiscal 2025 revenue will land between $56.5 billion and $56.7 billion with non-GAAP earnings of $3.77 a share to $3.79 a share.

Chuck Robbins, CEO of Cisco, said the company is seeing strong demand driven by secure networking and AI infrastructure.

Cisco has been stepping up its visibility in emerging technology areas, including AI and quantum, with a flurry of moves. Consider:

  • Cisco said it will join the AI Infrastructure Partnership (AIP), which is led by BlackRock, Global Infrastructure Partners (GIP), MGX, Microsoft, NVIDIA and xAI. GE Vernova and NextEra Energy also recently joined.
  • The networking giant also said it will partner with Saudi Arabia's AI enterprise HUMAIN to scale cost efficient AI infrastructure. Cisco will offer its networking and data center stack, security tools and software.
  • Cisco also said it extended a partnership with G42, a UAE-based AI company, to develop AI infrastructure. The two companies signed a memorandum of understanding that revolves around go-to-market, AI infrastructure expertise and global expansion. The two companies will also consider co-developing AI cybersecurity applications.
  • Cisco said its quantum networking chip prototype was developed with UC Santa Barbara and generates up to 1 million entangled photon pairs per second at room temperature.

By the numbers:

  • Cisco’s networking business posted third quarter revenue of $7.07 billion, up 8% from a year ago.
  • Security revenue in the third quarter was up 54% to $2.01 billion.
  • Collaboration revenue was up 4% to $1.03 billion.
  • Observability revenue was $261 million, up 24%.