
Google I/O 2025: Google aims for a universal AI assistant

Google's vision is to leverage its data, properties and services to create a universal AI assistant that is agentic, understands your personal context and carries out tasks. Google sees Gemini 2.5 Pro becoming a world model that can make plans, create new experiences and simulate the world.

Last year, Google was among the first to introduce agentic AI and the role of agents at Google Cloud Next and Google I/O. This year's Google I/O is about more than new models (although they play a big part); it's about the agentic experiences those models can provide in multiple environments, including Android, Google Meet and search.

"We are shipping faster than ever since last I/O. We have announced over a dozen foundation models, multiple research breakthroughs, and released over 20 major AI products and features, and it's only a slice of the innovation that's happening across the company, from search to cloud to YouTube and subscriptions and more," said Alphabet CEO Sundar Pichai, who said in a briefing that Google is leveraging its full hardware stack to roll out new models.

Google is processing 480 trillion tokens per month across its products and APIs, up from 9.7 trillion per month a year ago, roughly a 50-fold increase. That tally is likely to grow as Google rolls out its AI Mode search experience with its latest Gemini models.

"It's a total reimagining of search with more advanced reasoning. You can ask longer and more complex queries, like query you see there. In fact, early testers have been responding very positively," said Pichai. "They've been asking queries two to three times, sometimes as long as five times the length of traditional searches. And you can go further with follow up questions. All of this is available as a new tab right in search."

Search is the most obvious area for AI, and the most critical one since it's Alphabet's profit engine, but Google is rolling out AI in multiple contexts. One interesting development is Google Beam, an AI-first video communication platform that will be rolled out with HP. Google Beam, which emerged from a project outlined at Google I/O a few years ago, combines six cameras with an AI model that creates a realistic 3D experience from the video streams.


Google also highlighted real-time language translation in Google Meet between two coworkers, one speaking Spanish and the other English.

Select Google I/O news includes:

  • Project Mariner capabilities in search and Gemini API.
  • Gemini 2.5 Pro and Flash generally available soon (see the API sketch after this list).
  • Gemini 2.5 Flash updates.
  • Gemini 2.5 Pro Deep Think reasoning mode.
  • Veo 3 and Imagen 4 models.
  • AI Overviews and AI Mode using Gemini 2.5.
  • Deep Search in AI Mode.
  • Search Live.
  • Gemini is coming to Chrome.
  • Try it on, a shopping feature where you can upload a full-body picture and Google will use AI to show how clothes would look on you.
  • Agentic checkout, a tool to monitor prices and have an AI agent complete the purchase automatically.
  • Gemini app is getting Gemini Live with camera and screen sharing. Google apps will also come to Gemini Live.
  • Gemini software development kit will be compatible with Model Context Protocol (MCP).
  • Multiple AI-driven features in the latest Android.
  • Jules, an agentic coding assistant, is in public beta. 
  • Google has partnered with Gentle Monster and Warby Parker to create glasses powered by Android XR that people will actually want to wear.
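
For developers, here's a minimal sketch of what calling a Gemini 2.5 model through the Gemini API looks like using the google-genai Python SDK. Treat it as an illustration rather than Google's documentation: the model ID, environment variable and prompt are assumptions, and availability depends on Google's rollout.

```python
# Minimal sketch: call a Gemini 2.5 model through the Gemini API.
# Assumes the google-genai SDK (pip install google-genai) and an API key from
# Google AI Studio; the model ID and prompt below are illustrative only.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed identifier once generally available
    contents="Summarize the agentic AI announcements from Google I/O in three bullets.",
)
print(response.text)
```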

The latest tools will be bundled into a new Google AI Ultra subscription tier that will run $249.99 a month. That plan includes the Gemini app with 2.5 Pro Deep Think and Veo 3, the highest limits in NotebookLM, Gemini in Gmail, Docs and other Google apps, Project Mariner, YouTube Premium and 30TB of storage.

The big picture: An AI experience engine

Although the news out of Google I/O is extensive, and enterprise grade once you include the innovation landing in Google Cloud, the big picture here is where the company is headed with its models and agentic AI approach.

Think of Google's AI as its experience engine. Demis Hassabis, CEO of Google DeepMind, said the company is updating Gemini 2.5 Flash, which is popular with developers due to speed and low cost. The Flash update will land in June with a Pro update soon after.

Hassabis said:

"Our recent updates to Gemini are critical steps towards unlocking our vision for a universal AI assistant, one that's helpful in your everyday life, that's intelligent and understands the context you're in, and they can plan and take actions on your behalf across any device. This is our ultimate goal for the Gemini app, an AI that's personal, proactive and powerful."

For instance, Google's Project Astra highlights how the Gemini App can find documents, read your email with permission and carry out tasks like phoning a company and making an appointment.

The demo highlighted a bicycle repair and associated questions, and the data lived in Google properties (Search, YouTube and Gmail), but you can see where the company is heading. At some point, agents will be able to realistically traverse third-party data stores and services.

"The universal AI assistant will perform everyday tasks for us. It'll take care of our mundane admin and surface delightful new recommendations making us more productive and enriching our lives," said Hassabis. "We're gathering feedback about these capabilities now from trusted testers and working to bring them to Gemini Live new experiences in search, the Live API for developers, as well as new form factors like Android XR glasses, another important way to demonstrate understanding of the world is to be able to generate aspects of it accurately."


Here's how Hassabis sees this universal AI assistant theme playing out in the short term:

  • Gemini 2.5 Pro will be able to use world knowledge and reasoning to simulate natural environments in conjunction with models like Veo. 
  • Making Gemini a world model is a step toward a universal AI assistant. 
  • The Gemini app will be transformed into a universal AI assistant. Capabilities outlined in Project Astra last year will turn up in the Gemini app.
  • For users, agentic AI embedded into the Gemini app will enable them to multitask better. 

Liz Reid, Head of Google Search, outlined how AI overviews will change and Vidhya Srinivasan, General Manager of Google Ads and Commerce, highlighted how AI will change shopping, answer follow-up questions and help consumers buy products throughout the customer journey. Ultimately, this AI assistant theme will run through all of Google's services. 

Reid said the search improvements are designed to move from information to intelligence. "There are a lot of times where you come to search and you're just trying to find something when you really need a recommendation. We're going to be enabling you, starting with your own searches, but also giving you an option to opt in to connect your Google Apps, things like Gmail, so that you can just get much better recommendations fit for you," she said.

Srinivasan walked through a shopping experience where Google's AI agent watched prices and options and then, since it had all the billing information, could complete a checkout. The consumer would have to approve the transaction, but the theme is the same: Google is moving from agents that provide information to ones that do things. "This is really the start of a search that goes beyond information to intelligence," said Srinivasan.

What's clear is that agentic AI is going to be the new user interface. That impact may happen well before agents start executing tasks and handling all the mundane work of day-to-day life.


Google Beam aims to bring 3D, AI to video conferencing with HP

Google launched Google Beam, a 3D video communications platform that will initially come to market with HP.

The platform, which was formerly known as Project Starline, is designed to make remote video conferencing 3D and appear like participants are in the same room without glasses or headsets.

Google Beam will use an AI volumetric video model and multiple cameras to make 2D calls look immersive and three-dimensional. Google Beam is built on Google Cloud infrastructure.

According to Google, Google Beam will combine the AI video model with a light field display to create a sense of dimension and depth, making it appear as if you're in the same room with the other person.

Google also said it is exploring real-time speech translation with Google Beam. That feature is landing in Google Meet.

Coming to the workplace

HP will be the first vendor to bring Google Beam devices to market later in 2025 and will unveil them at InfoComm, an audio-visual conference that runs June 7 to June 13.

Google is also working with Zoom and channel partners to bring Google Beam to market.

Google said it has already been trying out Google Beam with early customers including Deloitte, Salesforce, Citadel, NEC, Hackensack Meridian Health, Duolingo and Recruit.

A few thoughts:

  • Google Beam appears to be a big advance in remote collaboration.
  • Yet, the Google Beam advance is a bit ironic given that enterprises are hellbent on getting people back to the office.
  • Pricing will be the big issue for Google Beam. Of course, Google Beam will be way cheaper than those telepresence systems of yesteryear, but the price point has to appeal to prosumers and consumers too.
  • The real win will be embedding this technology into Google Meet and Zoom for market coverage. I assume that Microsoft won't be embedding Google Beam into Teams devices anytime soon.

Dell Technologies: Welcome to the disaggregated data center

Dell Technologies said enterprise data centers, which will increasingly support AI workloads, will become disaggregated as IT buyers look for maximum flexibility.

According to the company, the modern data center will be disaggregated and more turnkey courtesy of software automation. Private clouds will also take architecture cues from AI factories and Nvidia's designs. Current data centers typically have three tiers, multiple vendors and hyperconverged infrastructure. The data center stew gets complicated.


"What we're doing here with our software driven automation and support for open ecosystem is we are architecting outcomes for our customers, turnkey outcomes for our customers that want to take advantage of our disaggregated infrastructure and have that open flexibility for their most critical workloads. We're doing this across private cloud and edge," said Varun Chhabra, Senior Vice President of Infrastructure Solutions Group (ISG) at Dell Technologies.

The big themes in this disaggregated approach are:

  • Enterprises are figuring out how to support AI workloads via private clouds and on-premises.
  • Customers are very wary of lock-in, a concern amplified by Broadcom's purchase of VMware. Enterprises want validated blueprints for stacks like VMware, Red Hat and Nutanix.
  • Existing business infrastructure often incorporates hyperconverged infrastructure, which trades flexibility for simplicity.
  • Companies also want automation on top of Dell servers, storage and networking gear.


Ultimately, Dell's pitch for modern data centers is automated architectures that enable enterprises to swap out private cloud vendors and components. Dell executives said the disaggregated data center offerings are separate from its previous Apex private cloud efforts, which were touted at Dell Technologies World in 2023 and 2024.

To build out the parts of this disaggregated data center, Dell launched the following components.

  • Dell Private Cloud, a system that can run VMware, Red Hat or Nutanix.
  • An all-flash Data Domain appliance that will feature 4x faster backup and 2x faster restores with lower power consumption.
  • Advanced ransomware protection in PowerStore systems. AI ransomware detection will be built into the appliance.
  • PowerFlex Software Defined Platform updates.
  • Native Edge updates and support for Nvidia and its latest software development kits. Native Edge will also have support for third party software vendors such as GE Digital.
  • Native Edge support for third party and existing hardware in addition to Dell endpoints.

D-Wave's Advantage2 quantum computer generally available

D-Wave Quantum said its Advantage2 annealing quantum computer is now commercially available and is likely to contribute to revenue growth.

Advantage2 is available via D-Wave's Leap quantum cloud service as well as on-premises deployments. Dr. Alan Baratz, CEO of D-Wave, said the system is a milestone in the company's development and able to "solve hard problems outside the reach of one of the world’s largest exascale GPU-based classical supercomputers."

Quantum annealing is a form of quantum computing designed for optimization rather than general-purpose computing. Quantum annealing shines when the goal is to find the best configuration in use cases such as logistics, finance and AI. Critics argue that quantum annealing has limited problem-solving capabilities due to a lack of individual qubit control.
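
To make the optimization framing concrete, here is a minimal sketch of posing a toy problem as a QUBO (quadratic unconstrained binary optimization), the form an annealer samples, using D-Wave's open-source Ocean SDK. The coefficients are illustrative and the hardware path in the comment assumes a Leap account; it's a sketch, not anything D-Wave announced.

```python
# Minimal sketch: pose a toy two-variable optimization as a QUBO and sample it.
# Assumes D-Wave's open-source Ocean SDK (pip install dwave-ocean-sdk); the
# coefficients are illustrative only.
from dimod import BinaryQuadraticModel, ExactSolver

# Objective: minimize -x1 - x2 + 2*x1*x2, i.e., reward picking exactly one of the two.
bqm = BinaryQuadraticModel({"x1": -1.0, "x2": -1.0}, {("x1", "x2"): 2.0}, 0.0, "BINARY")

# Exhaustive classical solve is fine for a toy problem; for real workloads the
# same BQM would go to the annealer via Leap, e.g.:
#   from dwave.system import DWaveSampler, EmbeddingComposite
#   sampleset = EmbeddingComposite(DWaveSampler()).sample(bqm, num_reads=100)
sampleset = ExactSolver().sample(bqm)
print(sampleset.first.sample, sampleset.first.energy)
```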

D-Wave is one of the few companies pursuing this approach. Superconducting qubits are seen as the quantum computing variant with the most long-term promise, with IBM, Google and Rigetti Computing pursuing that approach. Trapped-ion quantum computing, known for high fidelity and long coherence times, is pushed by IonQ and Quantinuum. Microsoft is pursuing topological quantum computing and QuEra is focused on neutral atoms.

D-Wave's Advantage2 quantum processor has 20-way connectivity, a 40% increase in energy efficiency and a 75% reduction in noise. The company has made Advantage2 prototypes available via its Leap cloud service since 2022. According to the company, D-Wave has run more than 20.6 million customer problems on the system.

The company said Advantage2 has been used by Japan Tobacco, Jülich Supercomputing Center and Los Alamos National Laboratory. In the first quarter, D-Wave reported $15 million in revenue, up 509% from a year ago. Of that sales total, $12.6 million was the sale of an Advantage2 system to Jülich Supercomputing Center. D-Wave reported a first quarter net loss of $5.4 million with a cash balance of more than $304 million.

Speaking on D-Wave's earnings conference call, Baratz said there's a bias in the US against D-Wave's annealing approach compared with the rest of the world. He said there is a lot of interest in Advantage2 among governments and supercomputing centers globally, but "frankly less so in the US."

He added:

"There is a strong gate model bias in the US Government. That is something that we are working hard to address. And we're making incremental progress, but we are not there yet. We believe it's a huge mistake on the part of the US Government because frankly, other governments around the world are looking at quantum computing to help solve important hard problems today, recognizing that annealing can do that while gate model can. And the US, in my view, admittedly somewhat biased, is falling way behind on this and really needs to get that sorted out.

I would not say that we're seeing a lot of interest from the US Government in system sales at this point in time."

Nevertheless, Baratz said Advantage2 will be able to solve problems in optimization, AI and materials science. An Advantage2 annealing quantum system is complete at Davidson Technology in Huntsville, Alabama, and is currently going through calibration and readiness testing.

D-Wave has other potential Advantage2 systems to sell in the pipeline, but Baratz said there are long lead times. D-Wave just completed its first system sale in the first quarter.

Baratz said:

"System sales tend to take time. And so, while we have a handful that we are working on and some maybe sooner than others, these are long lead sales opportunities. So, it will take us some time to get there. But we're encouraged by the level of interest based on, in part, the supremacy resolve and the demonstration of capabilities that the system has when you're able to control more of the operating parameters than possible through the quantum cloud service."



SAP's big plan at Sapphire 2025: Make its Joule AI agent omnipresent

SAP unveiled the next evolution of its Joule AI agent, and the plan is to make it omnipresent across the ERP giant's platform and even follow business users into third-party systems.

The vision for Joule aims to create seamless AI assistance within SAP and extend to other business applications. The proactive AI approach is powered by SAP's WalkMe, which takes context from business applications and UI behaviors to suggest actions, automation and agents to use.

To extend to external data and systems, SAP said Joule will combine AI platforms and SAP Knowledge Graph data to solve business problems.

CEO Christian Klein argued that Joule will bring "agenticness" to SAP's platform. SAP Business Suite will include a number of pre-built agents in customer experience, supply chain and spend management. Most of those will be delivered in the fourth quarter. SAP added that it has partnered with Google Cloud to make agents interoperable. "With the expansion of Joule, our partnerships with leading AI pioneers, and advancements in SAP Business Data Cloud, we’re delivering on the promise of Business AI as we drive digital transformations that help customers thrive in an increasingly unpredictable world," said Klein.

Joule was the headliner, but there was a bevy of smaller announcements in various categories at Sapphire.

SAP is also betting that Joule can make SAP implementations easier and move customers to S/4HANA Cloud. The company launched Joule for Consultants and a set of AI tools to accelerate cloud transformation.

Customers will also see some SAP Business AI pricing changes. The base AI package will include Joule Base, which has navigational and informational capabilities, and standard embedded features. Usage in Base AI is unlimited.


SAP is adding per user per month plans with Joule Premium, which will include variations of its agent for functions such as supply chain, human capital management, customer experience, and developers. These plans require a set amount of AI units.

Transactional capabilities such as Datasphere, document grounding, SAP Document AI and others will be on a consumption based model where AI units are consumed per request or record.

There are also tools to build custom Joule agents via Joule Studio in SAP Build. Enterprises will be able to build custom skills and AI agents for business needs. Joule Studio will be generally available in the second quarter.

What remains to be seen is whether SAP's omnipresent Joule approach becomes the new operating system for enterprises or simply a natural language interface. It's a question a lot of enterprise software vendors, who are used to cross-selling applications, are trying to answer.

Constellation Research's take

Constellation Research has a handful of analysts at SAP Sapphire making sense of the barrage of developments. Here's what Holger Mueller had to say:

"It's Joule, Joule, Joule at Sapphire and SAP is right to push on its AI agent as it holds the biggest potential for its customer to change business outcomes. The re-platform of SAP AI on SAP Business Data Cloud on top of Databricks is the architecture area to pay attention to. As everybody knows - AI is only as good as the underlying data. Assuming the data is right, it'll be critical to see what SAP will do on the innovation side for Finance, HCM, SCM, Purchasing and more.

After data it is the APIs that determine the capability of agent. SAP needs to show some innovation and further capabilities here. The good news is that customers are moving to the cloud - less because SAP has gotten the upgrade value proposition right - but because CxOs know the need to be in the cloud in order to leverage AI."


Dell Technologies expands AI factory efforts with Nvidia, AMD, Intel, ecosystem

Dell Technologies expanded its AI factory portfolio and deepened its ecosystem with tighter partnerships with Nvidia, AMD and large language model companies, backed by a host of new servers, networking gear and integrated systems.

The upshot: Dell is prepping AI factories for more than just cloud deployments in a bet that on-premises and air-gapped implementations will be just as important.

The announcements, delivered at Dell Technologies World, mark the next phase of the company's strategy to bring AI factories to enterprises with infrastructure, an open ecosystem and services as it also sells to hyperscalers. Dell said it has more than 3,000 AI factory customers following a big push in 2024.

Varun Chhabra, Senior Vice President of Infrastructure Solutions Group (ISG), said Dell's AI factory strategy and its various parts reflect the need to increase compute performance while lowering power costs. "Our industry is facing a big challenge. GPU demand is skyrocketing because energy capacity is struggling to keep pace," said Chhabra. "What we hear from customers most often as they talk about retrofitting their data centers is how do they get the latest GPUs and get help with cooling and energy bottlenecks."


Here's the lineup:

  • Dell PowerEdge XE9785 and XE9785L, servers that feature AMD Instinct MI350 Series AI GPUs, 8-way AMD Infinity Fabric interconnects, 288GB of HBM3E memory per GPU and liquid cooling as an option. The systems have 35 times better performance than their MI300X-based predecessors. Dell also said it is supporting AMD's AI stack.
  • Dell AI Factory built on Nvidia's stack, coupling Dell hardware, Nvidia AI Enterprise and managed services.
  • PowerEdge servers purpose built for model training and fine tuning. Dell PowerEdge XE9780/85/80L/85L servers can feature Intel or AMD CPUs and 8-way Nvidia HGX B300, more throughput and options for liquid or air cooling.
  • Dell PowerEdge XE7745 with RTX Pro 6000, which will be available in July and features up to eight Nvidia RTX Pro 6000 Blackwell Server Edition PCIe GPUs. The servers are also optimized for inferencing, acceleration and cluster networking.
  • Dell PowerEdge XE9712, which features Nvidia GB300 NVL72, and will have support for Nvidia Vera Rubin NVL144 and NVL576. This rack system is aimed at hyperscalers.


  • PowerEdge servers will run Google Gemini models as part of Google Distributed Cloud.
  • Dell AI Platform with Intel will include Gaudi 3 AI accelerators coupled with an open-source software stack.
  • Dell PowerCool Enclosed Rear Door Heat Exchanger with Dell Integrated Rack Controller. The company said the system can lower cooling energy by 60% and enable customers to deploy 16% more racks with the same power. For maintenance operations, Dell has hot-swappable fans, centralized monitoring and real-time insights.
  • A data platform designed to speed up throughput with its Project Lightning parallel file system. The system is designed to automate Iceberg table management, use LLMs within SQL and streamline data products and managed services. Dell AI Data Platform has a version that rides on Nvidia's models and software.
  • Dell AI Networking with low powered transceivers optimized for PowerEdge and PowerSwitch hardware.
  • Dell NativeEdge, which couples Nvidia GPUs with Dell's NativeEdge operating system for servers and endpoints. Dell has low-power AI accelerators on its NativeEdge gateways and endpoints. The company also includes NativeEdge Blueprints for Nvidia, GE Digital and Palo Alto Networks and discounted Nvidia AI Enterprise licenses.
  • Dell PowerSwitch SN5600, SN2201 and Nvidia Quantum-3 switches for Ethernet and Nvidia InfiniBand.
  • Partnerships to include software vendors and LLM players for on-premises AI factories. Models from Cohere, Google Cloud, Meta and Mistral AI are available, as is Red Hat's stack.
  • Dell is also positioning its PCs as edge inferencing devices. To that end, the company launched new Dell Pro Max AI PCs, which feature neural processors from Qualcomm. The highest end model can inference a 70B parameter model.

HOT TAKE: Microsoft Keeps Pace in Multi-Agent AI Race

Microsoft is making a number of announcements around its Copilot Studio and Copilot agents during its Build event in Seattle this week. Not to be topped by ServiceNow’s recent announcement of its AI agent orchestration tools, the highlights of this week’s releases focus on enabling developers and lines of business to better unite around building, maintaining and optimizing AI for both productivity and cost effectiveness.

Additional Copilot Studio capabilities, including enhancements to AI agent support, were announced:

  • Multi-agent orchestration - Rather than relying on a single agent to do everything, or managing disconnected agents in silos, organizations can now build multi-agent systems in Copilot Studio (preview), where agents delegate tasks to one another (see the generic sketch after this list). This includes agents built with the Microsoft 365 agent builder, Microsoft Azure AI Agents Service, and Microsoft Fabric.
  • Computer Use in Copilot Studio agents - Agents can now interact with desktop apps and websites like a person would: clicking buttons, navigating menus, typing in fields, and adapting automatically as the interface changes. This opens the door to automating complex, UI-based tasks.
  • Bring your own model and model fine-tuning - Makers can access more than 1,900 models in Azure AI Foundry, including the latest models from OpenAI (GPT-4.5), Llama, DeepSeek and custom models, and fine-tune them using enterprise data to generate domain-specific, high-value responses.
  • Model Context Protocol makes it easier to connect Copilot Studio to your enterprise knowledge systems.
  • Microsoft Entra Agent ID, for agents created through Copilot Studio or Azure AI Foundry, automatically assigns agents an identity, giving security admins visibility and control in the same tool they use to manage organizational identity and access.
  • The Agent feed hub will allow end users to oversee teams of agents in Power Apps, showing task status and flagging where an agent is stuck and needs help.
  • Microsoft Purview Information Protection will be extended to Copilot Studio agents that use Microsoft Dataverse, allowing organizations to automatically classify and protect sensitive data at scale.
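
To illustrate what "agents delegating tasks to one another" means in practice, here is a deliberately generic Python sketch of an orchestrator routing subtasks to specialist agents. It is not Copilot Studio code; the agent names, routing plan and stubbed handlers are hypothetical and only show the shape of multi-agent orchestration.

```python
# Generic illustration of multi-agent delegation (not Copilot Studio code):
# an orchestrator inspects a request and hands subtasks to specialist agents.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]  # takes a subtask, returns a result


def crm_agent(task: str) -> str:
    return f"[crm] pulled account data for: {task}"


def docs_agent(task: str) -> str:
    return f"[docs] drafted proposal for: {task}"


class Orchestrator:
    def __init__(self, agents: Dict[str, Agent]):
        self.agents = agents

    def run(self, request: str) -> List[str]:
        # Naive fixed routing; in a real system an LLM or planner decides which
        # agent gets which subtask and in what order.
        plan = [("crm", request), ("docs", request)]
        return [self.agents[name].handle(subtask) for name, subtask in plan]


orchestrator = Orchestrator({
    "crm": Agent("crm", crm_agent),
    "docs": Agent("docs", docs_agent),
})
for step in orchestrator.run("renewal proposal for a sample account"):
    print(step)
```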

The company is also providing some interesting new tools that can help developers and project leaders on the line-of-business side better account for the costs of consumption-based tools like AI agents. These updates include:

  • Enhancements to Billing and Usage - To support flexible deployment, CCS introduces pay-as-you-go (PAYG) group-level billing for agents in M365 Copilot Chat. This model ensures that organizations only pay for what they use, while maintaining full visibility and control over agent usage expenses by departments and user groups.
  • Message consumption report - The new Message consumption report supports agent management decisions by enabling AI admins to monitor billed messages, identify high-usage scenarios, and gain visibility into message consumption by agent, user, billing policy, and user-agent pair.

Microsoft is also looking to improve the effectiveness of agents by expanding the types of data and content they can consume out of the box. This is supported by two new capabilities:

  • New agent publishing channels - Copilot Studio can now publish agents to Microsoft 365 Copilot and, coming soon, will be able to publish agents to SharePoint and WhatsApp. Microsoft is also adding new categories to ground and tune agents, including “Responses,” “Moderation,” “User Feedback,” “User Input” and “Knowledge.”
  • Knowledge controls - Copilot Studio now supports more input sources including OneDrive files, SharePoint lists, Teams and external sources such as ServiceNow, Zendesk and SAP.

The interoperability with ServiceNow and SAP is notable. Most of Microsoft’s applications customers are mid market in size and scope, but Microsoft’s AI ambitions are clearly in the enterprise, so a strong multi-agent approach that incorporates common enterprise applications is table stakes.

All of these announcements point to one increasingly obvious fact. The future of business will be “multi-agent” - both in terms of multiple agents inside single applications working together, as well as needing to orchestrate agentic flows between multiple systems to automate even the most common tasks. This is less a race for dominance and more a race for sensible interoperability given the obvious implications among legit enterprise AI providers like Microsoft, Oracle, ServiceNow, Salesforce, etc.

For growth leaders, these advancements continue to offer up opportunities to reevaluate the go-to-market tech stack and find areas to “re-balance” between human and digital labor. This is both from an overall cost and budget perspective and from a perspective of reducing complexity and friction in GTM motions. Take time to evaluate these new announcements (even as they come with increasing rapidity), and strategize how they fit into your overarching AI plans. For Microsoft customers, these tools are easy to consume and test. But regardless of how reliant you are on Microsoft technology, or whether you’ve chosen (or are looking to choose) another vendor as your “anchor” agentic AI provider, the truth is that in this multi-agent future we are going to have to understand, consume and integrate with multiple agentic AI platforms on a constant basis.


Microsoft wants to be your agentic AI developer stack

Microsoft wants developers to build multi-agent systems and is laying the groundwork across its tools, as well as supporting the protocols that will make it happen.

At its Build conference, Microsoft CEO Satya Nadella laid out the plan to help build what he called the open agentic web, where AI agents make decisions and carry out tasks.

Microsoft outlined broad support for Model Context Protocol (MCP), Google Cloud's Agent2Agent (A2A) protocol and a new project called NLWeb. NLWeb is an open project that is akin to HTML for the agentic web; it makes it possible for site owners to create a conversational interface using the model of their choice and their own data. Microsoft noted that every NLWeb endpoint is also an MCP server.
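
Because MCP runs through so many of these announcements, here is a minimal sketch of an MCP server built with the FastMCP helper from the open-source MCP Python SDK. The tool name and stubbed data are hypothetical illustrations, and this is not Microsoft's NLWeb implementation; it simply shows what exposing a tool to an agent over MCP looks like.

```python
# Minimal sketch of an MCP server exposing one tool over stdio.
# Assumes the open-source MCP Python SDK (pip install "mcp[cli]"); the tool and
# its stubbed data are hypothetical, not Microsoft's NLWeb code.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-knowledge-server")


@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order from a stubbed back-end system."""
    # A real server would query an enterprise system (CRM, ticketing, etc.).
    return f"Order {order_id}: shipped"


if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport so an agent host can attach
```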

The company also rolled out pre-built agents, custom agent building blocks and multi-agent tools. These tools are available through Azure AI Foundry Agent Service, Azure AI Foundry and Microsoft Entra Agent ID, which is in preview. Microsoft also rolled out Microsoft 365 Copilot Tuning and multi-agent orchestration in Copilot Studio.

Microsoft's vision is that an agent built in Copilot Studio can pull CRM data, hand it off to another agent to build a proposal in Word and schedule meetings in Outlook. Microsoft obviously sees its own applications playing a big role in these agentic AI workflows, but also working across multiple third-party systems.

If Microsoft's agentic AI vision sounds familiar, that's because there are multiple players looking to be the conductor of the AI agent orchestra. Hyperscale cloud providers (Google Cloud, AWS, Azure) want to help you build, deploy and manage agents, as do SaaS platforms (Salesforce, ServiceNow and SAP), integration platforms (Boomi) and systems integrators.

However, there's no doubt that Microsoft has a developer army behind it.

Microsoft outlined the following moves on the agentic AI front:

  • Microsoft is launching centralized agent identity management via Entra Agent ID. Security was also a focus for agent deployments across the portfolio. Copilot Studio will have security controls for every stage of agent creation and operation as well as privacy controls, safeguards and protections for sensitive data. According to Microsoft, Entra Agent ID will "tackle the AI agent sprawl problem by assigning a unique identifier to every agent in an environment."


  • Copilot Studio will get support for multi-agent systems that delegate tasks to one another. That capability will cover agents built with the Microsoft 365 agent builder, Azure AI Agents Service and Microsoft Fabric. Microsoft rolled out a series of toolkits and software development kits for agents.
  • MCP will be broadly supported across Microsoft's developer stack. Developers will be able to build agents for Teams using the A2A protocol, and agents will be able to exchange messages, data and credentials without intermediaries. Teams will also be able to recall interactions to give agents more context.
  • The company launched Microsoft 365 Copilot Tuning, which is a low-code way to train models and create agents, multi-agent orchestration and capabilities to build agents across Microsoft applications. Microsoft also outlined the Microsoft 365 Agents Toolkit to create and customize agents using the AI stack they want.
  • Microsoft introduced the concept of Agentic DevOps, which means agents automate and optimize software development at each step. This approach will be layered in GitHub Copilot and Microsoft Azure.
  • MCP will be supported on Windows 11 so agents and applications can use tools. Microsoft said MCP on Windows 11 will be in early preview to gather feedback from developers. Microsoft said it is building a security architecture for MCP capabilities.

Other nuggets worth noting from Microsoft's barrage of Build news include:

  • Azure AI Foundry added Grok 3 and Grok 3 mini models from xAI.
  • Azure AI Foundry now has more than 1,900 models.
  • Microsoft said developers can create AI agents with connections to Azure Databricks for real-time enterprise data processing.
  • Microsoft Dynamics 365 has new MCP servers in preview to make Dynamics 365 data and actions accessible for AI agents.
  • Windows AI Foundry launched to provide a unified platform for local AI development. Developers can bring their own models and deploy them across various platforms.
  • Post-quantum cryptography algorithms are now in preview.

Constellation Research analyst Holger Mueller said:

"Microsoft pushes boardly across it's offerings. It is likely the less developer-centric Build conference on record, but Microsoft has to infuse AI across its platforms, of course in Azure, its data layer with fabric, and don't forget Windows and Edge. This Build conference is dual push deeper into the data foundation of AI on the one side and into the AI delivery platforms. Of note is also that Build is not only all about AI, but quantum keys arrive on the 800 million Windows machines."



AMD sells ZT Systems manufacturing business to Sanmina for $3 billion

AMD said it has sold ZT Systems' manufacturing business to Sanmina, a contract equipment manufacturer, in a deal valued at $3 billion in cash and stock.

The chipmaker acquired ZT Systems last year for $4.9 billion. When AMD acquired ZT Systems it said the deal was about acquiring talent and expertise in designing next-generation data centers and that it would sell the manufacturing business.

AMD retains ZT Systems' AI infrastructure design and customer enablement business, which will focus on cloud customers. As part of the deal, Sanmina will become a preferred new product introduction manufacturing partner for AMD's cloud rack and cluster-scale AI hardware.

Jure Sola, CEO of Sanmina, said that ZT Systems' liquid cooling hardware and AI infrastructure experience will complement its portfolio.

The deal is expected to close by the end of 2025.


Nvidia launches NVLink Fusion to connect any CPU with its GPU, AI stack

Nvidia is allowing you to bring your own CPU and custom processors to connect to its GPUs and AI infrastructure via NVLink Fusion. The upshot is that Nvidia is happy to open up its AI factory architecture so it can develop its ecosystem.

Speaking at his Computex 2025 keynote, Nvidia CEO Jensen Huang unveiled NVLink Fusion. NVLink Fusion allows cloud providers and presumably sovereign AI efforts and ultimately private infrastructure to use any ASIC or CPU to scale out Nvidia GPUs. For cloud providers like AWS, Google Cloud and Microsoft, NVLink Fusion gives them the option to couple their custom CPUs with Nvidia.

Initially, MediaTek, Marvell, Alchip Technologies, Astera Labs, Synopsys and Cadence are the early adopters of NVLink Fusion for their custom silicon. Qualcomm also announced its data center efforts and moves to integrate its CPUs into Nvidia infrastructure. Fujitsu and Qualcomm CPUs can also be integrated with Nvidia GPUs via NVLink Fusion.

The move makes sense on many fronts, but Huang summarized the strategic importance of NVLink succinctly. Huang said:

"Nothing gives me more joy than when you buy everything from Nvidia. I just want you guys to know that, but it gives me tremendous joy when you buy anything from Nvidia."

Bottom line: Nvidia will offer its fully integrated AI stack, but will also disaggregate it since in the long run the GPUs, platform and ecosystem plays are more important.


Constellation Research analyst Holger Mueller said:

"Nvidia once again acknowledges the importance of the network for AI. The speed and efficiency how data is served to the precious and inexpensive CPUs is what matters. With allowing  partners to work with the NVLink network Nvidia prioritizes the network over its own inhouse designs - which is true to its DNA as component vendor."

Two major players missing from the NVLink Fusion announcement are Broadcom and AMD. Qualcomm CEO Cristiano Amon said partnering with Nvidia advances its efforts into the data center.

NVLink Fusion can connect custom CPUs and ASICs to GPUs via Nvidia ConnectX-8 SuperNICs, Nvidia Spectrum-X Ethernet and Nvidia Quantum-X800 InfiniBand switches, with co-packaged optics in the near future.

