Editor in Chief of Constellation Insights
Constellation Research
Larry Dignan is Editor in Chief of Constellation Insights at Constellation Research, where he leads editorial coverage focused on enterprise technology, digital transformation, and emerging trends shaping the future of business. He oversees research-driven news, analysis, interviews, and event coverage designed to help technology buyers and vendors navigate complex markets with clarity and context. ...
Softbank said it will acquire chipmaker Ampere Computing in a $6.5 billion all-cash deal. Oracle and Carlyle, Ampere's primary investors, have agreed to sell their stakes.
Carlyle owned 59.65% of Ampere, Oracle owned 32.27% and Arm owned 8.08%. Arm is a subsidiary of Softbank Group and Ampere is a licensee of Arm.
According to Softbank, Ampere, founded in 2018, will operate as a wholly owned subsidiary of the company. Softbank has been doubling down on AI infrastructure: it has invested in the Stargate project and Cristal Intelligence in partnership with OpenAI, and has venture funding spread across multiple startups.
Ampere designs high-performance, energy-efficient processors for cloud and AI workloads. Softbank said: "Ampere's expertise in developing and taping out ARM-based chips can be integrated, complementing design strengths of Arm Holdings."
Softbank CEO Masayoshi Son said in a statement that Ampere accelerates its vision for "Artificial Super Intelligence" and "deepens our commitment to AI innovation in the United States."
Ampere's processor families are Altra, Altra Max and AmpereOne. The company has focused on cloud-native, energy-efficient processing, but has pivoted to AI workloads too.
Nvidia's GTC conference kicked off with a long keynote from CEO Jensen Huang, a roadmap extending into 2028 and an integrated AI stack that's hard for rivals to match.
Here's a look at the questions that are lingering after GTC kicked off.
Can Nvidia's cadence keep demand going?
Nvidia's ability to cannibalize itself with an annual cadence and dangle enough value and performance to convince customers to upgrade has been impressive.
What's unclear is whether this roadmap can keep being Nvidia's greatest trick. Nvidia CEO Jensen Huang joked during his keynote that his salespeople aren't going to be happy that he keeps dissing Hopper, the GPU that arguably started an AI boom. But Blackwell is way better. Blackwell Ultra will be better than Blackwell. Vera Rubin, Rubin Ultra and Feynman will all be better than what was there a year before.
Huang's bet is that AI will lead to a continuing scale up and scale out cycle for AI factories. Once you scale up, scaling out will lead to better cost of ownership. "Rubin will bring costs down dramatically," said Huang.
Here's the catch: Agentic AI will lead to more AI infrastructure. Cheaper models will bring more consumption, as will enterprise use cases. The wrinkle is that Nvidia's big customers--AWS, Microsoft Azure, Google Cloud and Meta--all are building custom silicon to lessen their dependence on Nvidia. Can hyperscalers catch up, and do they even have to if good enough AI infrastructure becomes the norm? Nvidia's answer is that it can deliver performance and value faster.
Can DeepSeek and cheaper models add to Nvidia's moat?
Nvidia's GTC opener revolved around reasoning models and ways to scale. There's a good reason for that--Wall Street is worried that reasoning models will lessen the need to spend on Nvidia gear.
The jury is still out on the DeepSeek impact, but I'd call the impact on Nvidia mostly a coin flip. Cheaper models may speed up enterprise usage and benefit Nvidia. Or cheaper models may mean good enough AI infrastructure means the latest GPU can wait.
The short answer is that Nvidia's AI factory vision is going to be reality. The debate is over timing and whether there will be hiccups or overcapacity at some point.
Nvidia's roadmap is public and on a one-year rhythm because you need time to plan AI factories. You need energy, which is the gating factor for AI, as well as land and all of this stuff that goes beyond infrastructure.
Nvidia has a roadmap to Gigawatt AI factories. How fast that road gets paved remains to be seen.
Is Nvidia now the de facto enterprise infrastructure provider?
If you believe that AI will be at the center of every workload, it's a no-brainer to think that Nvidia will power most data centers. There's a reason that Nvidia has expanded so heavily into networking and even desktops. It wants to offer you the full stack.
It remains to be seen whether enterprises build out on-prem AI operations, but that's why Nvidia is also focused on software and open-sourcing models. Its Llama-based models tailored for industry use cases will be used by SAP, ServiceNow and others.
Whether the Nvidia stack becomes the enterprise stack remains to be seen, but I wouldn't rule it out. GM is betting on Nvidia for its AI factory and the industry references cited by Huang are impressive.
All you have to do is look at Nvidia's networking and storage plans to realize the company has more on its mind than GPUs. The key vendors in compute, storage and networking are all following Nvidia's lead.
How long until Nvidia's robotics vision becomes reality?
Huang spent a lot of time talking about models for robotics and the future.
Nvidia's bet is that there will be billions of digital workers to collaborate with humans, there will be a shortage of employees and robots will fill that gap. Robots are likely to be less expensive, but don't bet against a $50,000 annual cost.
There was some evidence that Nvidia's autonomous vehicle business was gaining traction in its most recent quarter. Robots--even the humanoid variety--may be here closer than you'd think due to models that can do a lot more than language. Watch Nvidia's physical AI push closely since it's the enabler for robotics going forward.
How underappreciated is Nvidia's software stack?
Yes, Nvidia pays the bills with accelerated computing systems, but its software stack is what maintains the company's dominance.
Aside from the bevy of models to advance various enterprise use cases, Nvidia Dynamo is a sleeper hit of GTC. Huang said Dynamo is the "operating system of the AI factory."
Dynamo separates the processing and generation phases of large language models on different GPUs. Nvidia said Dynamo optimizes each phase to be independent and maximize resources.
By breaking various AI workloads up and optimizing compute, Dynamo may become the enabler for Nvidia's entire stack. When running the DeepSeek-R1 model on a large cluster of GB200 NVL72 racks, Dynamo boosts the number of tokens by 30x per GPU.
Oracle and Nvidia expanded their partnership in a move that will bring Nvidia AI Enterprise, Nvidia Blackwell GB200 NVL72 and agentic AI blueprints to Oracle Cloud Infrastructure.
Constellation Research analyst Holger Mueller said the expanded partnership between the two companies makes sense.
"Oracle is working hard to become the premier place for enterprises to tap into Nvidia resources. The strategy is key to further building out Oracle’s lead in OCI and transactional enterprises. And as Oracle has not announced any plans to build custom AI chips, they are a naturally preferred partner for Nvidia. Joint customers will welcome this announcement."
Indeed, Oracle's cloud infrastructure (OCI) is being used by enterprise and hyperscale customers for training. Details of the expanded partnership include the following:
Oracle said that Nvidia AI Enterprise will be available natively in OCI Console. The move will reduce the time to deploy the service and provide direct billing and support. The OCI Console with Nvidia AI Enterprise will be available in Oracle's distributed cloud.
OCI customers will have access to more than 160 tools for training and inference as well as Nvidia NIM microservices.
OCI will be among the first cloud providers to offer customers the next-gen Nvidia Blackwell chips. Specifically, OCI is now offering Nvidia Blackwell GB200 NVL72 on OCI Supercluster with up to 131,072 Nvidia GPUs.
Larry Ellison, CTO of Oracle, teased the Nvidia-based supercluster when the company reported third quarter earnings. Ellison was touting Oracle's strong infrastructure as a service growth.
Ellison said:
"AI training and multi-cloud database are experiencing hyper growth. We are in the process of building a gigantic 64,000 GPU, liquid-cooled NVIDIA GB 200 cluster for AI training. Our multi-cloud business at Amazon, Google and Microsoft grew 200% in the last three months alone. But in addition to these rapidly growing existing businesses, new customers and new businesses are migrating to the Oracle Cloud at an unprecedented rate.
The capability we have is to build these huge AI clusters with technology that actually runs faster and more economically than our competitors."
Oracle said it is taking orders for its AI supercomputer with Nvidia Blackwell Ultra GB300 GPUs.
The two companies said they will enable vector embeddings and vector indexes in AI Vector Search workloads in Oracle Database 23ai using Nvidia GPUs.
OCI AI Blueprints will provide no-code deployment recipes without manually provisioning infrastructure. OCI AI Blueprints will reduce GPU onboarding time with hardware recommendations, Nvidia NIM and observability tools.
Nvidia NIM will be available directly in OCI Data Science for real-time AI inference use cases.
Nvidia launched DGX Spark, formerly Project Digits, and DGX Station as it aims to bring AI supercomputers to students, developers, researchers and data scientists.
Project Digits made a splash at CES 2025 and now Nvidia is making good on its expansion plans. Nvidia is hoping to enable users to develop and run models locally before uploading them to the cloud for production.
DGX Spark and DGX Station will run on Nvidia's Grace Blackwell architecture that powers data centers. Asus, Dell, HP and Lenovo will build DGX Spark and DGX Station devices.
Nvidia CEO Jensen Huang said DGX Spark and Station are a "new class of computers." "With these new DGX personal AI computers, AI can span from cloud services to desktop and edge applications," said Huang.
Here's a look at the details of the DGX systems.
DGX Spark
DGX Spark runs on the Nvidia GB10 Grace Blackwell Superchip that features a Blackwell GPU, fifth-gen Tensor Cores and FP4 support.
DGX Spark will deliver up to 1,000 trillion operations per second of AI compute for fine-tuning and inference with the latest AI reasoning models.
The device will include models such as Nvidia Cosmos Reason and Nvidia GR00T N1.
The GB10 Superchip uses NVIDIA NVLink-C2C interconnect technology to deliver a CPU+GPU-coherent memory model with 5x the bandwidth of fifth-generation PCIe.
Reservations for DGX Spark are open at Nvidia's site.
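The NVLink-C2C bandwidth figure above can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes the 5x comparison is against a bidirectional x16 PCIe Gen5 link; Nvidia doesn't spell out the baseline, so treat this as a rough check, not a spec.

```python
# Back-of-envelope check of the "5x PCIe Gen5" bandwidth claim.
# Assumption: the baseline is a bidirectional x16 PCIe 5.0 link.
GT_PER_S = 32e9          # PCIe 5.0 signaling rate per lane
ENCODING = 128 / 130     # 128b/130b encoding: usable bits per bit on the wire
LANES = 16

bytes_per_dir = GT_PER_S * ENCODING * LANES / 8   # bytes/s, one direction
gbps_per_dir = bytes_per_dir / 1e9
print(f"PCIe Gen5 x16, one direction: {gbps_per_dir:.0f} GB/s")   # ~63 GB/s
print(f"Bidirectional: {2 * gbps_per_dir:.0f} GB/s")              # ~126 GB/s
# 5x the bidirectional figure would put NVLink-C2C in the ballpark of:
print(f"5x bidirectional: {5 * 2 * gbps_per_dir:.0f} GB/s")       # ~630 GB/s
```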
DGX Station
Nvidia's DGX Station is the first desktop built with the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip.
DGX Station has 784GB of coherent memory space.
The device has Nvidia's ConnectX-8 SuperNIC with support for networking at up to 800Gb/s.
DGX Station has Nvidia's CUDA-X AI platform, access to NIM microservices and AI Enterprise.
DGX Station will be available from Asus, BOXX, Dell, HP, Lambda and Supermicro later this year.
Nvidia launched a family of open reasoning AI models designed for agentic AI as well as new world foundation models.
The company launched the Nvidia Llama Nemotron reasoning models that are designed for on-demand AI reasoning. Nvidia took the Llama models and enhanced them during post training to improve multistep math, coding, reasoning and complex decision-making.
According to Nvidia, the refinements made to Llama boosted accuracy by 20% compared to the base model and optimized inference speed by 5x. Llama Nemotron models land with support from a variety of partners including Accenture, CrowdStrike, Microsoft, SAP and ServiceNow.
Llama Nemotron models are available as Nvidia NIM microservices in Nano, Super and Ultra sizes for various deployments.
Nano is geared toward PCs and edge devices.
Super is designed for the best accuracy and throughput on a single GPU.
Ultra is designed for multi-GPU servers.
Nvidia's bet is that by open sourcing tools, datasets and post-training optimization, enterprises will build custom reasoning models.
While Llama Nemotron is focused on agentic AI, Nvidia is also pushing into physical AI models for robotics. Nvidia launched a set of new Cosmos world foundation models.
The Cosmos models include:
Cosmos Transfer, which ingests structured video inputs such as maps, depth maps and lidar scans to create photoreal video outputs. Cosmos Transfer will streamline AI training as well as simulation and ground-truth generation.
Cosmos Predict, which will enable multi-frame generation and predict intermediate actions.
Cosmos Reason, a world foundation model that will offer chain-of-thought reasoning in natural language.
For good measure, Nvidia announced Isaac GR00T N1, a humanoid robot foundation model.
Isaac GR00T N1 brings generalized skills and reasoning to humanoid robots.
Nvidia is surrounding the model with simulation frameworks and blueprints. The company said the Isaac GR00T Blueprint for generating synthetic data as well as Newton, an open-source physics engine, will go with the new humanoid robotics model. Google DeepMind, Nvidia and Disney Research will collaborate on Newton.
Isaac GR00T N1 has a dual system architecture including a fast thinking action model and a slow thinking one that's for deliberate decisions. Key points:
System 2 is powered by a vision language model that reasons about its environment and instructions to plan action.
System 1 then translates system 2 data into robot movements. System 1 is trained on human demonstration data and synthetic data.
Isaac GR00T N1 can generalize tasks such as grasping and moving objects.
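The dual-system split above can be sketched in miniature. Everything in this example is illustrative (the function names, plan steps and motor commands are invented, not Nvidia's API): a slow vision-language planner (System 2) produces discrete steps, and a fast policy (System 1) maps each step to low-level commands.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    image_desc: str   # stand-in for camera pixels
    instruction: str

def system2_plan(obs):
    # Slow "thinking" loop: reason about the scene and instruction and
    # produce a discrete plan. A real VLM planner runs at a few Hz.
    if "cup" in obs.image_desc and "move" in obs.instruction:
        return ["locate cup", "grasp cup", "move cup to target", "release"]
    return ["idle"]

def system1_act(step):
    # Fast reactive policy: map a plan step to motor commands. A real
    # System 1 is a learned model running at control rate (~100 Hz),
    # trained on human demonstrations and synthetic data.
    commands = {
        "locate cup": "servo: center gripper over object",
        "grasp cup": "gripper: close",
        "move cup to target": "arm: follow trajectory to target",
        "release": "gripper: open",
        "idle": "hold position",
    }
    return commands[step]

obs = Observation(image_desc="a cup on the table", instruction="move the cup")
for step in system2_plan(obs):            # System 2: deliberate planning
    print(f"{step} -> {system1_act(step)}")  # System 1: reactive execution
```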
Nvidia launched Blackwell Ultra, which aims to boost training and test time inference, as the GPU giant makes the case that more efficient models such as DeepSeek still require its integrated AI factory stack of hardware and software.
The company also launched Dynamo, an open-source framework that disaggregates the AI reasoning process to optimize compute. For good measure, Nvidia laid out its plans for the next two years.
The leadup to Nvidia GTC, where Blackwell Ultra was announced by CEO Jensen Huang, has been interesting. The rise of DeepSeek and models that can efficiently reason at lower costs created some doubt about whether hyperscalers would need to spend heavily on Nvidia's stack.
Huang said that Blackwell Ultra boosts training and test-time scaling inference. The idea is that applying more compute during inference improves accuracy and paves the way for AI reasoning, agentic AI and physical AI. "Reasoning and agentic AI demand orders of magnitude more computing performance," said Huang, who noted that Blackwell Ultra is a versatile platform for pre-training, post-training and reasoning AI inference. "The amount of computation we need at this point as a result of agentic AI as a result of reasoning, is easily 100 times more than we thought we needed."
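The test-time scaling idea (spending more inference compute to get better answers) can be illustrated with its simplest form, best-of-N sampling with majority voting. The toy "model" below is a stand-in invented for illustration, not an Nvidia component:

```python
import random
from collections import Counter

def noisy_model(prompt, rng):
    # Toy stand-in for an LLM: returns the right answer only 60% of the time.
    return "42" if rng.random() < 0.6 else str(rng.randint(0, 9))

def best_of_n(prompt, n, seed=0):
    # Test-time scaling: spend n samples of inference compute, then
    # majority-vote the answers (self-consistency). Larger n trades
    # compute for accuracy, which is the bet behind reasoning inference.
    rng = random.Random(seed)
    answers = [noisy_model(prompt, rng) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

print(best_of_n("meaning of life?", 1))    # one sample: often wrong
print(best_of_n("meaning of life?", 25))   # more compute: almost surely "42"
```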
Nvidia's Huang in his keynote at Nvidia GTC made the case that Blackwell shipments are surging and that the insatiable demand for AI compute--and Nvidia's stack--continues. Huang said Nvidia's roadmap is focused on building out AI factories and laying out investments years in advance. "We don't want to surprise you in May," said Huang.
In a nutshell, Nvidia is sticking to its annual cadence while keeping the same chassis. The roadmap consists of the following:
Vera Rubin in second half of 2026.
Rubin Ultra in second half of 2027.
Huang said Nvidia's annual cadence is about "scaling up, then scaling out."
To that end, Huang noted that Nvidia's roadmap will require bets on networking and photonics.
Key points about Blackwell Ultra include:
The platform is built on the Blackwell architecture launched a year ago.
Blackwell Ultra includes Nvidia GB300 NVL72 rack-scale system and the Nvidia HGX B300 NVL16 system.
Nvidia GB300 NVL72 connects 72 Blackwell Ultra GPUs and 36 Arm Neoverse-based Nvidia Grace CPUs. That setup enables AI models to tap into compute to come up with different solutions to problems and break down requests into steps.
GB300 NVL72 has 1.5x the performance of its predecessor.
Nvidia argued that Blackwell Ultra can increase the revenue opportunity by 50x for AI factories compared to Hopper.
GB300 NVL72 will be available on DGX Cloud, Nvidia's managed AI platform.
Nvidia DGX SuperPOD with DGX GB300 systems use the GB300 NVL72 rack design as a turnkey architecture.
Blackwell Ultra is aimed at agentic AI, which will need to reason and act autonomously, and physical AI, which is critical to robotics and autonomous vehicles.
To scale out Blackwell Ultra, Nvidia said the platform will integrate with its Nvidia Spectrum-X Ethernet and Nvidia Quantum-X800 InfiniBand networking systems. Cisco, Dell Technologies, HPE, Lenovo and Supermicro are vendors that will offer Blackwell Ultra servers in addition to a bevy of contract equipment providers. Cloud hyperscalers AWS, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will offer Blackwell Ultra instances along with specialized GPU providers such as CoreWeave, Crusoe, Nebius and others.
Constellation Research analyst Holger Mueller said:
"Nvidia is doubling down on its platform with Blackwell Ultra but also the software and storage stack. At the same time Nvidia knows it has to fight to stay in the cloud data center as the cloud vendors are building out their inhouse AI platforms. Robotic automation--creating workloads for Nvidia--is another strategy that Huang and team are pursuing.
Huang threw chipmaking in a tizzy announcing Blackwell in a one-year cycle from Hopper, unheard of in chipmaking. Nvidia has delivered.
The question is how Nvidia's plans stack up vis-a-vis the cloud vendors' in-house plans. Do AWS and Microsoft have a chance? Does Nvidia cut into Google's TPU lead? If Nvidia can have AWS and Microsoft give up building their custom chips it's a mega win."
Dynamo: An open-source inference framework
Blackwell-powered systems will include Nvidia Dynamo, which is designed to scale up reasoning AI services. Nvidia Dynamo is designed to maximize token revenue generation and orchestrate and accelerate inference communication across GPUs. Huang said Dynamo is the "operating system of the AI factory."
Dynamo separates the processing and generation phases of large language models on different GPUs. Nvidia said Dynamo optimizes each phase to be independent and maximize resources.
Key points about Dynamo include:
Dynamo succeeds Nvidia Triton Inference Server.
By disaggregating workloads, Dynamo can double the performance of AI factories. Dynamo features a GPU planning engine, an LLM-aware router to minimize repeating results, low-latency communication library and a memory manager.
When running the DeepSeek-R1 model on a large cluster of GB200 NVL72 racks, Dynamo boosts the number of tokens by 30x per GPU.
Dynamo is fully open source and supports PyTorch, SGLang, Nvidia TensorRT-LLM and vLLM.
Dynamo maps the knowledge that inference systems hold in memory from serving prior requests (KV cache) across thousands of GPUs. It then routes new inference requests to GPUs that have the best match.
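The KV-cache-aware routing described above can be sketched as follows. This is an illustrative toy, not Dynamo's actual API: each worker remembers which token prefixes it has cached from prior requests, and a new request is routed to the worker with the longest matching cached prefix, so its prefill work can be reused.

```python
def shared_prefix_len(a, b):
    # Length of the common token prefix of two sequences.
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

class KVAwareRouter:
    def __init__(self, workers):
        # worker -> list of token sequences whose KV entries it has cached
        self.cache = {w: [] for w in workers}

    def route(self, tokens):
        # Pick the worker that can reuse the most cached prefill work.
        def best_overlap(w):
            return max((shared_prefix_len(tokens, c) for c in self.cache[w]),
                       default=0)
        worker = max(self.cache, key=best_overlap)
        self.cache[worker].append(tokens)   # it now caches this prompt too
        return worker

router = KVAwareRouter(["gpu0", "gpu1"])
router.cache["gpu0"].append([1, 2, 3, 4])   # gpu0 has served this prompt
print(router.route([1, 2, 3, 9]))           # prints gpu0: reuses its cached prefix
```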
Adobe laid out its customer experience vision that includes multiple purpose-built AI agents, automated experiences that can adapt on the fly and scaling personalization.
That vision, unveiled at Adobe Summit, is aimed at creating a stack that includes agentic AI, automation and a platform that addresses creative marketing, the content supply chain and a unified customer experience.
Among the key Adobe Summit announcements:
Adobe launched the Adobe Experience Platform Agent Orchestrator, which includes prebuilt agents, Adobe Brand Concierge for B2B and B2C and a partner ecosystem.
The company also launched a series of GenStudio updates for the content supply chain. Adobe outlined GenStudio for Performance Marketing, GenStudio Foundation, Firefly Services, Firefly Creative Production and Express for Business. These tools do everything from activating content and curating and surfacing content options to streamlining reviews and using Firefly services without code.
Adobe launched new apps including Adobe Journey Optimizer, Experimentation Accelerator, AEM Sites Optimizer, Commerce Optimizer and additions to the company's B2B portfolio. These optimization tools are integrated with Adobe's customer data platform and include integration with content and campaign workflows and analytics.
The company also announced B2B applications including B2B Agents and enhancements for Adobe Journey Optimizer and Customer Journey Analytics. The idea is to bring B2B data, content and journey orchestration together to cover each step of go-to-market operations.
Here's a look at how Adobe will deploy agents and orchestration throughout its platform.
Across the platform, there are 10 prebuilt agents. Adobe is going for the use cases--site optimization, content production, workflow orchestration, data engineering and insights and journeys--that deliver the returns for its customer base. The Experience Platform Agent Orchestrator includes multi-agent collaboration, a reasoning engine and CX language models.
Bottom line: Adobe's move to weave agentic AI throughout its platform and target markets can create what the company calls One Adobe. Dan Durn, EVP and CFO, explained the big picture on Adobe's first quarter earnings call.
"Adobe’s business has grown over the last decade by delivering world-class products grouped within three clouds: Creative Cloud, Document Cloud and Experience Cloud. In parallel, we have continued to expand cross-cloud offerings to better serve different Customer Groups. Examples include Acrobat which is reflected in Creative Cloud and Document Cloud; GenStudio which includes Creative Cloud, Express, Firefly Services and Experience Cloud; Enterprises who want to engage with One Adobe and combine Creative seats with marketing automation; and increasingly Acrobat and Express.
We believe Adobe’s success will be driven by innovation in service of both Business Professionals and Consumers and Creative and Marketing Professionals. Reporting insights and the financial performance across these customer groups will provide a clear view of Adobe’s execution against our strategy."
Google has acquired Wiz for $32 billion in an all-cash deal that will add to Google Cloud's revenue growth going forward.
The two companies were reportedly in talks about a deal valued at $23 billion a year ago. With the move, Google Cloud is leaning into cybersecurity since it already owns Mandiant.
According to Google, the Wiz purchase will give Google Cloud the ability to combine AI and cloud security across multiple clouds.
Wiz has a security platform that connects to multiple clouds and code environments including AWS, Microsoft Azure and Oracle Cloud. The company caters to companies and organizations of multiple sizes. Google said the combination of Wiz and Google Cloud will automate security at scale, lower costs, use AI to protect against new threats and respond to breaches and boost adoption.
Thomas Kurian, Google Cloud CEO, laid out the rationale for the Wiz purchase on a conference call:
"With Wiz, we believe we will vastly improve how security is designed, operated and automated, providing an end to end security platform for customers to prevent, detect and respond to incidents across all major clouds and code environments. Wiz is already an important Google Cloud partner. Wiz was recognized as our security partner of the year, and by coming together, we believe we can help customers create a stronger foundation for cloud security with a portfolio that solves for tomorrow's requirements. Our vision is to bring each of our unique strengths together to offer customers and partners a highly differentiated, Unified Security Platform."
When the deal closes, Google Cloud will include Google Threat Intelligence, Google Security Operations and Mandiant Consulting as well as Wiz. Kurian said Wiz will remain committed to offering multi-cloud security.
Constellation Research's take
Chirag Mehta, analyst at Constellation Research covering cybersecurity, said:
"Google's acquisition of Wiz underscores the growing importance of cloud security, particularly as enterprises accelerate large AI workloads into the cloud. However, Google faces considerable challenges in successfully closing this acquisition. The $3.2 billion termination fee—nearly 10% of the deal's total value—provides Wiz with significant protection, reflecting potential regulatory uncertainty that previously disrupted similar negotiations. Although Wiz has been a strategic partner to Google Cloud, a substantial portion of its customer base currently uses Azure and AWS.
Google’s experience with multi-cloud offerings through previous acquisitions, such as Looker, will be beneficial. Nevertheless, delivering cloud-native security solutions on cloud platforms outside its direct control introduces an entirely new set of complexities. Google's ability to navigate these challenges—while retaining Wiz’s existing Azure and AWS customers—will be crucial to making this acquisition successful in the long term."
"If you are currently a Wiz customer but not on Google Cloud, we strongly recommend you to engage proactively with Wiz and Google to understand the roadmap for continued support and integration on non-Google Cloud environments. Given the regulatory uncertainty around this acquisition, customers running workloads primarily on Azure or AWS should ensure they have clear contingency strategies to safeguard their cybersecurity investments.
If you're a Google Cloud customer but not currently using Wiz, we encourage you to evaluate Wiz’s capabilities alongside Google Cloud’s native as well as integrated third-party cybersecurity offerings, as cybersecurity is set to become a strategic focus and significant investment priority for Google Cloud. This acquisition signals deeper, more integrated cybersecurity capabilities, as Google Cloud continues to grow."
According to Google Cloud, the addition of Wiz will create a unified security platform, threat intelligence and new threat protection along with AI agents.
Zoom Communications is adding a set of agentic AI capabilities to Zoom AI Companion so it can take actions across its platform for collaboration and customer experience.
With the move, Zoom becomes the latest vendor to put its spin on AI agents. Zoom AI Companion will gain the ability to take action and orchestrate tasks, and enterprises will be able to create custom agents.
The obvious move for Zoom includes agentic AI across Zoom Meetings, Zoom Phone, Zoom Team Chat, Zoom Docs, and Zoom Contact Center. The company is also adding customer experience capabilities including Virtual Agent for voice, AI intent routing and Advanced Quality Management.
However, Zoom is also extending Zoom AI Companion to work with third party agents and specifically mentioned an integration with ServiceNow. Zoom noted that AI Companion will know when to work with third-party and custom agents to complete tasks.
According to Zoom, the Custom AI Companion add-on, which will be available in April for $12 per user per month, will be able to work with Zoom's small language models as well as third-party large language models.
Zoom said its new small language models "are trained with extensive multilingual data, optimized for specific tasks to perform complex actions, and well-positioned to facilitate multi-agent collaboration."
Although Zoom is updating its own platform with agentic AI, it's clear the company is laying the groundwork to be a player in the broader orchestration layer. Zoom launched Zoom Drive, which will be a central repository for productivity assets across Zoom Workplace.
Here’s the Zoom AI Companion stack with bring-your-own model capability being offered in the future.
Zoom is also launching industry-focused versions of Workplace for Frontline, Clinicians and Education. Zoom Workplace for Frontline will launch in April and Zoom Workplace for Clinicians will be delivered at the end of March.
Here's the rundown of what Zoom announced for AI Companion:
AI Companion will be able to manage calendars, schedule meetings, develop clips quickly and assist on writing.
Zoom said AI Companion will extend to specialized agents behind Zoom Business Services. Zoom said AI Studio will be able to create customized virtual agents. Zoom Revenue Accelerator will launch in the months ahead.
The company said its platform will be able to interact with third-party agents.
Enterprises will be able to create custom agents that work with third-party agents on service, sales, IT and HR requests.
Custom AI Companion add-on will be able to create custom meeting templates and dictionaries, and access custom meeting summaries for use cases.
Zoom Tasks with AI Companion will work across the platform to connect tasks together across Zoom Workplace.
AI Companion will get live notes for Meetings and Phone and can generate voicemail summaries and support Zoom for Microsoft Teams app.
Zoom Docs will have advanced references and queries via AI Companion. AI Companion can also automatically create data tables.
Kore.ai launched a platform to develop, deploy and manage agentic AI applications as a bevy of players race to become the orchestration layer for AI agents.
The Kore.ai Agent Platform aims to enable enterprises to create AI agents with various ranges of autonomy and connect them to applications via more than 100 prebuilt connectors for structured and unstructured data.
With its Agent Platform, Kore.ai is looking to tailor agents from guided agents to autonomous systems.
The platform includes:
Search and data AI that brings context into conversations via RAG, hybrid keyword and multi-vector weighted search, query pipelines and context enhancement.
Tools to design conversational workflows, build agentic apps and manage the system.
Multi-agent orchestration with routing, context switching and business rules.
A range of options for autonomy.
Support for any AI model, system, data source or cloud environment.
Prompt and evaluation studios to refine agent behavior.
Agent Platform SDK to extend the platform and design custom agents.
Agent Protocol, which is a standardized API for agents on various platforms to communicate.
Observability features to explain AI decisions.
AI Agents Marketplace with pre-built templates for agents and connectors. Kore.ai is focusing on industry templates for banking, healthcare, retail, HR, IT and recruiting.
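The hybrid keyword and multi-vector weighted search in the list above can be illustrated with a toy scorer. The function names, weights and corpus below are invented for illustration, not Kore.ai's API: a keyword-match score and a cosine-similarity score are blended with tunable weights to rank documents.

```python
import math

def keyword_score(query, doc):
    # Fraction of query terms that appear in the document.
    terms = query.lower().split()
    words = set(doc.lower().split())
    return sum(t in words for t in terms) / len(terms)

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_search(query, query_vec, corpus, w_kw=0.4, w_vec=0.6):
    # corpus: list of (doc_text, doc_embedding); weights are tunable knobs.
    scored = [(w_kw * keyword_score(query, text)
               + w_vec * cosine(query_vec, vec), text)
              for text, vec in corpus]
    return [text for _, text in sorted(scored, reverse=True)]

corpus = [
    ("reset your password in settings", [0.9, 0.1]),
    ("quarterly revenue report",        [0.1, 0.9]),
]
print(hybrid_search("password reset", [0.8, 0.2], corpus)[0])
```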
Kore.ai is also using its Agent Platform to unify its portfolio. AI for Service is aimed at customer experience, while AI for Work is focused on employee productivity. The company also has AI for Process to automate processes and operations.