
The art, ROI and FOMO of 2025 AI budget planning

Artificial intelligence budgets will surge again in 2025, but good luck tracking expenditures with any precision as generative AI spending is lumped into other categories and driven by multiple departments.

Yes folks, it's 2025 budget season, and the biggest question from CxOs at our BT150 meetup in late September was whether there will be a dedicated budget for AI. As in 2024, 2025 AI budgets will be spread across multiple departments and tucked away in other areas like compliance and cybersecurity.

Why is AI spending so murky? CxOs in our BT150 meetup noted that traditional budget processes and frameworks don't align with AI science projects and unpredictable costs.

Constellation Research's BT150 meetup highlights how AI budgets are evolving. The only certainty is that enterprises plan to spend more. Constellation Research's AI Survey of 50 CxOs found that 79% of respondents are increasing AI budgets and 32% see budgets increasing 50%.

These budget increases are coming even though returns on investment have been spotty. Forty-two percent of respondents said they have deployed AI in production but haven't seen ROI. Another 29% said they've seen modest ROI.

But what is an enterprise going to do? Are companies really going to go on record saying they aren't going to spend on AI? Fears of missing out on AI and worries about forever falling behind the innovation curve are real. However, FOMO isn't much of a strategy, just as hope isn't.

Rational AI spending and dedicated budgets will be a 2026 story. For now, CxOs in our network (in meetups held under the Chatham House Rule) are noting the following:

  • Companies are not creating separate AI budgets, but incorporating components into existing business cases and projects.
  • There's a trend toward allocating enough funds to continue AI initiatives without committing massive, dedicated budgets that would require extensive justification and scrutiny.
  • The focus in 2025 is practical AI applications with ROI.
  • Funds for AI are being allocated from other areas such as regulatory compliance that may have decreased in priority.
  • There's an emphasis on information gathering and staying informed about AI developments. Companies are investing time and resources in discussions, debates, and learning about AI capabilities and risks, even if this doesn't directly translate to large dollar expenditures.
  • AI learning and training is getting more budget as enterprises look to upskill.

Simply put, AI budgets in 2025 will either poach from existing areas or be lumped into broader spending efforts. This game won't be as easy as it was in 2024, when CxOs could AI-wash damn near any project.

For context on budgets, I recently caught up with BT150 member Ashwin Rangan, who has been in the CxO game for three decades at ICANN, Rockwell International, Walmart and Bank of America. Rangan has seen his share of technology cycles. Here's what Rangan, currently Managing Director of the Insight Group, said about AI budgets and riding new technology waves.

First, Rangan noted that a lot of generative AI will be consumed in existing enterprise technology applications. That won't be new budget per se. On the other end of the spectrum there will be enterprises that see how generative AI can differentiate their businesses. They'll spend if the conditions--data, culture, talent--are in place. When budgeting for AI, enterprises need to focus and think through their FOMO and sometimes choose to hang back.

"If the ROI was clear up front, I would be quick out the gate," said Rangan, who noted he chose to be an early mover at Rockwell with SAP. "In other cases, I've chosen to wait with new technologies because while the technology looked promising, the return on investment was not necessarily as promising."

Rangan said genAI is developing so fast that first mover advantages may not last long because the roadblocks today may be resolved quickly. "The price you pay for waiting will not be high because we are all learning at the same time," he said.

Here is an early read from the BT150 and Constellation Research analysts on what'll drive the AI budget in 2025.

CRM. Salesforce's Agentforce pivot is going to garner some budget. The economics could be compelling. It's unclear whether Agentforce will be tucked into marketing, sales, customer experience or some other function, but AI agents are going to be everywhere.

Data management and analytics. Generative AI is seen as the big data quality and data management bailout. Enterprises, as always, need to find a way to extract value from datasets where quality might be low.

Compliance. If you want a project funded, just make sure it has some compliance component. This strategy worked well in 2024, and CxOs will rinse and repeat in 2025 to secure AI funds.

Automation and efficiency. AI hasn't always delivered on streamlining processes and automation, but reducing manual work and boosting productivity will always attract funding. IT efficiencies are being actively explored, and optimizing product development processes is also a priority.

Cybersecurity. AI is being integrated into security infrastructure for automation, threat detection and responses. It's likely that cybersecurity will claw back the budget that was lost to fund AI pilots.

Bottom line: 2025 budgets are just being formed, and enterprises are actively trying to separate AI marketing hype from real impact. Enterprises are also concerned about integration, AI agent and generative AI sprawl, and scalability. Nevertheless, enterprises are positioned to spend on AI because the risk of not investing is too great. There will be an enterprise AI spending reset at some point, but not today.


HPE launches AMD powered AI system and highlights broader strategy

Hewlett Packard Enterprise launched an AMD-powered system designed for complex AI model training. The HPE ProLiant Compute XD685 leverages 5th Gen AMD EPYC processors and AMD Instinct MI325X accelerators.

AMD launched its latest CPUs and GPUs at its AI event. A bevy of systems makers appeared on stage with AMD CEO Lisa Su.

HPE, which also had its own AI day with a focus on cooling and scaling generative AI, said its HPE ProLiant Compute XD685 is optimized for AI clusters for large language model training, natural language processing and multi-modal training. The system is available to order today and generally available in the first quarter of 2025.

The system highlights HPE's broader AI strategy, which revolves around targeting model makers and scale deployments with governments and enterprises, along with a focus on liquid cooling innovation and cluster management.

HPE said its AI market opportunity is $171 billion and that it can gain share with liquid-cooled servers, air-cooled servers, Ethernet network systems, storage and support.

According to HPE, HPE ProLiant Compute XD685 supports eight AMD Instinct MI325X accelerators and two AMD EPYC CPUs. It also offers both air and direct liquid cooling options. HPE is leveraging its Cray high performance computing knowhow as it expands into the AI market.

Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE, said its latest AMD system is designed to apply to multiple use cases and industries.

Key points about the HPE ProLiant Compute XD685:

  • The system has a modular 5U chassis that can accommodate a range of GPUs, CPUs, software, components and cooling.
  • HPE ProLiant Compute XD685 uses MI325X accelerators based on AMD's CDNA 3 architecture.
  • The system has a compact 8-nodes-per-rack arrangement to maximize rack density for 8-way GPU systems.
  • HPE Performance Cluster Management is included with automated setup.

At an investor event, HPE outlined its AI stack including its private cloud offering with Nvidia as well as GreenLake and architectures that scale well with energy efficiency.

HPE outlined how it will lean into its cooling innovation throughout its portfolio. This new AMD AI system is the first installment in how HPE will leverage cooling options to move gear.

In addition, HPE may offer a similar Private Cloud AI offering powered by AMD. Dell Technologies has already announced an AMD AI factory stack.


Event Report: Teradata Possible LA | With Constellation Analyst Doug Henschen

 

During its October event - Teradata Possible LA - Teradata announced BYO-LLM and GPU acceleration options, giving customers flexibility for #generativeAI #innovation.

Hear from Doug Henschen, VP & Principal Analyst at Constellation Research, as he gives an in-depth report LIVE from the event and unpacks the implications of Teradata's big announcements.

Watch on YouTube: https://www.youtube.com/embed/APtQbN9FZfw?si=Or6lEz3bh2_vEJOQ

Dell lays out AMD genAI systems, services

Dell Technologies launched new systems powered by AMD's next-generation EPYC server processors and the PowerEdge XE9680 system powered by AMD Instinct MI300X AI accelerators. Dell is also surrounding those AMD-powered AI systems with services and the chipmaker's software stack.

With the moves, Dell is basically providing AMD AI factory building blocks along with its Nvidia systems. Dell, HPE and SuperMicro are all benefiting from AI system demand and looking to differentiate on energy consumption, cooling and overall efficiency. AMD launched its next-generation EPYC and Instinct processors at its AI event. 

"We're trying to make sure that customers are able to take advantage of the most common AI toolsets and software that they'll use for their AI workloads," said Varun Chhabra, senior vice president of Dell's ISG and Telecom unit. He added that Dell and AMD have already tested and validated the new genAI systems to cut the time to value.

Chhabra said Dell is also surrounding those AMD systems with services to implement them. Dell is also expanding its Hugging Face partnership, which gives enterprises model choices for on-premises deployments, to AMD systems.

Here's a breakdown of Dell's AMD AI announcements.

  • Dell PowerEdge R6715 and R7715 servers with AMD 5th gen EPYC processors. A new chassis design provides enhanced air cooling for 50% more cores (192 cores/CPU) with dual 500W CPUs. Dell also added larger storage configurations and enhanced heat sinks. The company said the new systems provide a 7:1 consolidation ratio from the previous generation systems and up to 65% lower CPU energy cost.

  • Dell PowerEdge XE7745. These enterprise AI systems provide more GPU density in an air-cooled 4U chassis. The system adds twice the PCIe GPU capacity with options for a diverse set of AI accelerators and optimized cooling for up to 600W GPUs.
  • Dell Generative AI AMD systems. Dell said it will package AMD's AI accelerators with its ROCm and Omnia software as well as standards-based AI and machine learning frameworks. This stack will include the PowerEdge XE9680 with AMD MI300x GPUs and services.


AMD launches next-gen Instinct AI accelerators, 5th gen EPYC as it fortifies position as Nvidia counterweight

AMD launched its 5th Gen EPYC processor as well as its latest Instinct MI325X accelerators as it aims to gain AI workloads from inference to model training. The big takeaway is that AMD is well equipped to give Nvidia competition for AI workloads. 

The chipmaker said its MI325X platform will begin production in the fourth quarter with favorable performance vs. Nvidia's H200 GPUs. AMD also outlined its annual cadence as well as the roadmap heading into 2025.

Lisa Su, CEO of AMD, gave a closely watched keynote at its Advancing AI 2024 event. The launch of AMD's new enterprise CPUs and GPUs is critical given the chipmaker is best positioned to compete with Nvidia, which has dominated AI infrastructure. Su said that the data center AI accelerator market can hit $500 billion by 2028. "The data center and AI represent significant growth opportunities for AMD, and we are building strong momentum for our EPYC and AMD Instinct processors across a growing set of customers," she said.

AMD Instinct MI325X and what's ahead

Su said the next-gen Instinct GPU will have 256GB of HBM3E memory, 6TB/s of bandwidth and better overall performance compared to the previous MI300.

AMD added that the MI325X platform outperforms Nvidia H200 HGX for Meta Llama inference workloads and matches it for 8GPU training.

Su also said that the AMD Instinct MI355X accelerator is in preview for launch in the second half of 2025.

For AMD, the game is getting its GPUs in the hyperscale clouds--Google Cloud, Microsoft Azure and Oracle were on stage with Su live or by video--as well as with key infrastructure providers such as Dell Technologies, HPE and SuperMicro. These infrastructure providers are creating data center designs that can accommodate AMD and Nvidia with future proofed infrastructure. AMD also highlighted partners such as Databricks.

Making the EPYC case for the enterprise

Su's keynote focused on a key theme for the 5th Gen EPYC processor in the data center: Enterprise returns due to lower total cost of ownership as well as taking on inference workloads.

The latest EPYC server processor is billed as the best CPU for cloud, enterprise and AI workloads. The processor, formerly code-named Turin, has 150 billion transistors, up to 192 cores and up to 5GHz built on 3nm and 4nm technology.

For the enterprise, Su said the latest EPYC has up to 1.6x performance per core in virtualized infrastructure and up to 4x throughput performance for open-source databases and video transcoding.

As for inference workloads, Su said the latest EPYC processor has up to 3.8x the AI performance for machine learning and end-to-end AI.

The broader portfolio

Although Instinct and EPYC were the headliners, AMD had a bevy of other offerings to round out its AI portfolio. Here's a look:

  • AMD's CDNA Next architecture will be available in 2026. AMD also touted its ROCm software stack.
  • AMD expanded its DPU lineup with the AMD Pensando Salina DPU and AMD Pensando Pollara 400, the first Ultra Ethernet Consortium-ready NIC.
  • AMD launched AMD Ryzen AI PRO 300 Series processors, powering Microsoft Copilot+ laptops.


How Climate Tech and AI can Address Environmental Challenges | CR Sustainability Convos

During #ClimateWeekNYC, Constellation Research founder R "Ray" Wang had an engaging conversation with Sol Salinas, EVP and Sustainability Lead for Capgemini Americas, on the role of #sustainability, climate #tech, and #AI in addressing environmental challenges.

The discussion explored:

📌 Capgemini's significant partner ecosystem that supports clients in their sustainability efforts.
📌 The increasing commitment and investment in sustainability by organizations globally.
📌 The focus on circularity, waste reduction, and #technologies like small modular nuclear reactors and AI to drive sustainability.
📌 The importance of regulation, transparency, and overcoming greenwashing concerns around sustainability claims.

Watch the full interview to learn more about leveraging strategy, technology, and partnerships to advance sustainability goals.

Watch on YouTube: https://www.youtube.com/embed/BxvdD5WUcfo?si=aHhtj97JB84P5iiD

How Seattle Seahawks are approaching genAI, model choices with AWS


The Seattle Seahawks are using Amazon Bedrock, generative AI and other AWS services to distribute video and content faster with a focus on quick returns as well as long-tail opportunities. Here's a look at the project and lessons learned so far.

The generative AI project, which started this season, is part of a multi-year extension between the Seahawks and AWS, the team's official cloud, machine learning, AI and generative AI provider. Under the deal, the Seahawks are automating content distribution as well as transcribing, summarizing and distributing press conferences across multiple channels and languages.

AWS and the Seahawks will also integrate generative AI throughout business operations. The Seahawks and AWS first partnered in 2019 on NFL Next Gen Stats, which provides insights on player health, performance and scouting. Lumen Field, home of the Seahawks, is also a showcase for Amazon's Just Walk Out technology.

I caught up with Kenton Olson, Seattle Seahawks Vice President of Digital & Emerging Media, to walk through the generative AI content project and what's next.

The project. Olson said the Seahawks will publish more than 1,000 videos throughout the year. The goal was to speed up the time it takes to get videos from the creation team and editors to production. After the 2023 season, the Seahawks looked to accelerate the process with Olson's content team of 11, which focuses on digital content and platforms.

"We use multiple AWS products for everything from encoding and transcribing video to hosting," said Olson. "We were excited to use Amazon Bedrock to provide some automation to the videos we're shooting to save time and get stuff out faster."

For now, the Seahawks are focused on media availability videos and press conferences. The Seahawks will do about 300 press conferences throughout the year with players and coaches.

In the future, Olson said generative AI will provide an assist for podcasting and the entire video workflow. "As we move forward, we'll train the AI and make sure we tune it because every video is a little bit different," said Olson. "We started with press conferences and are learning."

    The process before and after. Olson said the previous process took about 45 minutes to an hour to take a video from creation to publishing and streaming across various channels. The video processing and publishing process had 60 steps. "We're now in a situation where once the video is submitted it's published in about 10 minutes in a worst-case scenario," said Olson. "That includes things like translating and providing a summary that would have taken us hours before."
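The summarize-and-translate step described above can be sketched against Amazon Bedrock's Converse API. This is a minimal illustration, not the Seahawks' actual pipeline; the model ID and prompt wording are assumptions.

```python
def build_summary_messages(transcript: str, language: str = "English") -> list:
    """Build a Converse-API message asking for a summary in the target language."""
    prompt = (
        f"Summarize this press-conference transcript in {language}, "
        "in 3-5 sentences suitable for a video description:\n\n" + transcript
    )
    return [{"role": "user", "content": [{"text": prompt}]}]


def summarize(bedrock_runtime, transcript: str, language: str = "English") -> str:
    """Send the transcript to a Bedrock model and return the generated summary."""
    response = bedrock_runtime.converse(
        # Placeholder model ID -- the article doesn't name the model in use.
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=build_summary_messages(transcript, language),
    )
    return response["output"]["message"]["content"][0]["text"]
```

A caller would pass `boto3.client("bedrock-runtime")` as the first argument once a transcription service such as Amazon Transcribe has produced the text.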

    Returns on investment. Olson said the initial return is time saved that frees his team up to think of new types of content to create. "We'd like our people thinking of new types of content not necessarily pushing buttons to publish something," said Olson.

    By the end of the season, Olson expects to save hours "so that our content creators can focus on creating other things for our fans and exposing that content."

    Another early return is that the Seahawks can provide more in-depth information with generative AI summaries that can give fans more opportunities to discover content in unique ways. "We're also excited to see how our search engine referrals and various components are improved by providing more rich metadata," said Olson.

    Longer term, Olson said generative AI can boost the returns of the Seahawks video archive, which will be critical since the franchise will soon enter its 50th season. Olson said:

    "We have done a good amount of work over the past couple of years to take old Betamax tapes off the shelf and digitize those. We don't have a lot of real good data on all those, and so we're working with AWS right now to figure out how to process them and get a lot more data about who's in the video and what they talked about. In the future, we could say here's a Jim Zorn video of him talking about something and do it within seconds. Today that would be a lot of manual scrubbing. As we move forward, there will be opportunities to talk about our history."

    More from the genAI field:

    Model choices. Olson said the plan from the beginning was to test multiple models and analyze them based on quality of output without human intervention. The Seahawks have already swapped a few models based on use cases as they moved from pilot to production. "It definitely took us some tinkering to understand what model makes sense and which doesn't. The tremendous thing about Bedrock is that we can use many different models," he said. "When we built this process, we knew these models are all changing. The model we're using now is really great, but for all we know there's some model in six or seven months that we'll want to move to."
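One way to make model swaps as cheap as Olson describes is to route each use case through a small registry, so changing models is a config edit rather than a code change. The model IDs below are illustrative placeholders, not the models the Seahawks actually chose.

```python
# Hypothetical use-case -> Amazon Bedrock model ID mapping; swap a model for a
# given use case by editing this table only.
MODEL_REGISTRY = {
    "summarize": "anthropic.claude-3-haiku-20240307-v1:0",
    "translate": "amazon.titan-text-express-v1",
}


def model_for(use_case: str) -> str:
    """Return the Bedrock model ID currently assigned to a use case."""
    try:
        return MODEL_REGISTRY[use_case]
    except KeyError:
        raise ValueError(f"no model registered for use case: {use_case}")
```

The returned ID would then be passed as `modelId` to Bedrock's `converse` or `invoke_model` call.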

    Humans in the loop. Olson said the primary goal of the genAI project was to focus his team on more content and new ideas. The process for the video team is to bookend video production with human oversight. At the end of the process, humans make the quality checks and decide to publish, but the models have gotten to the point where "we're hitting publish more than having to make edits," said Olson.

    Olson's team also had to give models unique spellings of names, as well as new players, since roster changes happen daily and weekly. "We really work on ingesting our roster before every video to make sure the latest players are there," he said. Today, the models get an update every time there's a roster change.
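The roster-ingestion step Olson mentions amounts to prepending the current spellings to every prompt. A minimal sketch, with a hypothetical helper name:

```python
def roster_hint(roster: list) -> str:
    """Build a prompt preamble listing exact player-name spellings for the model."""
    names = ", ".join(sorted(set(roster)))  # dedupe and stabilize ordering
    return f"Use these exact player-name spellings when transcribing or summarizing: {names}."
```

Refreshing the model after a roster change then means regenerating this preamble from the latest roster feed before the next video is processed.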

    What's more intriguing to Olson is the human input at the front end of the process. He said that generative AI speeds up ideation and allows creators to try new things in seconds and iterate from there.

    The project timeline. Olson said the Seahawks started on the genAI project in the late spring, building components. By the time the season started, the Seahawks were ready to go. "It took us about two months of adding pieces and iterating to make sure we could move forward," said Olson. "It was more about adjusting the model to fit our needs and making sure we use it in the correct way."


    OpenAI, AI Sustainability, CX Optimization | ConstellationTV Episode 90

    We made it to ConstellationTV episode 90! 📺 Hear co-hosts Holger Mueller and Liz Miller discuss enterprise technology news, including AI Forum highlights, AI integration in workforce management, and the impact of OpenAI's recent funding and future innovation.

    Then R "Ray" Wang has an engaging conversation with Sol Salinas, EVP and Sustainability Lead for Capgemini Americas, on the role of sustainability, climate tech, and AI in addressing environmental challenges.

    The episode concludes with a CR CX Convo with leaders from The Scotts Miracle-Gro Company on the importance of empathy in customer experience.

    00:00 - Meet the hosts
    01:24 - Enterprise tech news updates (AIF 2024, workforce management AI, OpenAI)
    17:11 - Sustainability and AI with Capgemini's Sol Salinas
    29:40 - CR CX Convo: Tests CX to Optimize for Extraordinary Growth
    42:13 - Bloopers!

    ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT/12:00 p.m. ET every other Wednesday!

    Watch on ConstellationTV: https://www.youtube.com/embed/1qA-GqW07bw?si=MPt_C27lsaw_Tigp

    How Seattle Seahawks are approaching genAI, model choices with AWS | Constellation Insights Interview

    The Seattle Seahawks are revolutionizing their video content distribution with the help of #AWS and #generativeAI. In a recent interview, Kenton Olson, VP of Digital and Emerging Media for the Seahawks, shared insights into their innovative project with Larry Dignan, Editor in Chief of Constellation Insights.

    A few highlights include...

    📌 The team produces over 1,000 videos per year, creating a time-consuming distribution challenge. By integrating AWS services like Media Convert, Transcribe, and Bedrock, they've reduced publishing time from 30-45 minutes to just 10 minutes.
    📌 Generative AI allows the Seahawks to automatically generate video summaries, translations, and metadata - saving their content team valuable time to focus on creating new, engaging content.
    📌 The AI-powered system has improved the searchability and fan discoverability of Seahawks videos by providing richer metadata. This enhances the team's ability to reach and connect with their passionate fanbase.
    📌 The flexibility of AWS Bedrock enables the Seahawks to easily test and swap AI models, ensuring they can adapt to the latest advancements in generative AI technology.

    This example showcases how sports organizations can leverage the power of #cloud and #AI to streamline operations, enhance fan engagement, and unlock new content opportunities. The Seahawks are leading the way in transforming the digital fan experience. #AWS #AI #SportsTech #SeattleSeahawks #DigitalTransformation

    Watch on YouTube: https://www.youtube.com/embed/z4b_JPK-K_k?si=LoRcqsXxNPKmBKXm

    Zoom launches AI Companion 2.0, eyes enterprise, industry expansion

    Zoom outlined its roadmap and upcoming products that include AI Companion 2.0 across its platform, a focus on work management for frontline workers and a deeper dive into contact center, education and healthcare markets.

    In a briefing, Smita Hashim, Chief Product Officer at Zoom, said the company's AI Companion effort initially revolved around the theme of "meet happy" by adding tools for engagement, productivity and collaboration. With AI Companion 2.0, Zoom is expanding its view toward "work happy."

    "We have expanded our vision to work happy. And what work happy really means for us from our perspective is helping our customers and users have the time to really be what is uniquely human to them," said Hashim, who noted that AI Companion is now activated on more than 4 million accounts and 57% of the Fortune 500. "AI Companion is going to be your personal assistant that can work across Zoom Workplace."

    This vision will be outlined at the company's Zoomtopia conference, which will feature CEO Eric Yuan's keynote as well as customer sessions. The upshot with Zoom's product launches and roadmap, which lands in the fourth quarter of 2024 and extends into 2025, is that the company is gunning for Microsoft Teams. The challenge for Zoom will be overcoming AI assistant fatigue--every enterprise application has one--and breaking through the Microsoft Bundle.

    Hashim addressed AI agent, companion and copilot fatigue directly.

    "Sprawl is a challenge for our users and frankly some of the implementations are confusing. Why do I need an agent just for SharePoint? I don't want hundreds and thousands of agents running around in my user interface. With AI Companion 2.0 we see it as a super-agent that will have skills to connect to Workday, Jira and various workflows to help you get more done."

    Constellation Research analyst Holger Mueller said:

    "Zoom has made massive progress in the last year to position itself beyond just synchronous communication, making it more and more an alternative to the omnipresent Teams. The question will be – can Zoom overcome the power of the Microsoft enterprise agreement with innovation? It certainly has some key capabilities here, especially on the AI side – where it makes AI insights and automation easier accessible than other products, as well as offering a single AI assistant with Zoom AI Companion. On the new offering side, the Frontline Worker offering has a lot of potential, Zoom got the capability mix right, now it has to get the price point right. Overall Zoom has the ability to change the future of work – again."

    Zoom's announcements at Zoomtopia include:

    Zoom AI Companion 2.0, available October 2024. Zoom AI Companion 2.0 works across the Zoom Workplace platform, adds context, synthesizes data from meetings, chats and docs as well as Microsoft Outlook, Office, Gmail and other applications, and can take action.

    • Takeaway: AI Companion 2.0 is Zoom's horizontal AI and agent play. By connecting to other enterprise data repositories Zoom is betting that it can be the lead AI collaboration tool. The fact that Zoom doesn't charge extra for AI Companion is a big selling point for adoption.

    Custom AI Companion add-on for Zoom Workplace, which enables enterprises to customize AI Companion and connect it to business apps and data sets. This customization ability is expected in the first half of 2025 at $12 per user per month. AI Companion is typically included in Zoom Workplace without an add-on fee.

    • Takeaway: Hashim said the ability to customize was one of the big customer requests. The move to personalize AI Companion is a natural progression. "Our customers and their organizations are unique. They work across different applications, different data sources. Their employees are unique. They have different goals, they have different ambitions and customers have been asking us how AI companion could expand to address all of those needs," she said.

    Zoom AI Studio, which will customize AI Companion with connectors to knowledge bases, fine tuning and AI skills.

    • Takeaway: Custom AI Companion add-on and AI Studio will enable AI Companion to move across workflow applications and take actions. This ability will also enable Zoom to provide personal employee coaching and avatars.

    Zoom Tasks, which will use AI Companion to detect tasks across Workplace and sync updates.

    AI Companion for Workvivo for employee engagement as well as a listening suite to gauge employee sentiment. Will Meta's Workplace shutdown be a boon for Zoom's Workvivo?

    Contact center enhancements such as dynamic agent guides, suggested answers and supervisor tools.

    Industry enhancements with AI Companion for Educators including lesson plans, Zoom for Healthcare including Zoom Workplace for Clinicians, and Zoom Workplace for Frontline, which is aimed at workers in the field. Frontline workers were a big focus for Meta's Workplace, which is winding down and moving customers to Zoom Workvivo.

    • Takeaway: Zoom's education efforts are notable since they alleviate hybrid learning pain points, and connectors to Canvas and other common education applications are a win. In addition, Zoom Workplace for Clinicians is a nice way to leverage the platform overall. Zoom's focus on healthcare looks to build on its 36% telemedicine market share. Zoom said more than 140,000 healthcare institutions use Zoom. Zoom Workplace for Frontline is also an area with a lot of white space for the company.
