Workato enters agentic AI orchestration fray with Workato One, buys DeepConverse

Workato launched Workato One, a platform designed for agentic AI workflows, along with AgentX Apps, pre-built agents that orchestrate business processes across multiple systems. The company also acquired DeepConverse, which specializes in automated customer support.

The news, announced at Workato's Work to the Power of AI event in New York, also included support for Model Context Protocol (MCP). Workato joins a growing list of vendors supporting Anthropic's AI agent standard.

Workato One is a stack that revolves around orchestration of enterprise data, apps and processes as well as managing AI agents. Workato becomes the latest vendor to enter the agentic AI orchestration ring. There is no shortage of AI agent platforms. Oracle launched AI Agent Studio to create and manage AI agents. ServiceNow's latest release of its Now Platform has a bevy of tools to connect agents and orchestrate them. Boomi launched AI Studio. Kore.ai launched its AI agent platform, and eyes orchestration. Zoom evolved AI Companion with agentic AI features and plans to connect to other agents. Salesforce obviously has Agentforce.

Here's how Workato One breaks down:

  • Workato Orchestrate focuses on integrating and orchestrating data, applications, processes and user experiences, coupling them with enterprise context and multi-step skills.
  • Workato Agentic is an extension to Workato Orchestrate that builds and manages AI agents.

Holger Mueller, the Constellation Research analyst covering Workato, said the company's move highlights "a strategy, plan, and product that unites the world of AI and orchestration and unlocks the agentic enterprise."

Workato One, which will be integrated with Amazon Bedrock via a partnership with AWS, includes the following:

  • Agent Studio to build, deploy and manage multiple AI agents of all varieties.
  • Agent Hub to create workflows for AI agents.
  • Agent Acumen, which aggregates insights from multiple systems and data.
  • Agent Trust, which includes security and governance across agents and processes.
  • Agent Orchestrator to orchestrate AI agents from CRM, ERP, HR, IT and finance as well as custom agents.
  • AIRO, or AI-driven, Intent-based, Real-time Orchestrator, to understand business problems and intent.
  • MCP, which provides standardized access to pre-built enterprise skills through Anthropic Claude.

In addition, Workato launched AgentX Apps to integrate with various functions. AgentX Apps launch with availability for AgentX Sales, AgentX Support, AgentX IT, and AgentX CPQ.

Separately, Workato said it acquired DeepConverse, which was founded in 2016. DeepConverse will add expertise in search AI and AI support agents to go along with Workato's orchestration platform.

Terms of the deal weren't disclosed.

OpenAI's support puts MCP in pole position as agentic AI standard

OpenAI's support of Anthropic's Model Context Protocol (MCP) may be the start of easier interoperability among AI agents.

The large language model (LLM) giant announced its support for MCP for the OpenAI Agents Software Development Kit (SDK) with plans to enable it for the OpenAI API and ChatGPT Desktop.

Anthropic open sourced MCP in November 2024 to connect AI assistants to the systems where data lives--content repositories, business applications and enterprise environments. The idea behind MCP is to break down data silos and legacy systems for easier integration across connected systems.

For agentic AI, these data silos can be dealbreakers. To date, AI agents are typically pitched by vendors within a specific platform. These vendor visions typically put their own platforms at the center of the enterprise universe, but the reality is that agentic AI will need to traverse multiple systems and platforms. What was missing was a standard to enable AI agents to communicate and negotiate.
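
The protocol's mechanics are worth a glance. MCP is built on JSON-RPC 2.0, so an agent's request to an MCP server is just a structured message. Here's a minimal sketch in Python (the `tools/call` method follows the open MCP spec, but the tool name and arguments below are hypothetical):

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # standard MCP method for invoking a tool
        "params": {"name": tool_name, "arguments": arguments},
    }

# A hypothetical agent asking an MCP server to look up a CRM account.
req = mcp_tool_call(1, "crm_lookup", {"account": "Acme Corp"})
print(json.dumps(req, indent=2))
```

Because every vendor's server speaks this same message shape, an agent doesn't need bespoke glue code per system, which is exactly the silo-breaking the protocol promises.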

Perhaps OpenAI's support will make MCP the standard. What remains to be seen is whether the hyperscale cloud providers and SaaS giants get behind MCP. Given that Anthropic, OpenAI and Microsoft are supporting MCP, it's likely others will have to follow or create dueling standards for AI agent connections.

On X, OpenAI CEO Sam Altman announced the MCP support. MCP also was just updated with an authorization framework based on OAuth 2.1, streamable HTTP transport and support for JSON-RPC batching. Microsoft is also supporting MCP and launched a new Playwright-MCP server that enables AI agents to browse the web and interact with sites.

Box CEO Aaron Levie, who has been talking up AI system interoperability, said on X and LinkedIn that OpenAI's MCP support is critical for coordinating across platforms. "As AI Agents from multiple platforms coordinate work, AI interoperability is going to be critical," said Levie.

Constellation Research analyst Holger Mueller said:

"The question of 2025 will be: will the LLM be in charge of its enterprise tooling, or will enterprise software vendors build their own pre-director to go to the LLM vs. more traditional deterministic algorithms? The latter will give the vendor (and thus the customer) more control; the former is easier from an R&D perspective, but a vendor choosing the LLM route will have to invest in seeding tools for multiple LLMs. MCP may become the standard to simplify this issue."

Databricks forges partnership with Anthropic, adds innovative system to enhance open source LLMs

Databricks inked a five-year partnership with Anthropic to offer Claude models directly through the Databricks Data Intelligence Platform. Databricks also highlighted a system to enhance large language model performance without requiring labeled data.

With the Anthropic deal, Databricks will be able to add Claude 3.7 Sonnet, Anthropic's latest LLM, natively to its platform. Databricks said Anthropic's models can be paired with its own Databricks Mosaic AI models.

The Anthropic models are available on Databricks on AWS, Azure and Google Cloud.

Databricks' deal highlights how data platforms are increasingly looking to add top-shelf models. For instance, Snowflake announced a partnership to add OpenAI's ChatGPT to its platform. The data platform space has seen a flurry of deals and partnerships. SAP and Databricks paired up on SAP Business Data Cloud. IBM acquired DataStax to add to its watsonx platform. Salesforce and Google Cloud also expanded a partnership that includes Data Cloud.

According to Databricks, the plan is to enable Anthropic models to "reason over their enterprise data." Databricks Mosaic AI has the tools to build domain-specific AI agents on unique data. The hope for Databricks is that enterprises will pair up Anthropic and Mosaic AI.

What remains to be seen is how many Databricks customers are already leveraging Anthropic models via AWS and Google Cloud.

Separately, Databricks outlined TAO (Test-time Adaptive Optimization), an approach that enhances LLM performance on a task without labeled data. Test-time compute augments an existing model for tuning. TAO needs only LLM usage data, yet it can surpass traditional fine-tuning on labeled examples.

According to Databricks, TAO can enable open-source models to outperform proprietary models.

In a blog post, Databricks outlined how TAO improved the performance of Llama 3.3 70B by 2.4%. Although TAO may not push Llama past proprietary models in every category, Databricks does get the model close.
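
Databricks hasn't published TAO's full internals, but the announced shape of the approach (sample candidate responses with extra test-time compute, score them with a reward model instead of human labels, then tune on the winners) can be sketched roughly. Every function below is a toy stand-in, not Databricks code:

```python
import random

def generate_candidates(model: str, prompt: str, n: int = 4) -> list[str]:
    # In TAO, extra test-time compute produces several candidate responses.
    return [f"{model}|{prompt}|draft{i}" for i in range(n)]

def reward_score(response: str) -> float:
    # Stand-in scorer; the real system uses a learned reward model,
    # so no human-labeled answers are required.
    return random.random()

def tao_round(model: str, prompts: list[str]) -> list[tuple[str, str]]:
    """One TAO-style round: sample, score, and keep the best response
    per prompt as synthetic fine-tuning data."""
    tuning_data = []
    for prompt in prompts:
        best = max(generate_candidates(model, prompt), key=reward_score)
        tuning_data.append((prompt, best))
    return tuning_data  # would be fed to a fine-tuning job

data = tao_round("llama-3.3-70b", ["summarize Q3 revenue", "draft a SQL query"])
print(len(data))
```

The key property is that the loop consumes only prompts (usage data); quality pressure comes from the scorer, not from labeled examples.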

TAO is available in preview and Databricks said it will be embedded in several products in the future.

Quantinuum, partners create true verifiable randomness, eye quantum computing for cybersecurity

Quantinuum quantum computers have created true verifiable randomness in a project that could be valuable to cybersecurity.

In a paper in Nature, Quantinuum, along with JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory and the University of Texas, generated true randomness critical to cryptography and cybersecurity. Quantinuum said the latest advance was built on research from Shih-Han Hung and Scott Aaronson of the University of Texas at Austin.

Quantinuum's breakthrough is just the latest in the industry to demonstrate commercial relevance. Earlier this month, D-Wave said its quantum computer outperformed a classical supercomputer in solving magnetic materials simulation problems. D-Wave followed up with a quantum blockchain architecture. IonQ and Ansys said they also outperformed classical computing when designing medical devices.

JPMorganChase noted in a blog post:

"Classical computers cannot create true randomness on demand. As a result, to offer true randomness in classical computing, we often resort to specialized hardware that harvests entropy from unpredictable physical sources, for instance, by looking at mouse movements, observing fluctuations in temperature, monitoring the movement of lava lamps or, in extreme cases, detecting cosmic radiation. These measures are unwieldy, difficult to scale and lack rigorous guarantees, limiting our ability to verify whether their outputs are truly random.

Compounding the challenge is the fact that there exists no way to test if a sequence of bits is truly random."

Conversely, quantum computing features randomness and can run verification much faster than a classical computer.
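
JPMorganChase's point about testability is easy to demonstrate. A classical statistical check can only confirm that output looks balanced, never that it was unpredictable. The sketch below uses an illustrative monobit-style frequency test (not any lab's actual procedure):

```python
def monobit_pass(bits: list[int], tolerance: float = 0.02) -> bool:
    """Classical frequency test: are 0s and 1s roughly balanced?
    Passing says nothing about whether the bits were unpredictable."""
    return abs(sum(bits) / len(bits) - 0.5) <= tolerance

# A fully deterministic, predictable sequence still passes:
predictable = [i % 2 for i in range(10_000)]
print(monobit_pass(predictable))  # True, despite zero unpredictability
```

Certified quantum randomness closes that gap: the verification protocol establishes that the output could not have been pre-computed, rather than just checking its statistics.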

Quantinuum said it will introduce a new product that can generate these "random seeds." Using Quantinuum's H2 System, the company has been able to deliver a proof of concept that bridges quantum computing and security.

The company said it will integrate quantum-certified randomness into its commercial portfolio to go along with its Generative Quantum AI and Helios system as well as the hardware roadmap going forward. See: Quantinuum launches generative AI quantum framework, sees quantum computing as synthetic data generator

For Quantinuum, the true randomness breakthrough could give it a key commercial product for enterprises. Helios is in its testing phase and will be available later in 2025. The system is likely to be initially used as part of a cybersecurity portfolio to create a "quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations." 

"The quantum industry is scrambling to show what practical, valid use cases it can operate in 2025 and beyond," said Holger Mueller, analyst at Constellation Research. "It is not clear which use cases will emerge first, but this work by Quantinuum and partners shows that horizontal use cases, like the generation of randomness, may be the first practical use case of quantum computing."

Nvidia GTC, NEW Constellation Analyst, AI-Powered CSR | ConstellationTV Episode 101

ConstellationTV Episode 101 is here! 📺 Co-hosts Liz Miller and Holger Mueller cover #enterprise news updates, including NVIDIA GTC and Adobe Summit announcements around #AI innovation and agentic solutions.

Next, catch a light-hearted Salon50 interview with Constellation's NEW analyst Michael Ni. You'll get an entertaining introduction to Mike, learn his coverage areas, and hear fun facts about everyone involved!

Finally, R "Ray" Wang interviews IBM's VP & Chief Impact Officer Justina Nixon-Saintil about IBM's mission to use AI for good. This means up-skilling employees, creating personalized learning pathways, and increasing productivity through innovative AI solutions.

Don't forget to watch until the end for bloopers! 👇 

00:00 - Meet the Hosts
0:20 - Enterprise Technology News
14:39 - Meet Constellation VP & Analyst Mike Ni
27:27 - IBM Uses AI for Good
32:00 - Bloopers!

On ConstellationTV: https://www.youtube.com/embed/Zs6HtVXqChE?si=df0RtitsVZamdfsB

Google Gemini vs. OpenAI, DeepSeek vs. Qwen: What we're learning from model wars

Enterprise leaders should be forgiven for nursing a case of whiplash over the latest large language model (LLM) developments. Almost daily, there's some advance that generates headlines. Instead of chasing every little benchmark, here's a crib sheet of what we're learning from the never-ending game of LLM leapfrog.

Google is hitting its stride

With the launch of Gemini 2.5 Pro, an experimental thinking model that performs well and has been enhanced with post-training, it's clear Google DeepMind is firing on all cylinders.

Let's face it: Google was caught off guard by OpenAI and has been playing catch-up. It's safe to say that Google has now caught up. Gemini 2.5's innovations will be built into future Google models, and rest assured those capabilities will show up at Google Cloud Next in April.

But Google's approach to models goes further than one-off launches. I've been testing Google's deep research tools in Gemini Advanced against OpenAI's Deep Research in ChatGPT. Both have slightly different twists, but both compress research time nicely. Google has the mojo to delight. For instance, an instant audio podcast summarizing a report is a nice touch.

When compared with Anthropic's Claude, Gemini more than holds its own. Simply put, Google is leveraging its strengths when it comes to models.

OpenAI is betting being more human is the way

It isn't a Google Gemini launch without an OpenAI launch. OpenAI announced native image generation in ChatGPT hours after Gemini 2.5 Pro was announced. OpenAI had also launched new ChatGPT voice mode updates to give the model "a more engaging and natural tone."

Those additions fell into the bucket of things that weren't necessarily worth my time, but they obscure a broader theme from OpenAI. The company is leaning into emotional intelligence and models that act more human.

The pivot is notable because OpenAI's bet is that if you create models that are more relatable--and perform well--you'll have a stickier product. Strategically, that bet makes a lot of sense. I'm an OpenAI ChatGPT subscriber and have seen nothing from rivals that would entice me to dump my $20 monthly subscription.

What remains to be seen is whether subscribing to LLMs is more like streaming where you have more than one service or it's truly zero sum. I haven't had to answer that question since Google's Gemini is baked into my Pixel 9 purchase for a few more months.

DeepSeek vs. Alibaba's Qwen is flooding the market with inexpensive and very capable models

In the US, the LLM game is a scrum between OpenAI, Anthropic, Meta's Llama, Google Gemini and a bevy of others. China is roaring back with DeepSeek vs. Alibaba's Qwen.

You'd be forgiven for forgetting DeepSeek's launch this week--it was, like, 2 days ago. DeepSeek released DeepSeek-V3-0324 under the MIT open source license. Alibaba released Qwen2.5-VL-32B at about the same time.

DeepSeek has been mentioned on countless earnings conference calls for its potential impact on AI infrastructure spending. Nvidia CEO Jensen Huang is asked about DeepSeek nearly every few minutes. Huang's argument is that DeepSeek is additive to the industry and the need for AI infrastructure.

"This is an extraordinary moment in time. There's a profound and deep misunderstanding of DeepSeek. It's actually a profound and deeply exciting moment, and incredible the world has moved towards reasoning. But that is, even then, just the tip of the iceberg," said Huang.

While DeepSeek is China's AI front-runner now, I wouldn't count out Qwen by any stretch especially with a distribution channel like Alibaba Cloud. What China's champions really did was move the conversation toward reasoning.

This is a game of mindshare

Chasing the near-daily headlines about the latest model advances is a fool's errand. There will be a new latest-and-greatest tomorrow.

What this drumbeat of advances really highlights is that foundational models are a game of mindshare. Let's play a game: Which foundational models are we noticing less? The short answer is Anthropic's Claude, and the second, less obvious answer is Grok.

Anthropic is clearly more enterprise than OpenAI and is doing interesting things that align more to corporate use cases. Via partnerships with Amazon Web Services and Google Cloud, Anthropic is well positioned. In the daily news flow, Anthropic is likely to be overlooked with announcements that Claude can now search the web.

Nevertheless, Anthropic has mindshare where it matters--corporations. Anthropic is valued at $61.5 billion and is playing a slightly different game with its economic index and focused approach.

There can only be a few consumer AI platforms and OpenAI is in that pole position to threaten Google.

The most overlooked model award must go to Grok. There isn't a day that goes by when Constellation Research analyst Holger Mueller isn't showing me something impressive that Grok delivered. Grok 3 features DeepSearch, Think and Edit Image. In my tests, Grok 3 is as good as, if not better than, the better-known models.

Grok 3 would probably have more mindshare if it weren't simply overlooked due to Elon Musk's other endeavors.

Enterprise whiplash an issue

CxOs can waste a lot of time focusing on these new model advances. As models advance at such a rapid clip, one thing is clear: you'll need an abstraction layer to swap models out. Pick your platforms carefully, since every vendor (Salesforce, ServiceNow, Microsoft and SAP, to name just a few) wants to be your go-to platform for enterprise AI.
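
In practice, an abstraction layer means coding workflows against a provider-neutral interface rather than one vendor's SDK. A minimal sketch (class and method names here are hypothetical; real adapters would wrap the vendor SDK calls):

```python
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIModel:
    def complete(self, prompt: str) -> str:
        # Vendor SDK call would go here.
        return f"[openai] {prompt}"

class ClaudeModel:
    def complete(self, prompt: str) -> str:
        # Vendor SDK call would go here.
        return f"[claude] {prompt}"

def run_workflow(model: ChatModel, prompt: str) -> str:
    # Business logic depends only on the interface, so swapping
    # providers is a one-line change, not a rewrite.
    return model.complete(prompt)

print(run_workflow(OpenAIModel(), "summarize the contract"))
print(run_workflow(ClaudeModel(), "summarize the contract"))
```

The design choice is the point: when the next leapfrog model ships, only the adapter changes, not the workflows built on top.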

Here's where Amazon Web Services' approach with SageMaker and Bedrock makes so much sense. Google Cloud also has a lot of model choice, as does Microsoft Azure, which is best known for its OpenAI partnership but has diversified nicely. Microsoft launched Researcher and Analyst, two reasoning agents built on OpenAI's Deep Research model and Microsoft 365 Copilot.

You can also expect data platforms such as Snowflake and Databricks to be big players in model choice. Add it up and there's only one mantra for enterprise CxOs: Stay focused and don't get locked into one set of models.

All of the enterprise energy should be on orchestrating these models and ultimately the AI agents they'll power.

Commoditization is happening rapidly

The funny part of this model war is that it's unclear how these models will be monetized. Open source models--led by Meta's Llama family--are on par with proprietary LLMs. Those models are being tailored by companies like Nvidia, which will tweak them for enterprise use cases.

DeepSeek is blowing up the financial model and AWS' Nova family is likely to be good enough just like its custom silicon chips are. HuggingFace's trending models tell the tale. DeepSeek, Qwen, Nvidia's new models and Google's Gemma are dominant. But that's just today. One thing is certain: The likely price for models is free.

Enterprises will spend on agentic AI, but perhaps not yet

Agentic AI is dominating headlines and the enterprise technology sector, but the impact on the IT budget remains to be seen. Agentic AI is promising, but it's going to take more prototypes and proof of ROI to get the budgets rolling.

That's a takeaway from Crawford Del Prete, President of IDC, who appeared on DisrupTV. Del Prete was talking about the volatile IT budget dynamics in 2025 and said:

"The challenge I see when I talk to CIOs is when they think about AI and beyond is that they don't know where to start. This is particularly a problem with agents because when they look at agents they say, 'I'm skeptical. I know it's going to require significant human oversight. It's going to hallucinate. So, I can't just let this thing run, but I'm afraid not to run with it. I know I have to invest in it, but my budgets are under pressure.' I think we're going to see maybe a little bit of a lean back in the agent world over the course of the next couple quarters."

These comments about spending on AI agents are notable given that vendors are launching new agents and orchestration engines almost daily. Another day, another AI studio. Meanwhile, vendors are switching to hybrid models that include AI agents and genAI capabilities, but feature consumption pricing that may create budget volatility.

Here’s the state of agentic AI in a nutshell:

Del Prete was upbeat about AI agents. "You've finally got a technology that can unlock the value of your unstructured data in a very automated way. But I'm seeing customers not knowing how to engage, but afraid not to engage. I think we're going to see that push-pull play out over middle of this year, until people can start getting decent prototypes moving forward," said Del Prete.

Constellation Research CEO R "Ray" Wang noted that a lot of vendors are adding agents to every available marketing sentence. Wang said agentic AI needs to get to the point where agents become an API with a decisioning engine. Agentic AI is very basic right now, said Wang.

Del Prete noted that oversight remains an issue with AI agents across multiple use cases. "We have a lot of work to do before we can set these things loose in production," he said. "They're not as easy to put into production as people think based on the media that you see associated with it."

Nevertheless, agentic AI will pull budget. "I haven't found a company yet that's willing to lean back and say, yeah, I'm not going to invest. I don't think this thing is real," said Del Prete.

Gurvinder Sahni, Chief Marketing Officer at Persistent Systems, said the company is betting on agentic AI--a space where integrators that can work across multiple systems and domains can create AI agents that actually work.

Wang noted that systems integrators and services companies have done well building agents relative to software vendors because they're used to cutting across departments, functions and business processes.

Sahni said:

"We recently trained about 1,000 plus people on Salesforce Agentforce. The other big part for us is the focus on our own IP and also working with the hyperscalers and the ecosystem to build products and services with them as well."

AI-Powered CSR: IBM's Strategies for Community Engagement & Impact

At #SXSW, Justina Nixon-Saintil, VP & Chief Impact Officer of IBM, shared with R "Ray" Wang how IBM is using #AI to drive real community impact...
 
  • 1.6M volunteer hours logged
  • 16M people skilled (on track to 30M by 2030)
  • AI-powered solutions for climate-stressed communities
  • 30% productivity boost with AI tools like "Ask CSR"

#Technology isn't just about innovation—it's about creating meaningful change. We're proud to see companies like IBM using AI to solve global challenges. 🌍 

On YouTube: https://www.youtube.com/embed/Opudx0nh_SY?si=KzV1iGpuhHMnBJ4N

How AI Agents Transform HR and Employee Experiences | IBM Sports Club Interview

Constellation founder and analyst R "Ray" Wang talks with IBM's Chief HR Officer Nickle LaMoreaux at the #SXSW IBM Sports Club about #AI agents transforming workplace experiences. Some key points include...

  • Agents aren't replacing humans, they're augmenting our capabilities
  • Agents offer personalized support for upskilling, career development, and routine tasks
  • Agents enable more meaningful human interactions by handling administrative work

IBM's vision for an #HR AI agent 1) helps you explore career paths, 2) supports skill development, 3) streamlines HR processes, and 4) provides hyper-personalized guidance

Watch the full interview to learn more!

On YouTube: https://www.youtube.com/embed/gCnYcCMpt0k?si=LAQq5HGjrWORkRjt

AI infrastructure spending is upending enterprise financial modeling

When a technology is evolving as fast as artificial intelligence, CFOs and the finance department struggle to crowbar AI infrastructure investments into traditional depreciation models.

Typically, enterprises depreciate capital spending on IT infrastructure over multiple years. For instance, servers have a useful life of 7 years, according to the IRS. Hyperscale cloud companies tend to tweak the useful life of a server. For instance, Meta in its annual report said certain servers and network assets had a useful life of 5.5 years for fiscal 2025. Amazon now reckons the useful life of some of its servers is 5 years, down from 6 years a year ago. Alphabet depreciates over 6 years.
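
The budget impact of those schedule tweaks is simple arithmetic: under straight-line depreciation, shortening the assumed useful life raises the annual expense for the same asset. A quick sketch with a hypothetical cluster cost:

```python
def annual_straight_line(cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation: the same expense is booked each year."""
    return cost / useful_life_years

cost = 10_000_000  # hypothetical AI cluster, not any company's actual figure
for years in (7, 6, 5.5, 5):
    expense = annual_straight_line(cost, years)
    print(f"{years}-year life: ${expense:,.0f}/year")
```

Moving a $10 million cluster from a 6-year to a 5-year life adds roughly $333,000 of expense per year, which is why the hyperscalers' seemingly small adjustments move real money at their scale.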

Here's the catch: 5 years in AI infrastructure is a lifetime. Nvidia now has an annual cadence for its GPUs and provides visibility into the roadmap through 2027. CEO Jensen Huang's bet is that Nvidia customers will continue to invest in the latest and greatest accelerated computing infrastructure because there's value and competitive advantage.

Do companies go with shorter depreciation cycles, lease gear or just provision from the cloud even though operating expenses are already stretched? As Nvidia extends more into the enterprise and industry applications, this financial planning conundrum is going to go mainstream. It's no wonder that Nvidia has teams focused on financial solutions.

With that in mind, Nvidia held a virtual panel at GTC 2025 focused on these accounting issues. What does traditional planning look like when AI is advancing too quickly for typical technology refresh cycles?

Bill Mayo, SVP for research IT at Bristol Myers Squibb, has been investing in AI and machine learning for more than a decade, but AI advances today are moving faster than ever. "The challenge is we've had this probably first and second derivative improvement in the pace of change that has completely broken financial models up and down the stack," said Mayo.

Richard Zoontjens, lead of the supercomputing center at Eindhoven University of Technology, said "we really need compute to compete in this era and that means financial systems have to support this fast moving world."

Zoontjens noted that the current AI cycle doesn't fit in with typical depreciation schedules and financial models. "If you buy new tools every five years, well you're not competing anymore. After two years, you lose talent and you lose innovation," he said.

The solution is that financial modeling will have to move faster. Mayo noted that Bristol Myers Squibb (BMS) has seen rapid improvement in compute, but there's still not enough to reach its biology vision. "At its core, biology is computation," said Mayo.

Mayo said that if you're using today's tech stack five years from now you're behind. Mayo said BMS hasn't figured out the financial model behind AI investments yet, but did say that its first swing was to move to a four-year depreciation cycle.

Zoontjens said his group opted for a flexible two-year renewal cycle. Flexibility is key. Zoontjens said sometimes his supercomputing center can stretch the system, but sometimes has to upgrade faster.

"The system and the lifecycle management contract that we have now provides that flexibility," said Zoontjens. "It gives us control and better agility to move and stay state of the art."

Mayo said AI investment and modeling has to revolve around the patient population and have the best insights to improve lives. That alignment helps with the costs, but BMS AI infrastructure is an operating expense due to cloud delivery. The problem is that cloud provider demand for AI infrastructure is high. Mayo said:

"We can't afford to buy a new (Nvidia) Super Pod every year and just use it for a year. I can't afford it at an OPEX rate, and frankly, neither can hyperscalers afford to buy enough to make it available fast enough that we can all consume that way."

The current situation may indicate that the havoc hitting financial models is transitory, said Mayo, noting that on-prem and co-located infrastructure as well as cloud AI services are all in the mix. "The fact of the matter is, I'm buying through a time window that maybe three years from now, the financing problem might have solved itself, but TBD on that," said Mayo.

Here are a few themes on financing AI infrastructure from the IT buyers and Nvidia's Timothy Shockley, global sales at Nvidia Financial Solutions, and Ingemar Lanevi, head of Nvidia Capital:

  • Plan for data center investments that incorporate power and space savings. Nvidia systems have needed less space with each new system.
  • Plan for more agile upgrade cycles to maintain capacity to compete in industries.
  • There's no right answer that covers all the financial bases so there will be a mix of cloud and on-prem decisions to be made.
  • Long-term depreciation will be an issue for the foreseeable future.
  • Long-term cloud contracts and leases can be a challenge.
  • Cross-functional teams will have to make financing decisions based on what needs to be achieved now and then where things will move later.
  • Leasing models may make sense for AI infrastructure at the moment for cash flow purposes and building in upgrades.
  • It's possible that a secondary market for AI infrastructure emerges for what Mayo called "gently used Super Pods." The accelerated computing market is young relative to CPUs so a secondary market may take time.
  • Enterprises may look to monetize remaining residual value of AI infrastructure when it's not helpful to the buyer anymore.
  • Segment investments for what needs to be cutting edge and adjust the financial model accordingly. Non-cutting edge tech investments can be depreciated over a longer period.
  • Today's AI infrastructure spend is governed by financial systems, but may have to flip in the future to account for product cycles.

Mayo added that the disconnect between financial planning and the AI opportunity is just a moment in time.

"It's going to get solved. There's a right answer for the use case or the situation you're grappling with right now. Maybe it's a funding model, maybe it's a cash constraint. As long as we're open to try whatever, we're going to solve this problem, and then we're going to use this solution to solve all the other problems."
