Results

CoreWeave's IPO: What you need to know

CoreWeave filed to go public and it's unclear whether the company's debut on the stock market will be a signal of an AI infrastructure top or just a milepost on a multi-year buildout.

One thing is clear: CoreWeave has crazy revenue growth, depends on two customers and is losing hefty sums.

Welcome to the AI infrastructure boom of 2025 (or is that bust?). The feds, OpenAI and others launch Stargate to invest in AI data centers in the US. TSMC is building in the US as geopolitics and AI infrastructure comingle. Capital spending from the likes of Microsoft, Meta, Alphabet and AWS continues to ramp. There's a fancy new large language model (LLM) daily. And Nvidia puts up record growth that Wall Street is taking for granted. Even bitcoin miners are in on the AI infrastructure boom.

This AI infrastructure boom will work well...until it doesn't. DeepSeek spurred fears that we may not need all of this AI infrastructure. Don't worry though. All the big spenders assure us that the capacity is worth it. Erring on the side of not spending a gazillion dollars on AI infrastructure is the real mistake. We're in the FOMO round for GPUs and AI data centers.

With that backdrop, here's what you need to know about CoreWeave, which will be an IPO worth watching simply for AI infrastructure sentiment.

What is CoreWeave? CoreWeave is an AI infrastructure specialist. The company is in the right place at the right time with AI infrastructure. CoreWeave has more than 250,000 GPUs online, 1.3 gigawatts of contracted power, 32 data centers and $15.1 billion in 2024 remaining performance obligations.

The offering, a rocky start and a lower price. CoreWeave will trade on the Nasdaq under the ticker "CRWV." The company initially planned to offer 47,178,660 Class A shares priced between $47 and $55 each.

And then things got rocky. Leading up to its IPO, CoreWeave became a referendum on AI infrastructure spending, and the company wound up cutting both the size and the price of its offering. Ultimately, CoreWeave offered 37.5 million shares, down from 49 million, priced at $40 a share, well below the initial expectation of up to $55.

The issue? The Financial Times reported that CoreWeave breached some terms of its $7.6 billion loan in 2024 and triggered defaults. Blackstone amended terms and waived the defaults. In addition, there are signs that big AI data center spenders may be pulling back on aggressive expansion plans. However, CoreWeave remains the biggest US tech IPO since 2021 with plans to raise $1.5 billion. 

The AI stack. CoreWeave's stack of services is designed for AI workloads. CoreWeave's Cloud Platform is designed for uptime and for reducing the friction of engineering, assembling, running and monitoring infrastructure for AI workloads. CoreWeave has an Nvidia H100 Tensor Core GPU cluster with Nvidia Blackwell coming online. The infrastructure is designed for training as well as inference. "This market is not all about the big cloud vendors in the AI era, but also about smaller vendors in a good position with alternate offerings," said Constellation Research analyst Holger Mueller. "Smaller vendors usually try to win CxOs over due to the simplicity of their offering. A good example is CoreWeave, specializing on GPUs. We will see how it does commercially when it goes public."

The company said:

"We were among the first to deliver NVIDIA H100, H200, and GH200 clusters into production at AI scale, and the first cloud provider to make NVIDIA GB200 NVL72-based instances generally available. We are able to deploy the newest chips in our infrastructure and provide the compute capacity to customers in as little as two weeks from receipt from our OEM partners such as Dell and Super Micro."

The stack looks like this:

CoreWeave's mission is to utilize compute more efficiently for model training and inference. "We believe the AI revolution requires a cloud that is performant, efficient, resilient, and purpose-built for AI," the company said.

CoreWeave will use acquisitions to build out its stack. The company announced the acquisition of Weights & Biases, a major player in the MLOps and LLMOps ecosystem. The deal will give CoreWeave the ability to manage machine learning and model operations. 

"Our combined capabilities will help you get real-time model performance monitoring and robust orchestration, providing you with a powerful AI application development workflow which can accelerate time to production and get your AI innovations to market even faster," said CoreWeave.   

Mueller said:

"CoreWeave management has understood that CxOs want to buy complete solutions, hence the forays into AI application development and related operations. The result is a turnkey cloud that allows customers to build and operate AI-powered next-generation applications in one offering."

Customers. In its SEC filing, CoreWeave noted that its customers include IBM, Meta, Microsoft, Mistral and Nvidia. The company also announced a multi-year deal with OpenAI. CoreWeave will provide AI infrastructure to OpenAI in a contract valued at $11.9 billion. OpenAI will become an investor in CoreWeave via the issuance of $350 million of CoreWeave stock.

Services offered. CoreWeave offers infrastructure cloud services, but it also has managed software and application software services, as well as its Mission Control and observability software.

The debt funding growth. CoreWeave has financed its expansion with debt--$12.9 billion through Dec. 31, 2024, to be exact. That debt is backed by its assets and multi-year committed contracts. RPO was $15.1 billion at the end of 2024, up 53% from a year ago.

Blackstone and Magnetar funded CoreWeave's most recent private debt.

CoreWeave's revenue growth is stunning. In 2022, the company had revenue of $16 million. In 2023, revenue was $229 million. And by 2024, revenue surged to $1.9 billion, up 737% from a year ago. Net losses also surged: in 2024, CoreWeave had a net loss of $863 million ($65 million on an adjusted basis).

Expansion. CoreWeave's plan is to capture more workloads from existing customers, extend into new industries, land enterprise customers and grow internationally. CoreWeave also plans to maximize the economic life of its infrastructure. Judging from the hyperscalers, simply extending the useful life of servers can dramatically boost earnings. CoreWeave segments customers into AI natives and enterprises.

The risks. CoreWeave's biggest risk is that 77% of its revenue comes from its top two customers. Microsoft was 62% of revenue in 2024. A deal with OpenAI means CoreWeave will likely be dependent on its three top customers. The company said:

"Any negative changes in demand from Microsoft, in Microsoft’s ability or willingness to perform under its contracts with us, in laws or regulations applicable to Microsoft or the regions in which it operates, or in our broader strategic relationship with Microsoft would adversely affect our business, operating results, financial condition, and future prospects.

We anticipate that we will continue to derive a significant portion of our revenue from a limited number of customers for the foreseeable future."

The good news is that CoreWeave's future revenue from OpenAI will bring Microsoft down to less than 50% of revenue. "Microsoft, our largest customer for the years ended December 31, 2023 and 2024, will represent less than 50% of our expected future committed contract revenues when combining our RPO balance of $15.1 billion as of December 31, 2024 and up to $11.55 billion of future revenue from our recently signed Master Services Agreement with OpenAI," the company said. 
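The concentration math above can be checked with a quick back-of-the-envelope calculation using only figures disclosed in the filing (a rough illustration; the exact Microsoft share of the backlog isn't disclosed):

```python
# Back-of-the-envelope check of CoreWeave's customer concentration,
# using only figures disclosed in the filing.
rpo_2024 = 15.1e9          # remaining performance obligations, end of 2024
openai_deal = 11.55e9      # future revenue from the OpenAI Master Services Agreement
combined_backlog = rpo_2024 + openai_deal

# For Microsoft to fall below 50% of combined committed revenue,
# its share of the backlog must sit under this ceiling:
microsoft_ceiling = combined_backlog * 0.50
print(f"Combined backlog: ${combined_backlog / 1e9:.2f}B")        # $26.65B
print(f"Microsoft ceiling: ${microsoft_ceiling / 1e9:.3f}B")      # $13.325B

# 2024 revenue concentration: top two customers were 77%, Microsoft alone 62%,
# implying the second-largest customer was roughly 15% of revenue.
second_customer_share = 0.77 - 0.62
print(f"Implied No. 2 customer share: {second_customer_share:.0%}")  # 15%
```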

Other risk factors include the reality that CoreWeave depends on Nvidia GPU supply, its debt load and access to power. CoreWeave also faces competition from hyperscale cloud players including AWS, Google Cloud, Microsoft Azure and Oracle as well as focused AI cloud providers such as Crusoe and Lambda.


Workato enters agentic AI orchestration fray with Workato One, buys DeepConverse

Workato launched Workato One, a platform designed for agentic AI workflows, along with AgentX Apps, pre-built agents that orchestrate business processes across multiple systems. The company also acquired DeepConverse, which specializes in automated customer support.

The news, announced at Workato's Work to the Power of AI event in New York, also included support for Model Context Protocol (MCP). Workato's support for MCP adds to a growing list supporting Anthropic's AI agent standard.

Workato One is a stack that revolves around orchestration of enterprise data, apps and processes as well as managing AI agents. Workato becomes the latest vendor to enter the agentic AI orchestration ring. There is no shortage of AI agent platforms. Oracle launched AI Agent Studio to create and manage AI agents. ServiceNow's latest release of its Now Platform has a bevy of tools to connect agents and orchestrate them. Boomi launched AI Studio. Kore.ai launched its AI agent platform, and eyes orchestration. Zoom evolved AI Companion with agentic AI features and plans to connect to other agents. Salesforce obviously has Agentforce.

Here's how Workato One breaks down:

  • Workato Orchestrate focuses on integrating and orchestrating data, applications, processes and user experiences, coupling them with enterprise context and multi-step skills.
  • Workato Agentic is an extension to Workato Orchestrate that builds and manages AI agents.

Holger Mueller, the Constellation Research analyst covering Workato, said the company's move highlights "a strategy, plan, and product that unites the world of AI and orchestration and unlocks the agentic enterprise."

Workato One, which will be integrated with Amazon Bedrock via a partnership with AWS, includes the following:

  • Agent Studio to build, deploy and manage multiple AI agents of all varieties.
  • Agent Hub to create workflows for AI agents.
  • Agent Acumen, which aggregates insights from multiple systems and data.
  • Agent Trust, which includes security and governance across agents and processes.
  • Agent Orchestrator to orchestrate AI agents from CRM, ERP, HR, IT and finance as well as custom agents.
  • AIRO, or AI-driven, Intent-based, Real-time Orchestrator, to understand business problems and intent.
  • MCP, which provides standardized access to pre-built enterprise skills through Anthropic Claude.

In addition, Workato launched AgentX Apps to integrate with various functions. AgentX Apps launch with availability for AgentX Sales, AgentX Support, AgentX IT, and AgentX CPQ.

Separately, Workato said it acquired DeepConverse, which was founded in 2016. DeepConverse will add expertise in search AI and AI support agents to go along with Workato's orchestration platform.

Terms of the deal weren't disclosed.


OpenAI's support puts MCP in pole position as agentic AI standard

OpenAI's support of Anthropic's Model Context Protocol (MCP) may be the start of easier interoperability among AI agents.

The large language model (LLM) giant announced its support for MCP for the OpenAI Agents Software Development Kit (SDK) with plans to enable it for the OpenAI API and ChatGPT Desktop.

Anthropic open sourced MCP in November 2024 to connect AI assistants to the systems where data lives--content repositories, business applications and enterprise environments. The idea behind MCP is to break down data silos and legacy systems for easier integration across connected systems.

For agentic AI, these data silos can be dealbreakers. To date, AI agents are typically pitched by vendors within a specific platform. These vendor visions typically put their own platforms at the center of the enterprise universe, but the reality is that agentic AI will need to traverse multiple systems and platforms. What was missing was a standard to enable AI agents to communicate and negotiate.

Perhaps OpenAI's support will make MCP the standard. What remains to be seen is whether the hyperscale cloud providers and SaaS giants get behind MCP. Given that Anthropic, OpenAI and Microsoft are supporting MCP, it's likely others will have to follow or create dueling standards for AI agent connections.

On X, OpenAI CEO Sam Altman announced the MCP support. MCP also was just updated with an authorization framework based on OAuth 2.1, streamable HTTP transport and support for JSON-RPC batching. Microsoft is also supporting MCP and launched a new Playwright-MCP server that enables AI agents to browse the web and interact with sites.
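MCP messages ride on JSON-RPC 2.0, so a tool invocation from an agent to an MCP server looks roughly like the sketch below. The `tools/call` method comes from the MCP specification; the tool name and arguments are hypothetical, for illustration only:

```python
import json

# Schematic MCP tool invocation expressed as a JSON-RPC 2.0 request.
# "tools/call" is the MCP method for invoking a server-exposed tool;
# the tool name and arguments here are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",           # hypothetical tool
        "arguments": {"query": "Q4 revenue"}  # hypothetical arguments
    },
}
print(json.dumps(request, indent=2))
```

Because every agent and server speaks this same envelope, an agent built on one platform can call tools hosted by another without bespoke integration code, which is the interoperability point the article makes.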

Box CEO Aaron Levie, who has been talking up AI system interoperability, said on X and LinkedIn that OpenAI's MCP support is critical for coordinating across platforms. "As AI Agents from multiple platforms coordinate work, AI interoperability is going to be critical," said Levie.

Constellation Research analyst Holger Mueller said:

"The question of 2025 will be - will the LLM be in charge of its enterprise tooling, or will enterprise software vendors build their own pre-director to go to LLM vs. more traditional deterministic algorithms. The latter will give the vendor (and thus the customer) more control and the former is easier from an R&D perspective, but a vendor choosing the LLM route will have to invest in seeding tools for multiple LLMs. MCP may become the standard to simplify this issue."


Databricks forges partnership with Anthropic, adds innovative system to enhance open source LLMs

Databricks inked a five-year partnership with Anthropic to offer Claude models directly through the Databricks Data Intelligence Platform. Databricks also highlighted a system to enhance large language model performance without requiring labeled data.

With the Anthropic deal, Databricks will be able to add Claude 3.7 Sonnet, Anthropic's latest LLM, natively to its platform. Databricks said Anthropic's models can be paired with its own Databricks Mosaic AI models.

The Anthropic models are available on Databricks on AWS, Azure and Google Cloud.

Databricks' deal highlights how data platforms are increasingly looking to add top-shelf models. For instance, Snowflake announced a partnership to add OpenAI's ChatGPT to its platform. The data platform space has seen a flurry of deals and partnerships. SAP and Databricks paired up on SAP Business Data Cloud. IBM acquired DataStax to add to its watsonx platform. Salesforce and Google Cloud also expanded a partnership that includes Data Cloud.

According to Databricks, the plan is to enable Anthropic models to "reason over their enterprise data." Databricks Mosaic AI has the tools to build domain-specific AI agents on unique data. The hope for Databricks is that enterprises will pair up Anthropic and Mosaic AI.

What remains to be seen is how many Databricks customers are already leveraging Anthropic models via AWS and Google Cloud.

Separately, Databricks outlined TAO (Test-time Adaptive Optimization), an approach that enhances LLM performance on a task without labeled data. Test-time compute augments an existing model for tuning. TAO only needs LLM usage data but can surpass traditional fine-tuning on labeled examples.
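The generate-score-tune loop that description implies can be sketched as a toy. Everything below is a hypothetical stand-in (the `generate` and `judge` functions substitute for a real LLM and reward model), not Databricks' implementation:

```python
import random

def generate(prompt: str) -> str:
    # Stand-in for LLM sampling: produces varied candidate answers.
    return f"{prompt} -> answer#{random.randint(0, 99)}"

def judge(prompt: str, response: str) -> float:
    # Stand-in for an automatic reward/judge model scoring a response.
    return random.random()

def tao_round(prompts, n_candidates=8):
    """One round of a TAO-style loop: spend test-time compute sampling
    candidates, keep the best-scored one as a synthetic training pair.
    No human-labeled answers are needed, only prompts (LLM usage data)."""
    pairs = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(n_candidates)]
        best = max(candidates, key=lambda r: judge(prompt, r))
        pairs.append((prompt, best))
    return pairs  # would feed a fine-tuning step in the real pipeline

pairs = tao_round(["What was Q4 revenue?"])
print(len(pairs))  # 1
```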

According to Databricks, TAO can enable open-source models to outperform proprietary models.

In a blog post, Databricks outlined how TAO improved performance of Llama 3.3 70B by 2.4%. Although TAO may not push Llama over proprietary models in all categories, Databricks does get the model close.

TAO is available in preview and Databricks said it will be embedded in several products in the future.


Quantinuum, partners create true verifiable randomness, eye quantum computing for cybersecurity

Quantinuum's quantum computers have created true verifiable randomness in a project that could be valuable to cybersecurity.

In a paper in Nature, Quantinuum, along with JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory and the University of Texas at Austin, generated true randomness critical to cryptography and cybersecurity. Quantinuum said the latest advance builds on research from Shih-Han Hung and Scott Aaronson of the University of Texas at Austin.

Quantinuum's breakthrough is just the latest in the industry to demonstrate commercial relevance. Earlier this month, D-Wave said its quantum computer outperformed a classical supercomputer in solving magnetic materials simulation problems. D-Wave followed up with a quantum blockchain architecture. IonQ and Ansys said they also outperformed classical computing when designing medical devices.

JPMorganChase noted in a blog post:

"Classical computers cannot create true randomness on demand. As a result, to offer true randomness in classical computing, we often resort to specialized hardware that harvests entropy from unpredictable physical sources, for instance, by looking at mouse movements, observing fluctuations in temperature, monitoring the movement of lava lamps or, in extreme cases, detecting cosmic radiation. These measures are unwieldy, difficult to scale and lack rigorous guarantees, limiting our ability to verify whether their outputs are truly random.

Compounding the challenge is the fact that there exists no way to test if a sequence of bits is truly random."

Conversely, quantum computing features intrinsic randomness and can run verification much faster than a classical computer.
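For context, the classical approach the JPMorganChase post describes is what an OS entropy source gives you. In Python that looks like this, and the key limitation is that nothing about the output proves it was truly random:

```python
import secrets

# Classical randomness: the OS harvests entropy from unpredictable
# hardware events (interrupt timing, etc.) and feeds a CSPRNG.
seed = secrets.token_bytes(32)  # 256-bit seed
print(seed.hex())

# The catch discussed above: no statistical test on these bytes can
# certify true randomness -- a deterministic PRNG with a hidden seed
# could produce output that passes every such test. Certified quantum
# randomness adds a verification protocol, not just better bits.
```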

Quantinuum said it will introduce a new product that can generate these "random seeds." Using Quantinuum's H2 System, the company has been able to deliver a proof of concept that bridges quantum computing and security.

The company said it will integrate quantum-certified randomness into its commercial portfolio to go along with its Generative Quantum AI and Helios system as well as the hardware roadmap going forward. See: Quantinuum launches generative AI quantum framework, sees quantum computing as synthetic data generator

For Quantinuum, the true randomness breakthrough could give it a key commercial product for enterprises. Helios is in its testing phase and will be available later in 2025. The system is likely to be initially used as part of a cybersecurity portfolio to create a "quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations." 

"The quantum industry is scrambling to show what practical valid use cases it can operate in 2025 and beyond," said Holger Mueller, analyst at Constellation Research. "It is not clear which use cases will emerge first, but this work by Quantinuum and partners shows that horizontal use cases, like the generation of randomness, may be the first practical use case of quantum computing."


Nvidia GTC, NEW Constellation Analyst, AI-Powered CSR | ConstellationTV Episode 101

ConstellationTV Episode 101 is here! 📺 Co-hosts Liz Miller and Holger Mueller cover #enterprise news updates, including NVIDIA GTC and Adobe Summit announcements around #AI innovation and agentic solutions.

Next, catch a light-hearted Salon50 interview with Constellation's NEW analyst Michael Ni. You'll get an entertaining introduction to Mike, learn his coverage areas, and hear fun facts about everyone involved!

Finally, R "Ray" Wang interviews IBM's VP & Chief Impact Officer Justina Nixon-Saintil about IBM's mission to use AI for good. This means up-skilling employees, creating personalized learning pathways, and increasing productivity through innovative AI solutions.

Don't forget to watch until the end for bloopers! 👇 

00:00 - Meet the Hosts
00:20 - Enterprise Technology News
14:39 - Meet Constellation VP & Analyst Mike Ni
27:27 - IBM Uses AI for Good
32:00 - Bloopers!

On ConstellationTV <iframe width="560" height="315" src="https://www.youtube.com/embed/Zs6HtVXqChE?si=df0RtitsVZamdfsB" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

Google Gemini vs. OpenAI, DeepSeek vs. Qwen: What we're learning from model wars

Enterprise leaders should be forgiven for nursing a case of whiplash over the latest large language model (LLM) developments. Almost daily, there's some advance that generates headlines. Instead of chasing every little benchmark, here's a crib sheet of what we're learning from the never-ending game of LLM leapfrog.

Google is hitting its stride

With the launch of Gemini 2.5 Pro, an experimental thinking model that performs well and has been enhanced with post-training, it's clear Google DeepMind is firing on all cylinders.

Let's face it, Google was caught off guard by OpenAI and has been playing catch up. It's safe to say that Google has caught up. Gemini 2.5 innovation will be built into future models from Google and rest assured those capabilities will show up at Google Cloud Next in April.

But Google's approach to models goes further than just one-offs. I've been testing Google's deep research tools in Gemini Advanced against OpenAI ChatGPT's Deep Research. Both have slightly different twists, but both compress research time nicely. Google has the mojo to delight. For instance, an instant audio podcast summarizing a report is a nice touch.

When compared with Anthropic's Claude, Gemini more than holds its own. Simply put, Google is leveraging its strengths when it comes to models.

OpenAI is betting being more human is the way

It isn't a Google Gemini launch without an OpenAI launch. OpenAI announced native image generation in ChatGPT hours after Gemini 2.5 Pro was announced. OpenAI also launched new ChatGPT voice mode updates to give the model "a more engaging and natural tone."

Those additions fell into the not-necessarily-worth-my-time bucket, but they obscure a broader theme from OpenAI. The company is leaning into emotional intelligence and models that act more human.

The pivot is notable because OpenAI's bet is that if you create models that are more relatable--and perform well--you'll have a stickier product. Strategically, that bet makes a lot of sense. I'm an OpenAI ChatGPT subscriber and have seen nothing from rivals that would entice me to dump my $20 monthly subscription.

What remains to be seen is whether subscribing to LLMs is more like streaming where you have more than one service or it's truly zero sum. I haven't had to answer that question since Google's Gemini is baked into my Pixel 9 purchase for a few more months.

The DeepSeek vs. Alibaba Qwen rivalry is flooding the market with inexpensive and very capable models

In the US, the LLM game is a scrum between OpenAI, Anthropic, Meta's Llama, Google Gemini and a bevy of others. China is roaring back with DeepSeek vs. Alibaba's Qwen.

You'd be forgiven for forgetting DeepSeek's launch this week--it was so like 2 days ago. DeepSeek released DeepSeek-V3-0324 under the MIT open source license. Alibaba released Qwen2.5-VL-32B about the same time.

DeepSeek has been mentioned on countless earnings conference calls for its potential impact on AI infrastructure spending. Nvidia CEO Jensen Huang is asked about DeepSeek nearly every few minutes. Huang's argument is that DeepSeek is additive to the industry and the need for AI infrastructure.

"This is an extraordinary moment in time. There's a profound and deep misunderstanding of DeepSeek. It's actually a profound and deeply exciting moment, and incredible the world has moved towards reasoning. But even then, that is just the tip of the iceberg," said Huang.

While DeepSeek is China's AI front-runner now, I wouldn't count out Qwen by any stretch especially with a distribution channel like Alibaba Cloud. What China's champions really did was move the conversation toward reasoning.

This is a game of mindshare

Chasing headlines almost daily about the latest model advances is a fool's errand. There will be the latest and greatest advance almost daily.

What this drumbeat of advances really highlights is that foundational models are a game of mindshare. Let's play a game: What foundational models are we noticing less? The short answer is Anthropic's Claude, and the second, less obvious answer is Grok.

Anthropic is clearly more enterprise-focused than OpenAI and is doing interesting things that align more with corporate use cases. Via partnerships with Amazon Web Services and Google Cloud, Anthropic is well positioned. In the daily news flow, though, announcements such as Claude now being able to search the web are easy to overlook.

Nevertheless, Anthropic has mindshare where it matters--corporations. Anthropic is valued at $61.5 billion and is playing a slightly different game with its economic index and focused approach.

There can only be a few consumer AI platforms and OpenAI is in that pole position to threaten Google.

The most overlooked model award must go to Grok. There isn't a day that goes by where Constellation Research analyst Holger Mueller isn't showing me something impressive that Grok delivered. Grok 3 features DeepSearch, Think and Edit Image. In my tests, Grok 3 is as good as, if not better than, the better-known models.

Grok 3 would probably have more mindshare if it weren't simply overlooked due to Elon Musk's other endeavors.

Enterprise whiplash an issue

CxOs can really waste a lot of time focusing on these new model advances. As these models advance at such a rapid clip, the one thing that's clear is that you'll need an abstraction layer to swap models out. Pick your platforms carefully since every vendor (Salesforce, ServiceNow, Microsoft and SAP, to name just a few) wants to be your go-to platform for enterprise AI.
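What such an abstraction layer means in practice can be sketched minimally as below. The interface, class names and stubbed responses are hypothetical; real deployments use model gateways or framework adapters rather than hand-rolled code:

```python
from typing import Protocol

class ChatModel(Protocol):
    """Hypothetical provider-agnostic interface: app code targets this,
    so swapping models is a config change, not a rewrite."""
    def complete(self, prompt: str) -> str: ...

class OpenAIModel:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"   # stub; a real impl calls the OpenAI API

class GeminiModel:
    def complete(self, prompt: str) -> str:
        return f"[gemini] {prompt}"   # stub; a real impl calls the Gemini API

# Swapping providers is now a one-line change to this registry.
MODELS: dict[str, ChatModel] = {"openai": OpenAIModel(), "gemini": GeminiModel()}

def ask(provider: str, prompt: str) -> str:
    return MODELS[provider].complete(prompt)

print(ask("openai", "Summarize Q4"))  # [openai] Summarize Q4
```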

Here's where Amazon Web Services' approach with SageMaker and Bedrock makes so much sense. Google Cloud also has a lot of model choice, as does Microsoft Azure, which is best known for its OpenAI partnership but has diversified nicely. Microsoft launched Researcher and Analyst, two reasoning agents built on OpenAI's Deep Research model and Microsoft 365 Copilot.

You can also expect data platforms such as Snowflake and Databricks to be big players in model choice. Add it up and there's only one mantra for enterprise CxOs: Stay focused and don't get locked into one set of models.

All of the enterprise energy should be on orchestrating these models and ultimately the AI agents they'll power.

Commoditization is happening rapidly

The funny part of this model war is that it's unclear how the models will be monetized. Open source models--led by Meta's Llama family--are on par with proprietary LLMs. Those models are being tailored by companies like Nvidia, which tweak them for enterprise use cases.

DeepSeek is blowing up the financial model, and AWS' Nova family is likely to be good enough, just like its custom silicon is. Hugging Face's trending models tell the tale. DeepSeek, Qwen, Nvidia's new models and Google's Gemma are dominant. But that's just today. One thing is certain: The likely price for models is free.


Enterprises will spend on agentic AI, but perhaps not yet

Agentic AI is dominating headlines and the enterprise technology sector, but the impact on the IT budget remains to be seen. Agentic AI is promising, but it's going to take more prototypes and proof of ROI to get the budgets rolling.

That's a takeaway from Crawford Del Prete, President of IDC, who appeared on DisrupTV. Del Prete was talking about the volatile IT budget dynamics in 2025 and said:

"The challenge I see when I talk to CIOs is when they think about AI and beyond is that they don't know where to start. This is particularly a problem with agents because when they look at agents they say, 'I'm skeptical. I know it's going to require significant human oversight. It's going to hallucinate. So, I can't just let this thing run, but I'm afraid not to run with it. I know I have to invest in it, but my budgets are under pressure.' I think we're going to see maybe a little bit of a lean back in the agent world over the course of the next couple quarters."

These comments about spending on AI agents are notable given that vendors are launching new agents and orchestration engines almost daily. Another day, another AI studio. Meanwhile, vendors are switching to hybrid models that include AI agents and genAI capabilities but feature consumption pricing that may create budget volatility.

Here’s the state of agentic AI in a nutshell:

Del Prete was upbeat about AI agents. "You've finally got a technology that can unlock the value of your unstructured data in a very automated way. But I'm seeing customers not knowing how to engage, but afraid not to engage. I think we're going to see that push-pull play out over middle of this year, until people can start getting decent prototypes moving forward," said Del Prete.

Constellation Research CEO R "Ray" Wang noted that there are a lot of vendors adding agents to every marketing sentence available. Wang said agentic AI needs to get to the point where agents become an API with a decisioning engine. Agentic AI is very basic right now, said Wang.

Del Prete noted that oversight remains an issue with AI agents across multiple use cases. "We have a lot of work to do before we can set these things loose and feel like they're as in production," he said. "They're not as easy to put into production as people think based on the media that you see associated with it."

Nevertheless, agentic AI will pull budget. "I haven't found a company yet that's willing to lean back and say, yeah, I'm not going to invest. I don't think this thing is real," said Del Prete.

Gurvinder Sahni, Chief Marketing Officer at Persistent Systems, said the company is betting on agentic AI--a space where integrators that can work across multiple systems and domains can create AI agents that actually work.

Wang noted that systems integrators and services companies have done well building agents relative to software vendors because they're used to cutting across departments, functions and business processes.

Sahni said:

"We recently trained about 1,000 plus people on Salesforce Agentforce. The other big part for us is the focus on our own IP and also working with the hyperscalers and the ecosystem to build products and services with them as well."


AI-Powered CSR: IBM's Strategies for Community Engagement & Impact

At #SXSW, Justina Nixon-Saintil, VP & Chief Impact Officer of IBM, shared with R "Ray" Wang how IBM is using #AI to drive real community impact...
 
✨ 1.6M volunteer hours logged
✨ 16M people skilled (on track to 30M by 2030)
✨ AI-powered solutions for climate stress communities
✨ 30% productivity boost with AI tools like "Ask CSR"

#Technology isn't just about innovation—it's about creating meaningful change. We're proud to see companies like IBM using AI to solve global challenges. 🌍 

On <iframe width="560" height="315" src="https://www.youtube.com/embed/Opudx0nh_SY?si=KzV1iGpuhHMnBJ4N" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

How AI Agents Transform HR and Employee Experiences | IBM Sports Club Interview

Constellation founder and analyst R "Ray" Wang talks with IBM's Chief HR Officer Nickle LaMoreaux at the #SXSW IBM Sports Club about #AI agents transforming workplace experiences. Some key points include...

✅ Agents aren't replacing humans, they're augmenting our capabilities
✅ Agents offer personalized support for upskilling, career development, and routine tasks
✅ Agents enable more meaningful human interactions by handling administrative work

IBM's vision for an #HR AI agent 1) helps you explore career paths, 2) supports skill development, 3) streamlines HR processes, and 4) provides hyper-personalized guidance

Watch the full interview to learn more!

On <iframe width="560" height="315" src="https://www.youtube.com/embed/gCnYcCMpt0k?si=LAQq5HGjrWORkRjt" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>