Results

BT150 CxO zeitgeist: AI agents promising, but in transition period

AI agents remain the hot topic among BT150 CxOs and there’s little to no doubt multi-agent systems will be useful. However, there are doubts about vendor offerings, platform lock-in, data strategy, architecture and change management.

At our monthly meeting of BT150 CxOs, multiple takes on agentic AI surfaced, both about the present and the future. The CxO call, which is operated under the Chatham House Rule, highlighted a bevy of takeaways. Here's the breakdown:

The AI bakeoff is underway with a dose of agents. One company is handing its developers the latest AI tools as well as all the enterprise's processes and tasks. The goal: Teams come up with their best stuff and the best projects are ranked.

But that bakeoff isn't necessarily done via a vendor--because agentic AI offerings are seen as too immature at the moment.

2025 BT150 zeitgeist:

Is RPA still good enough? At a basic level, AI agents come down to APIs, decision engines and process automation. The takeaway from CxOs is that in many cases RPA may still be good enough; use the tool that fixes the issue in the most cost-effective manner. Simply put, everyone wants to use AI because it's sexy, but it may not be the right tool.
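The RPA-versus-agent trade-off can be made concrete with a toy sketch. Everything here is illustrative (the invoice fields, thresholds and vendor list are invented, not from any vendor product): a fixed rule set often covers the process fine, and swapping in an agent's decision engine adds flexibility at the cost of money, latency and nondeterminism.

```python
# Hypothetical sketch: a deterministic, rule-based step (RPA-style).
# All names and thresholds are illustrative, not from any specific product.

APPROVED_VENDORS = {"acme", "globex"}

def route_invoice_rpa(invoice: dict) -> str:
    """Fixed rules: cheap, auditable, and often 'good enough'."""
    if invoice["amount"] > 10_000:
        return "manager_approval"
    if invoice["vendor"] not in APPROVED_VENDORS:
        return "procurement_review"
    return "auto_pay"

# An agent would instead ask an LLM or decision engine to pick the next
# step -- more flexible, but costlier, slower and nondeterministic.
print(route_invoice_rpa({"amount": 500, "vendor": "acme"}))  # auto_pay
```

If the rules above never change and the inputs are structured, the RPA version is the cost-effective tool the CxOs described.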

Agentic AI's transitional period. Today, agentic AI offerings are based on platforms and specific functions--sales and marketing, HR and finance. Teams are getting up to speed, but enterprise adoption is cautious. The caution is warranted because enterprises are trying to figure out how much performance is available for the cost, use cases and flexibility. Simply put, we’re in the AI agent vendor announcement phase with evolving customer use cases from early adopters.

Agentic AI should be horizontal by nature and break down silos. CxOs said for agentic AI to pay off it must be an overarching layer above data silos. Think bigger about AI agents and get beyond silos. Related: OpenAI's support puts MCP in pole position as agentic AI standard

Role of the CIO changing. As agentic AI starts piecing together revenue channels and data owned by different stakeholders, ownership becomes an issue. Will the role of the CIO change from someone managing infrastructure and code to a business leader where everyone is a co-owner of the business? The CIO could be responsible for the semantic layer of the business and that data graph across processes.

The future of UI may be no UI. Packaged applications are basically data stores with UX on top. AI agents promise to take away the UX and replace it with voice or natural language. The big question is whether customers are ready for it. It's unclear whether the real benefit of new agents is building applications and ditching SaaS.

A no UI future will require a lot of change management. Enterprises have acquired so much packaged software that there will be pressure to bring back familiar user interfaces.

CxOs are thinking about building their own agentic platforms. Enterprises with agentic AI can think through composable architecture, self-generated UIs and low-code ways to build their own platforms. The challenge with AI agents is that line-of-business leaders will simply buy platforms for their spaces, lose the plot and recreate SaaS sprawl. As noted previously, integrators may wind up being the most valuable agentic AI players.

Vendors that are all-in on agentic AI may be surprised to hear that customers are thinking AI can be used to develop cross-function internal AI agent platforms. Workato launched Workato One. Oracle launched AI Agent Studio to create and manage AI agents. ServiceNow's latest release of its Now Platform has a bevy of tools to connect agents and orchestrate them. Boomi launched AI Studio. Kore.ai launched its AI agent platform, and eyes orchestration. Zoom evolved AI Companion with agentic AI features and plans to connect to other agents. Salesforce obviously has Agentforce.

Data quality will be everything and fortunately AI can be a big help. For AI agents to really work, data lakehouse infrastructure will be critical to break down silos. The direction of data flow will also change with AI agents so the communication path will matter. Right now, data is often siloed without much housekeeping. The big question is whether enterprises will build their own data lakehouses or leave data in application silos.

CxOs agreed that owning the data stack will unify everything and allow you to control your destiny. Realistically, the best you'll do is have two or three lakehouses due to platforms you’ll need to keep. That tally sounds like a lot, but it's infinitely better than the way some enterprises operate today.

Cognizant eyes multiyear plan to 'agentify the enterprise'

Cognizant is betting that it can embed agentic AI across enterprises, expand its total addressable market and drive what it calls "hyper-productivity" for its customers.

The strategy will be driven by the ability to design enterprise AI agents that work across systems, create industry-specific large language models and build its own platforms such as Cognizant Neuro AI, which is built on Nvidia's stack.

At Nvidia's GTC 2025 conference, Cognizant appeared in multiple sessions and highlighted customers such as Trane as well as industry AI efforts in healthcare and manufacturing. The upshot is that Cognizant is looking to enable multi-agent systems across multiple industries and create "AI agent factories."

Speaking at Cognizant's Investor Day, CEO Ravi Kumar said the company has always been interested in finding marketplace gaps, fixing those problems and productizing it with services. "We have now done that for AI, which is evolving at a rapid pace. I call it last mile infrastructure as well," said Kumar. "Every software ecosystem is building agents. We built an orchestration layer, which we believe is a gap today, and can get agents from different ecosystems to talk to each other and generate the kind of productivity clients are looking for."

While IT services companies could be disrupted by agentic AI, there's a strong case to be made that these integrators are best positioned to do well building AI agent systems. Why? These systems integrators work horizontally across business functions, have domain expertise and industry-specific knowledge.

Speaking on DisrupTV, Constellation Research CEO Ray "R" Wang set the scene.

"One of the things we discovered, especially with agents, is that system integrators and services companies have done an amazing job building agents. Part of it is the fact that when you have to cut across departments and functional features and work across business processes. You have to be really good at integration. The agent is really an API tied to a decision engine. Being able to see what a business needs to do across the board really makes it work. This level of strategic AI integration is going to be important."
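Wang's framing -- an agent as an API tied to a decision engine -- can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: the keyword-based decide() is a stand-in for what would be an LLM or rules engine in practice, and the tool names are invented.

```python
# Toy sketch of "an agent is an API tied to a decision engine."
# The decide() policy and tool names are illustrative assumptions.
from typing import Callable

class Agent:
    def __init__(self):
        self.tools: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def decide(self, request: str) -> str:
        # Stand-in decision engine: keyword routing instead of a model.
        if "refund" in request:
            return "billing_api"
        return "crm_api"

    def run(self, request: str) -> str:
        tool = self.decide(request)   # decision engine picks the API...
        return self.tools[tool](request)  # ...and the API does the work

agent = Agent()
agent.register("billing_api", lambda r: f"billing handled: {r}")
agent.register("crm_api", lambda r: f"crm logged: {r}")
print(agent.run("customer refund request"))  # billing handled: customer refund request
```

The integration skill Wang describes lives in the register() calls: an integrator who already knows every department's APIs has most of the agent built.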

The plan

Cognizant's plan for going forward revolves around artificial intelligence and embedded engineering.

For AI, Kumar said Cognizant is focused on using AI to enable hyper productivity, industrializing AI and "agentifying the enterprise." Investments in AI include labs, development of its Neuro suite platforms, multi-agent orchestration and frameworks.

Today, Cognizant is enabling hyper productivity with industrializing AI and agentification taking place between 2025 and beyond 2030. "We are unlocking thousands of use cases at the rapid pace, and the models are getting cheaper and cheaper, which means the value will move from the infrastructure to the front, and enabling that hyper productivity for our clients, industrializing AI and agentifying the enterprise," said Kumar, who added that more than 100 pilot programs with customers are focused on agentic AI.

Prasad Sankaran, EVP of software and platform engineering at Cognizant, said enterprises have moved to lower tech debt, but AI is bringing a rapid pace of new technologies. Cognizant's platforms can "satisfy that last mile challenge for our clients," said Sankaran. "Customers can take our platform and directly connect to that last mile connectivity for AI," he said.

Kumar said Cognizant is also very interested in rewiring the tech stack from the data to cloud to user experience. "We want to rewire the experience layer with Cognizant Moment, which is about making the UI generative," said Kumar. "We believe just in time design is the user interface."

For embedded engineering, Cognizant is focused on digital and physical (phygital) product engineering, smart manufacturing and autonomous systems. Investments for embedded engineering include smart mobility labs, industry 4.0, embedded systems, IT and operational technology in manufacturing, and edge computing.

Vibha Rustagi, SVP and global head of Cognizant IoT and engineering, said her group sees a big market in making industries such as medical, manufacturing and retail digital. Cognizant has a digital twin partnership with AWS and Nvidia to "drive optimization in the AI, optimization in the factories, and make it more autonomous."

Rewiring the stack for agentic AI

For Cognizant, setting customers up for agentic AI will require rewiring of enterprise stacks to leverage data and cloud infrastructure.

Nearly 40% of Cognizant's revenue comes from data transformation and cloud work. "Every client is on this journey somewhere," said Naveen Sharma, SVP, global practice head of data, AI and analytics. "Unless you have a secure and scalable digital foundation, you're not really going to build up anything over it."

Sharma added that data and model governance is also required on that cloud stack. "These models are not going to manage themselves," he said.

The platforms to manage those models that will ultimately feed into AI agents fall to Babak Hodjat, Cognizant's CTO of AI.

Speaking at Cognizant's investor day, Hodjat said as companies move from models to agents there's an engineering process to consider. "That moment of switching from a model to an agent is when we've moved to an engineering discipline," he said. "We have to decide what are the responsibilities of this agent, what are the tools that we're going to give it? Where is it sitting? What kind of microservices is it representing? What kind of data is it representing? It's engineering. It's customization."
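Hodjat's checklist -- responsibilities, tools, placement, data -- amounts to writing an explicit spec before any agent code exists. One hedged way to picture it is a plain data structure; the field and agent names below are invented for illustration and are not a Cognizant Neuro AI schema.

```python
# Illustrative spec capturing Hodjat's engineering questions for an agent.
# Field names and values are assumptions, not any vendor's schema.
from dataclasses import dataclass

@dataclass
class AgentSpec:
    name: str
    responsibilities: list[str]   # what is this agent accountable for?
    tools: list[str]              # which APIs/tools do we give it?
    microservice: str             # where is it sitting / what does it represent?
    data_scope: list[str]         # what data is it representing?

hr_agent = AgentSpec(
    name="leave-of-absence",
    responsibilities=["initiate parental leave", "update benefits"],
    tools=["hr_api.request_leave", "benefits_api.update_dependents"],
    microservice="hr-services",
    data_scope=["employee_profile", "benefits_enrollment"],
)
print(hr_agent.name)
```

Answering each field is the "engineering, customization" step Hodjat describes; the model only arrives once the spec exists.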

Once there are multiple agents autonomously handling tasks, interoperability is everything.

He said:

"We are going to build these agent systems as networks with clients. Build some of these agents custom for their specific use cases. They will be provisioning some of these agents and customizing them from third parties, say Agentforce or Agentspace. Everybody has their agents now, and they are plugging into this multi-agent system that gets progressively more powerful."

Hodjat said at Cognizant every unit is building AI agents and "these agents are begging to be connected to each other."

The promise of AI agents communicating is that they can break down silos of data and tasks and create efficiencies. Hodjat said Cognizant is using Neuro AI internally to orchestrate AI agents.

Through a demo, Hodjat showed how agents representing different apps could communicate. AI agents from different divisions create subnetworks based on functions. For instance, an employee can ask about a life change event like having a baby, and agents identify the down chain agents that are required. Ultimately, processes for taking time off, life change events and other HR processes are initiated.
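The demo's flow -- an event fanning out to the down-chain agents that must act -- can be approximated with a toy orchestrator. The event names, agent names and handler behavior below are invented for illustration; the real demo's routing is model-driven rather than a lookup table.

```python
# Toy version of event-to-agent fan-out; names are illustrative assumptions.
DOWN_CHAIN = {
    "new_baby": ["leave_agent", "benefits_agent", "payroll_agent"],
    "relocation": ["payroll_agent", "it_asset_agent"],
}

HANDLERS = {
    "leave_agent": lambda emp: f"parental leave opened for {emp}",
    "benefits_agent": lambda emp: f"dependent added for {emp}",
    "payroll_agent": lambda emp: f"payroll updated for {emp}",
    "it_asset_agent": lambda emp: f"equipment shipped for {emp}",
}

def orchestrate(event: str, employee: str) -> list[str]:
    """Identify the down-chain agents for an event and run each one."""
    return [HANDLERS[agent](employee) for agent in DOWN_CHAIN.get(event, [])]

for result in orchestrate("new_baby", "emp-42"):
    print(result)
```

In the article's example, the employee reports a life-change event once and the orchestration layer, not the employee, works out which HR processes to initiate.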

Ultimately, Hodjat said agents will build agents that keep a human in the loop but can execute on optimized processes. That process could include a scoping agent that is grounded in enterprise data and information and can assign tasks to other agents.

Hodjat said:

"This agentification is something that is happening. It's inevitable. It's organic. It has requirements that need to be fulfilled. It's an ongoing process. It's incremental, unlike past moves to the cloud, which was a big lift and shift. You can do agents incrementally and plug in new agents as you go. It requires an interoperability and there's a lot of engineering and custom work. We've recognized early on that multi-agency is the future of the enterprise."

Humanoid robots near inflection point courtesy of AI

AI models are quickly expanding into the physical world and multiple modalities, and a humanoid robot inflection point may soon follow.

That's the gist coming out of Nvidia's GTC 2025 conference and recent developments. The combination of foundational AI models that apply to robotics means enterprises need to start thinking through the key concepts. Nvidia CEO Jensen Huang told investors during GTC that "the business opportunity is well upstream of the robot."

Huang said: "Before you have a robot, you have to create the AI for the robot. Before you have a chat bot, you have to create the AI for the chat bot. That chat bot is just the last end of it. And so, in order for us to enable the world's robotics industry, upstream is a bunch of AI infrastructure we have to go create to teach the robot how to be a robot. Now, teaching a robot how to be a robot is much harder than in fact, even chat bots, for obvious reasons, it has to manipulate physical things, and it has to understand the world physically. We have to invent new technologies for that, the amount of data you have to train with is gigantic. It's not words, it's video, it's not world, it's not just words and numbers, it's video and physical interactions, cause and effects, physics and so. So that's the new adventure we've been on for several years, and now it's starting to grow quite fast."

Yes, folks. Nvidia is looking for its next big thing and robotics may be it. Nvidia has launched physical AI world models via Cosmos, which are able to be customized. 1X, Agility Robotics, Figure AI, Foretellix, Skild AI and Uber are adopting Cosmos, a family that now includes Cosmos Transfer, which can ingest video inputs such as segmentation maps, depth maps and lidar scans to create photoreal video outputs. The dream is that models will know the ground truth needed to train robots.

Nvidia also followed up with Nvidia Isaac GR00T N1, a foundation model for generalized humanoid reasoning and skills. In addition, the Nvidia Isaac GR00T Blueprint and the Newton open-source physics engine, which is being developed with Google DeepMind and Disney Research. Nvidia also plans to release Jetson Thor, a computing platform designed to power humanoid robots.

For good measure, Hyundai said it will work with its Boston Dynamics unit to "expand the U.S. ecosystem for robotics components and establish a mass-production system" and partner with Nvidia on AI for robotics. Hyundai's total investment for expanding robotics, AI and autonomous driving in the US is $6 billion. Google DeepMind launched Gemini Robotics, a Gemini 2.0 model designed for robotics. Robotics developments have popped up repeatedly at technology conferences. At AWS re:Invent 2024, Amazon CEO Andy Jassy talked about the 750,000 robots in fulfillment centers that are leveraging generative AI.

The continuum for the future revolves around AI, agentic AI and an ecosystem extending into robotics starting with things like autonomous vehicles and ultimately humanoid robots.

Is it too early to start thinking about humanoid robots? Probably not. If you're already thinking through agentic AI and the implications for your company, humanoid robots are the next step on the digital labor continuum.

Deloitte sees humanoid robotics ultimately scaling in 7 to 10 years with experiments starting today. A panel at GTC 2025 highlighted the following timeline.

[GTC 2025 panel slide: humanoid robot adoption timeline]

That was followed by a range of considerations for planning today and executing in the years ahead.

Although Huang's keynote kicker included a little robot that could react to humans and follow instructions via the embedded AI and models, there are real implications to ponder. If you think generative AI was disruptive, robotics will have just as wide an impact. Here's a look at what you need to know.

Humanoid robotics are a collection of technologies. Tim Gaus, Principal and Smart Manufacturing Business Leader at Deloitte, noted that humanoid robots will share features of humans, but be powered by AI that makes them more functional.

"It's not just about the robot itself. It actually takes an entire ecosystem to make this come to life," said Gaus, who said there's a robot operating system governing the movement and then the AI that will enable it to be trained and work with other robots. "It's not going to be just one robot or one humanoid that's out there. It's actually the interaction model between classic robots, non-humanoid, humanoid, multi humanoids coming together and the that integrates the entire enterprise itself."

These technologies are all coming together and enabling experimentation ahead of real value and actual use cases for humanoid robots, which will be software defined.

The stack will look like this:

  • Robot mechanical form (hardware).
  • Robot operating system (open source and proprietary).
  • Robotics training (Nvidia, OpenAI, Google DeepMind and model providers).
  • Fleet management.
  • Enterprise integration via enterprise technology vendors.

Humanoid robots will collaborate with humans more than replace them. We're already hearing the agentic AI spin on digital labor and how it will make humans more productive. But remember, agentic AI is going to mean that you need to hire fewer people. The thinking in the humanoid robot crowd is that these devices will fill roles humans don't want to do anyway.

Huang said the world is already short of "10s of millions of workers" and that "we need lots of robots."

Gaus noted that humanoid robots are "hitting on one of the most important challenges that we have in this space, which is we just don't have enough people who want to do these types of jobs." Gaus was speaking to manufacturing, logistics and a host of industries today. The future of work will include a lot of robots.

Orchestration will be everything. Tomer Gal, Managing Director, NVIDIA Alliance, AI and Accelerated Computing at Deloitte, said fleet management and orchestration will be a big challenge. Cybersecurity, communication interfaces, orchestration and upgrading thousands of robots will all be issues. Meanwhile, enterprises will need to transform systems just like they will for generative and agentic AI. In fact, data and AI work today can enable humanoid robots in the future.

"I think we should start now because we're not going to catch up when humanoid robots are all around us. We need to catch up already now, meaning there is the aspect of the simulation, the reinforcement learning, all of these technologies in place when we have the humanoids," said Gal.

Edge computing integration with core enterprise systems will be essential. Franz Gilbert, Global Growth Leader for Human Capital Ecosystems and Alliances at Deloitte, said enterprises need to think beyond just training a humanoid robot to pick up a can. Humanoids will be able to pick up that can and tie into the inventory system. "Every client's infrastructure, tech stack and environment will be different. How do you train the robot and what does the integration look like?" asked Gilbert.

The future of work. As with AI, enterprise value will require a lot of culture and change management. With humanoid robots, Gilbert noted that "over 87% of the roles will be redesigned in order to take advantage of what a humanoid robot can do."

Like AI and automation, the big decision revolves around where you insert the human into the process. Gilbert noted that complex decision-making will rest with humans. Emotional intelligence will also require humans. "There's also a social interaction piece. Humanoid robots can't read facial expressions at this point," said Gilbert, who said that tasks that need to be done quickly may be suited for humanoids, but humans will need to handle EQ-heavy items. "We're going to have to start dividing those tasks within roles and redesigning them."

For instance, think of healthcare and humanoid robots making patient checks instead of nurses. Think of the guardrails required as humanoid robots are connected to electronic health records. How much do these robots need to look like humans? Do they scale?

These humanoid robotic roles will vary by industry. Humanoid robots will apply to multiple industries, but the markets that will develop faster will be manufacturing, industrial, logistics, warehousing, retail and hospitality.

Humanoid robots will become a geopolitical issue. Speaking on DisrupTV, Constellation Research CEO Ray "R" Wang noted that humanoids are going to be a geopolitical issue just like AI and energy--and the tariffs that go with those categories. Wang said: "This is a game about AI, energy and humanoids. China doesn't care if the population dynamics go in reverse. In fact, they're going to replace everything with humanoids and they have the supply chain. If you're getting a $1,000 robot from China vs. a $5,000 robot from the US and you're on the battlefield you're going to lose every time on the US side. You're seeing protective tariffs come out against the humanoid supply."

Crawford Del Prete, President of IDC, said China could lead on humanoid robots. "When you get into places like China, you've got a very different legal landscape and amount of data that a humanoid can actually record," he said. "And so, they're able to make different kinds of decisions with those humanoid robots, because they're collecting a lot more data. China could end up pretty far ahead here."

Randstad Digital’s Renganathan on data, AI, CX challenges


Raja Renganathan, Chief Growth Officer at Randstad Digital, said enterprises have invested heavily in various platforms, but need to focus on transformation—technical and cultural—to truly break workflow silos with AI.

“Everybody is operating in a silo and the question is 'how do we make an AI first company?'” asked Renganathan.

Randstad Digital focuses on cloud and product engineering, transformation, customer experience and data and AI strategy. The company, a unit of Amsterdam-based Randstad, also offers managed platform services for ServiceNow, Workday, Adobe and Salesforce.

Here are the takeaways from the conversation with Renganathan at Constellation Research’s Ambient Experience Summit:

Digital transformation challenges remain. Companies struggle to fully leverage their digital platforms and integrate new technologies effectively, said Renganathan, an AX100 member. "Customers have invested heavily in these platforms, but they have not used the tools to its best potential,” he added.

Data quality is key to AI success. Clean, consolidated data is essential for successful AI implementation and improving both customer and employee experiences, said Renganathan. AI has been pursued heavily, but many enterprises have had to backtrack to get their data strategy down. "Data is a fulcrum. If the data is not right, then whatever the AI that you're going to implement is not going to help,” he said.

The pressure to adopt AI. There's significant board-level pressure to adopt AI, but most companies are still struggling with effective implementation, said Renganathan. "If you're not talking AI, then you will become irrelevant,” said Renganathan, who noted that there are enterprises that are struggling to put AI to work and deliver returns.

Talent and skills are challenges. Organizations need to focus on continuous skilling and reskilling to prepare for technological transformations. "Do you have the right talent or the skill set to take on projects? That's the biggest problem,” said Renganathan.

Use AI to become more agile in uncertain times. Leaders must focus on transparency, trust, and adaptability to navigate current business challenges. "We are in unprecedented times... we need to be bold. Be transparent, build trust, and ensure adaptability,” said Renganathan.

 

CoreWeave's IPO: What you need to know

CoreWeave filed to go public and it's unclear whether the company's debut on the stock market will be a signal of an AI infrastructure top or just a milepost on a multi-year buildout.

One thing is clear: CoreWeave has crazy revenue growth, depends on two customers and is losing hefty sums.

Welcome to the AI infrastructure boom of 2025 (or is that bust?). The feds, OpenAI and others launched Stargate to invest in AI data centers in the US. TSMC is building in the US as geopolitics and AI infrastructure commingle. Capital spending from the likes of Microsoft, Meta, Alphabet and AWS continues to ramp. There's a fancy new large language model (LLM) daily. And Nvidia puts up record growth that Wall Street is taking for granted. Even bitcoin miners are in on the AI infrastructure boom.

This AI infrastructure boom will work well...until it doesn't. DeepSeek spurred fears that may mean we won't need all of this AI infrastructure. Don't worry though. All the big spenders assure us that the capacity is worth it. Erring on the side of not spending a gazillion dollars on AI infrastructure is the real mistake. We're in the FOMO round for GPUs and AI data centers.

With that backdrop, here's what you need to know about CoreWeave, which will be an IPO worth watching simply for AI infrastructure sentiment.

What is CoreWeave? CoreWeave is an AI infrastructure specialist. The company is in the right place at the right time with AI infrastructure. CoreWeave has more than 250,000 GPUs online, 1.3 gigawatts of contracted power, 32 data centers and $15.1 billion in 2024 remaining performance obligations.

The offering, a rocky start and a lower price. CoreWeave said it will trade on the Nasdaq under the ticker "CRWV." It initially offered 47,178,660 Class A shares, with Class A common stock expected to price between $47 and $55 a share.

And then things got rocky for CoreWeave. Leading up to its IPO, CoreWeave became a referendum on AI infrastructure spending, and the company wound up cutting its offering and the price. Ultimately, CoreWeave offered 37.5 million shares, down from 49 million shares, priced at $40. That $40 price per share was down from the initial expectation of $55.

The issue? The Financial Times reported that CoreWeave breached some terms of its $7.6 billion loan in 2024 and triggered defaults. Blackstone amended terms and waived the defaults. In addition, there are signs that big AI data center spenders may be pulling back on aggressive expansion plans. However, CoreWeave remains the biggest US tech IPO since 2021 with plans to raise $1.5 billion. 

The AI stack. CoreWeave's stack of services is designed for AI workloads. CoreWeave's Cloud Platform is designed for uptime and reducing friction of engineering, assembling, running and monitoring infrastructure for AI workloads. CoreWeave has a Nvidia H100 Tensor Core GPU cluster with Nvidia Blackwell coming online. The infrastructure is designed for training as well as inference. "This market is not all about the big cloud vendors in the AI era, but also about smaller vendors in a good position with alternate offerings," said Constellation Research analyst Holger Mueller. "Smaller vendors usually try to win CxOs over due to the simplicity of their offering. A good example is CoreWeave, specializing on GPUs. We will see how it does commercially when it goes public."

The company said:

"We were among the first to deliver NVIDIA H100, H200, and GH200 clusters into production at AI scale, and the first cloud provider to make NVIDIA GB200 NVL72-based instances generally available. We are able to deploy the newest chips in our infrastructure and provide the compute capacity to customers in as little as two weeks from receipt from our OEM partners such as Dell and Super Micro."

The stack looks like this:

CoreWeave's mission is to utilize compute more efficiently for model training and inference. "We believe the AI revolution requires a cloud that is performant, efficient, resilient, and purpose-built for AI," the company said.

CoreWeave will use acquisitions to build out its stack. The company announced the acquisition of Weights & Biases, a major player in the MLOps and LLMOps ecosystem. The deal will give CoreWeave the ability to manage machine learning and model operations. 

"Our combined capabilities will help you get real-time model performance monitoring and robust orchestration, providing you with a powerful AI application development workflow which can accelerate time to production and get your AI innovations to market even faster," said CoreWeave.   

Mueller said:

"CoreWeave management has understood that CxOs want to buy complete solutions, hence its forays into AI application development and related operations. The result is a turnkey cloud that allows customers to build and operate AI-powered next-generation applications in one offering."

Customers. In its SEC filing, CoreWeave noted that its customers include IBM, Meta, Microsoft, Mistral and Nvidia. The company also announced a multi-year deal with OpenAI. CoreWeave will provide AI infrastructure to OpenAI in a contract valued at $11.9 billion. OpenAI will become an investor in CoreWeave via the issuance of $350 million of CoreWeave stock.

Services offered. CoreWeave offers infrastructure cloud services as well as managed software and application software services, including its Mission Control and observability software.

Debt-funded growth. CoreWeave has financed its expansion with debt--$12.9 billion through Dec. 31, 2024, to be exact. That debt is backed by its assets and multi-year committed contracts. Remaining performance obligations (RPO) stood at $15.1 billion at the end of 2024, up 53% from a year ago.

Blackstone and Magnetar funded CoreWeave's most recent private debt.

CoreWeave's revenue growth is stunning. In 2022, the company had revenue of $16 million. In 2023, revenue was $229 million. And by 2024, revenue had surged to $1.9 billion, up 737% from a year ago. Net losses also surged: in 2024, CoreWeave posted a net loss of $863 million, or $65 million on an adjusted basis.

Expansion. CoreWeave's plan is to capture more workloads from existing customers, extend into new industries, land enterprise customers and grow internationally. CoreWeave also plans to maximize the economic life of its infrastructure; judging from the hyperscalers, simply extending the useful life of servers can dramatically boost earnings. CoreWeave segments customers into AI natives and enterprises.

The risks. CoreWeave's biggest risk is that 77% of its revenue comes from its top two customers. Microsoft was 62% of revenue in 2024. A deal with OpenAI means CoreWeave will likely be dependent on its three top customers. The company said:

"Any negative changes in demand from Microsoft, in Microsoft’s ability or willingness to perform under its contracts with us, in laws or regulations applicable to Microsoft or the regions in which it operates, or in our broader strategic relationship with Microsoft would adversely affect our business, operating results, financial condition, and future prospects.

We anticipate that we will continue to derive a significant portion of our revenue from a limited number of customers for the foreseeable future."

The good news is that CoreWeave's future revenue from OpenAI will bring Microsoft down to less than 50% of revenue. "Microsoft, our largest customer for the years ended December 31, 2023 and 2024, will represent less than 50% of our expected future committed contract revenues when combining our RPO balance of $15.1 billion as of December 31, 2024 and up to $11.55 billion of future revenue from our recently signed Master Services Agreement with OpenAI," the company said. 
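
That math is easy to sanity-check against the filing's own figures. The back-of-the-envelope sketch below uses only the two numbers quoted above; it is an illustration, not company guidance:

```python
# Figures quoted from CoreWeave's filing, in $ billions.
rpo = 15.10          # remaining performance obligations as of Dec. 31, 2024
openai_msa = 11.55   # future revenue from the OpenAI Master Services Agreement

combined = rpo + openai_msa   # total committed future revenue pool
msft_ceiling = combined / 2   # the "<50%" claim caps Microsoft below this

print(f"combined pool: ${combined:.2f}B; Microsoft below ${msft_ceiling:.3f}B")
```

Since the resulting ceiling of roughly $13.3 billion is smaller than the $15.1 billion RPO balance itself, the claim implies Microsoft makes up at most about 88% of the existing RPO even before OpenAI revenue starts flowing.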

Other risk factors include the reality that CoreWeave depends on Nvidia GPU supply, its debt load and access to power. CoreWeave also faces competition from hyperscale cloud players including AWS, Google Cloud, Microsoft Azure and Oracle as well as focused AI cloud providers such as Crusoe and Lambda.

Workato enters agentic AI orchestration fray with Workato One, buys DeepConverse

Workato launched Workato One, a platform designed for agentic AI workflows, along with AgentX Apps, pre-built agents that orchestrate business processes across multiple systems. The company also acquired DeepConverse, which specializes in automated customer support.

The news, announced at Workato's Work to the Power of AI event in New York, also included support for the Model Context Protocol (MCP), adding Workato to a growing list of vendors backing Anthropic's AI agent standard.

Workato One is a stack that revolves around orchestration of enterprise data, apps and processes, as well as managing AI agents. Workato becomes the latest vendor to enter the agentic AI orchestration ring, and there is no shortage of AI agent platforms: Oracle launched AI Agent Studio to create and manage AI agents. ServiceNow's latest release of its Now Platform has a bevy of tools to connect and orchestrate agents. Boomi launched AI Studio. Kore.ai launched its AI agent platform and eyes orchestration. Zoom evolved AI Companion with agentic AI features and plans to connect to other agents. Salesforce obviously has Agentforce.

Here's how Workato One breaks down:

  • Workato Orchestrate focuses on integrating and orchestrating data, applications, processes and user experiences, coupling them with enterprise context and multi-step skills.
  • Workato Agentic is an extension to Workato Orchestrate that builds and manages AI agents.

Holger Mueller, the Constellation Research analyst covering Workato, said the company's move highlights "a strategy, plan, and product that unites the world of AI and orchestration and unlocks the agentic enterprise."

Workato One, which will be integrated with Amazon Bedrock via a partnership with AWS, includes the following:

  • Agent Studio to build, deploy and manage multiple AI agents of all varieties.
  • Agent Hub to create workflows for AI agents.
  • Agent Acumen, which aggregates insights from multiple systems and data.
  • Agent Trust, which includes security and governance across agents and processes.
  • Agent Orchestrator to orchestrate AI agents from CRM, ERP, HR, IT and finance as well as custom agents.
  • AIRO, or AI-driven, Intent-based, Real-time Orchestrator, to understand business problems and intent.
  • MCP, which provides standardized access to pre-built enterprise skills through Anthropic Claude.

In addition, Workato launched AgentX Apps to integrate with various functions. AgentX Apps launch with availability for AgentX Sales, AgentX Support, AgentX IT, and AgentX CPQ.

Separately, Workato said it acquired DeepConverse, which was founded in 2016. DeepConverse will add expertise in search AI and AI support agents to go along with Workato's orchestration platform.

Terms of the deal weren't disclosed.

OpenAI's support puts MCP in pole position as agentic AI standard

OpenAI's support of Anthropic's Model Context Protocol (MCP) may be the start of easier interoperability among AI agents.

The large language model (LLM) giant announced its support for MCP for the OpenAI Agents Software Development Kit (SDK) with plans to enable it for the OpenAI API and ChatGPT Desktop.

Anthropic open sourced MCP in November 2024 to connect AI assistants to the systems where data lives--content repositories, business applications and enterprise environments. The idea behind MCP is to break down data silos and legacy systems for easier integration across connected systems.

For agentic AI, these data silos can be dealbreakers. To date, AI agents are typically pitched by vendors within a specific platform. These vendor visions typically put their own platforms at the center of the enterprise universe, but the reality is that agentic AI will need to traverse multiple systems and platforms. What was missing was a standard to enable AI agents to communicate and negotiate.

Perhaps OpenAI's support will make MCP the standard. What remains to be seen is whether the hyperscale cloud providers and SaaS giants get behind MCP. Given that Anthropic, OpenAI and Microsoft are supporting MCP, it's likely others will have to follow or create dueling standards for AI agent connections.

On X, OpenAI CEO Sam Altman announced the MCP support. MCP also was just updated with an authorization framework based on OAuth 2.1, streamable HTTP transport and support for JSON-RPC batching. Microsoft is also supporting MCP and launched a new Playwright-MCP server that enables AI agents to browse the web and interact with sites.
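
To make the batching change concrete, here is a hedged sketch of what a batched MCP exchange could look like as JSON-RPC 2.0. The method names ("tools/list", "tools/call") come from the MCP spec; the "search_docs" tool and its arguments are invented for illustration:

```python
import json

# Two MCP requests batched into one JSON-RPC 2.0 payload:
# "tools/list" asks the server which tools it exposes, and
# "tools/call" invokes one of them. The tool name here is hypothetical.
batch = [
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "search_docs",
                "arguments": {"query": "quarterly report"}}},
]

payload = json.dumps(batch)   # a single HTTP body can now carry both requests
decoded = json.loads(payload)
assert [req["id"] for req in decoded] == [1, 2]
```

A compliant server would answer with a matching batch of responses, one per `id`, over the streamable HTTP transport.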

Box CEO Aaron Levie, who has been talking AI system interoperability, on X and LinkedIn said the OpenAI MCP support is critical for coordinating across platforms. "As AI Agents from multiple platforms coordinate work, AI interoperability is going to be critical," said Levie.

Constellation Research analyst Holger Mueller said:

"The question of 2025 will be - will the LLM be in charge of its enterprise tooling, or will enterprise software vendors build their own pre-director to go to LLM vs. more traditional deterministic algorithms. The latter will give the vendor (and this the customer) more control & there former is easier from an R&D perspective, but a vendor chosing the LLM route will have invest in seeding tools for multiple LLMs. MCP may become  the standard to simplify this issue."

Databricks forges partnership with Anthropic, adds innovative system to enhance open source LLMs

Databricks inked a five-year partnership with Anthropic to offer Claude models directly through the Databricks Data Intelligence Platform. Databricks also highlighted a system to enhance large language model performance without requiring labeled data.

With the Anthropic deal, Databricks will be able to add Claude 3.7 Sonnet, Anthropic's latest LLM, natively to its platform. Databricks said Anthropic's models can be paired with its own Databricks Mosaic AI models.

The Anthropic models are available on Databricks on AWS, Azure and Google Cloud.

Databricks' deal highlights how data platforms are increasingly looking to add top-shelf models. For instance, Snowflake announced a partnership to add OpenAI's ChatGPT to its platform. The data platform space has seen a flurry of deals and partnerships. SAP and Databricks paired up on SAP Business Data Cloud. IBM acquired DataStax to add to its watsonx platform. Salesforce and Google Cloud also expanded a partnership that includes Data Cloud.

According to Databricks, the plan is to enable Anthropic models to "reason over their enterprise data." Databricks Mosaic AI has the tools to build domain-specific AI agents on unique data. The hope for Databricks is that enterprises will pair up Anthropic and Mosaic AI.

What remains to be seen is how many Databricks customers are already leveraging Anthropic models via AWS and Google Cloud.

Separately, Databricks outlined TAO (Test-time Adaptive Optimization), an approach that enhances LLM performance on a task without labeled data. Real-time compute augments an existing model for tuning. TAO only needs LLM usage data but can surpass traditional fine tuning on labeled examples.
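
Databricks has not published TAO's internals, but the description above suggests a sample-score-tune loop along the following lines. Everything in this sketch is an invented stand-in based on that public description, not Databricks code:

```python
import random

def generate(model, prompt, n=4):
    # Stand-in for sampling n candidate responses from an LLM.
    return [f"{model}:{prompt}:cand{i}:{random.random():.2f}" for i in range(n)]

def score(response):
    # Stand-in for a reward model or self-evaluation signal; note that
    # no labeled examples are needed, only the model's own outputs.
    return float(response.rsplit(":", 1)[-1])

def tao_step(model, prompts):
    # Test-time compute: sample several candidates per prompt and keep
    # the highest-scoring one as tuning data.
    best = [max(generate(model, p), key=score) for p in prompts]
    # A real system would now fine-tune `model` on `best`; this sketch
    # just returns the selected data.
    return best

winners = tao_step("llama-3.3-70b", ["summarize Q4", "draft an email"])
```

The key property the sketch captures is that the tuning signal comes from scoring the model's own usage data rather than from human labels.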

According to Databricks, TAO can enable open-source models to outperform proprietary models.

In a blog post, Databricks outlined how TAO improved performance of Llama 3.3 70B by 2.4%. Although TAO may not push Llama over proprietary models in all categories, Databricks does get the model close.

TAO is available in preview and Databricks said it will be embedded in several products in the future.

Quantinuum, partners create true verifiable randomness, eye quantum computing for cybersecurity

Quantinuum quantum computers have created true verifiable randomness in a project that could be valuable to cybersecurity.

In a paper in Nature, Quantinuum, along with JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory and the University of Texas at Austin, generated true randomness, which is critical to cryptography and cybersecurity. Quantinuum said the latest advance builds on research from Shih-Han Hung and Scott Aaronson of the University of Texas at Austin.

Quantinuum's breakthrough is just the latest in the industry to demonstrate commercial relevance. Earlier this month, D-Wave said its quantum computer outperformed a classical supercomputer in solving magnetic materials simulation problems. D-Wave followed up with a quantum blockchain architecture. IonQ and Ansys said they also outperformed classical computing when designing medical devices.

JPMorganChase noted in a blog post:

"Classical computers cannot create true randomness on demand. As a result, to offer true randomness in classical computing, we often resort to specialized hardware that harvests entropy from unpredictable physical sources, for instance, by looking at mouse movements, observing fluctuations in temperature, monitoring the movement of lava lamps or, in extreme cases, detecting cosmic radiation. These measures are unwieldy, difficult to scale and lack rigorous guarantees, limiting our ability to verify whether their outputs are truly random.

Compounding the challenge is the fact that there exists no way to test if a sequence of bits is truly random."

Quantum computing, by contrast, produces randomness by its very nature, and its output can be verified much faster than classical approaches allow.
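
The "no way to test" point is easy to demonstrate: a completely deterministic pseudorandom stream passes a basic statistical check. This toy illustration (not the paper's protocol) uses Python's seeded Mersenne Twister:

```python
import random

random.seed(0)  # seeded: every "random" bit below is fully reproducible
bits = [random.getrandbits(1) for _ in range(10_000)]

# The stream looks balanced, like a fair coin...
assert 4700 < sum(bits) < 5300

# ...yet it is perfectly predictable: reseeding reproduces it exactly.
random.seed(0)
assert bits == [random.getrandbits(1) for _ in range(10_000)]
```

Certified quantum randomness sidesteps this problem by verifying how the bits were produced, not just how they look.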

Quantinuum said it will introduce a new product that can generate these "random seeds." Using Quantinuum's H2 System, the company has been able to deliver a proof of concept that bridges quantum computing and security.

The company said it will integrate quantum-certified randomness into its commercial portfolio to go along with its Generative Quantum AI and Helios system as well as the hardware roadmap going forward. See: Quantinuum launches generative AI quantum framework, sees quantum computing as synthetic data generator

For Quantinuum, the true randomness breakthrough could give it a key commercial product for enterprises. Helios is in its testing phase and will be available later in 2025. The system is likely to be initially used as part of a cybersecurity portfolio to create a "quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations." 

"The quantum industry is scrambling to show what practical valid use cases it can operate in 2025 and beyond," said Holger Mueller, analyst at Constellation Research. "It is not clear which use cases will emerge first, but this work by Quantinuum and partners, shows that horizontal use cases, like the generation of randomness – maybe the first practical use case of quantum computing."

Nvidia GTC, NEW Constellation Analyst, AI-Powered CSR | ConstellationTV Episode 101

ConstellationTV Episode 101 is here! 📺 Co-hosts Liz Miller and Holger Mueller cover #enterprise news updates, including NVIDIA GTC and Adobe Summit announcements around #AI innovation and agentic solutions.

Next, catch a light-hearted Salon50 interview with Constellation's NEW analyst Michael Ni. You'll get an entertaining introduction to Mike, learn his coverage areas, and hear fun facts about everyone involved!

Finally, R "Ray" Wang interviews IBM's VP & Chief Impact Officer Justina Nixon-Saintil about IBM's mission to use AI for good. This means up-skilling employees, creating personalized learning pathways, and increasing productivity through innovative AI solutions.

Don't forget to watch until the end for bloopers! 👇 

00:00 - Meet the Hosts
00:20 - Enterprise Technology News
14:39 - Meet Constellation VP & Analyst Mike Ni
27:27 - IBM Uses AI for Good
32:00 - Bloopers!

On ConstellationTV: https://www.youtube.com/embed/Zs6HtVXqChE