Results

IBM's AI Data Lakehouse: Bridging Structured & Unstructured Data Insights

Want to see how #AI is making complex #data management look easy?💡 

At IBM Think 2025, Constellation analyst Holger Mueller interviews IBM's Miran Badzak and Edward Calvesbert about how companies can transform their data into a strategic asset.

Watch the full interview to understand:

  • How to unlock 90% of your unused data
  • AI tools that simplify database management
  • Breakthrough technologies reshaping #enterprise intelligence

Learn more here: https://lnkd.in/eUBmtjhx

Watch the full interview on YouTube: https://www.youtube.com/watch?v=hQRxUXsvfzY

Zscaler's master plan: Combine Zero Trust, data fabric and agentic AI

Zscaler has its own plans to consolidate your cybersecurity budget as it branches out from network security to securing data and agentic AI operations.

The company, which is known for its Zero Trust architecture, has been on a tear financially as it fleshes out its vision and makes acquisitions to extend its platform.

CEO Jay Chaudhry laid out the vision at the company's annual Zenith conference this week. "Our strategy is to make sure we secure your data no matter where it is with one policy. When all traffic is going through us when it goes to Internet and knowing that all data leads to the Internet, we are in the best position to really provide holistic data security for us," he said.

That data and network security approach will be very relevant as AI agents proliferate. He said securing AI agents is a natural extension of Zscaler's footprint.

"Zscaler is securing users to have right access to right application. AI agents are able to do same kind of stuff that your people did. I know many call center customers who are using call center agents. There will be other agents like that. That's no different than what we do. We are securing users and with similar technology with some of the additions will secure agents as well," said Chaudhry. "Identity of the agent becomes one piece of it. And there's some other things related to if agents are reaching out to LLMs and some of the other apps. We have some enhancement being made, but we are more natural than anybody else to solve this."

To deliver that vision, Zscaler has been busy.

  • At Zenith, the company launched a set of updates to its Zscaler Zero Trust Exchange platform including a unified appliance for Zero Trust Branch, which secures communications between branches, campuses, factories and various IoT devices.
  • The company also launched its Zero Trust Gateway for Cloud Workloads and Zscaler Microsegmentation for Cloud Workloads. Both efforts secure traffic and data running in hybrid environments.
  • Zscaler also outlined a set of AI tools including AI-powered Data Security Classification, generative AI predictions with prompt visibility, AI segmentation and Zscaler Digital Experience (ZDX) Network Intelligence.
  • The company also said it would acquire Red Canary, which is known for managed detection and response (MDR). Zscaler said the addition of Red Canary will give it automated and agentic workflows that can leverage Zscaler's data on its security cloud and intelligence from its research team.

Chaudhry said that Zscaler's purchase of Red Canary will give it the ability to power next-gen security operations centers (SOCs) with AI agents. Zscaler previously acquired the data fabric piece of the equation when it bought Avalor a little more than a year ago.

"The message here is really building a number of agentic technologies, agentic task agents that can do particular task, perceiving, reasoning, action being taken. It's getting very exciting, and they're all coming together," said Chaudhry.

Strong quarter, lumpy cybersecurity industry

It's early in Zscaler's transformation to expand its total addressable market. The company's latest quarter stood out amid rivals that stumbled. CrowdStrike's quarter was a disappointment and Palo Alto Networks sold off despite a strong quarter.

Zscaler reported a fiscal third quarter net loss of $4.1 million, or 3 cents a share, on revenue of $678 million, up 23% from a year ago. Non-GAAP earnings were 84 cents a share. Those results were well ahead of estimates.

As for the outlook, Zscaler projected revenue of $705 million to $707 million with non-GAAP earnings of 79 cents a share to 80 cents a share.

For fiscal 2025, Zscaler is projecting revenue of $2.659 billion to $2.66 billion.

Chaudhry said the Zscaler platform has more than 50 million users and that is creating a network effect for data. The company's Zero Trust Exchange processed more than 100 trillion transactions in the last year, blocked 60 billion threats and enforced 5 trillion policies.

The game plan for Zscaler is clear: Take that data flywheel, which generates more than 20 petabytes of data, and use it to power the AI agents that'll automate cybersecurity operations.

"While legacy vendors are attempting to cobble together disjointed point products and calling it a platform, we are constantly expanding our core Zero Trust exchange by integrating new functionality to solve more and more of our customers' security concerns," said Chaudhry.

He noted that customers remain cautious about IT spending, but they're interested in taking out costs. The big question is whether Zscaler, Palo Alto Networks or CrowdStrike turns out to be the security budget consolidator over time.

 

MongoDB reports strong Q1 with revenue growth of 22%

MongoDB reported strong first quarter results powered by revenue growth of 22% from a year ago.

The company reported a first quarter net loss of $37.6 million, or 46 cents a share, on revenue of $549 million, up 22% from a year ago. Non-GAAP earnings in the quarter were $1 a share.

Wall Street was expecting MongoDB to report first quarter non-GAAP earnings of 67 cents a share on revenue of $528 million.

Keep in mind that MongoDB's recent quarterly results have been lumpy with hits and misses depending on consumption. MongoDB's outlook fell short of expectations following its fourth quarter results.

CEO Dev Ittycheria said the company is off to a strong start with Atlas revenue growth of 26%. MongoDB also said it had the highest total net customer additions in six years. "We are confident in our position to drive profitable growth as we benefit from this next wave of application development," said Ittycheria.

As for the outlook, MongoDB said second quarter sales will be between $548 million and $553 million with non-GAAP earnings of 62 cents a share to 66 cents a share. Analysts were modeling second quarter non-GAAP earnings of 58 cents a share on revenue of $549.28 million.

MongoDB added that annual revenue will be between $2.25 billion and $2.29 billion, up from its previous guidance $2.24 billion to $2.28 billion. The company said non-GAAP earnings for the year will be between $2.94 a share to $3.12 a share.

During the quarter, MongoDB launched its MongoDB Model Context Protocol (MCP) server, named Mike Berry CFO and launched two new Voyage AI retrieval models.

Constellation Research analyst Holger Mueller said:

"MongoDB is growing nicely, fueled by the need data for AI, delivered in the cloud. Also good to see record new customer additions. The challenge remains for MongoDB to turn a profit, From about $100 million more in revenue, only $43 million made it to reducing its net loss. The good news is Dev Ittycheria kept Sales and Marketing constant, reduced G&A and R&D took in $22 million, which is key in the current period of innovation. Now the question is – can MongoDB repeat the same trick for Q2 and then break out a small profit? The growth engines remain the same for Q2."

AI SRE, Tech Acquisitions, Infrastructure Transformation | CRTV Episode 106

ConstellationTV Ep. 106 is live! This week, co-hosts Larry Dignan and Martin Schneider break down key shifts in #enterprise tech...

💡  Snowflake Summit: A look at leadership’s vision and #AI-powered innovation.
 🤝 Salesforce + Informatica: What the acquisition says about the future of #data and AI strategy.
 🧠 AI Infrastructure: Esteban Kolsky explains why enterprises are moving toward hybrid and private AI models for more control and agility.
 🚀 Startup Spotlight: Meet Ciroos. CEO Ronak Desai shares how the company is reimagining site reliability engineering (#SRE) with AI technology that will reduce incident response times and streamline operations.

Watch the full episode below for insights that matter to #IT leaders, data professionals, and technology decision-makers.

Watch the full episode on ConstellationTV: https://www.youtube.com/watch?v=HBlu3iOF6TE

Amazon revamps supply chain, last mile delivery, warehouses with AI models

Amazon is throwing its AI foundation model weight behind its supply chain as it optimizes routes with SCOT (Supply Chain Optimization Technology). With the move, Amazon is using its generative AI tools to highlight the returns that'll show up in its earnings results.

The company also highlighted its robotics, agentic AI and physical AI efforts.

At an event, Amazon outlined SCOT, which will touch every Amazon package in its supply chain. SCOT has an AI foundation model that powers the supply chain; today the model processes more than 400 million items across 270 different time spans.

Strategically, this supply chain, last-mile delivery and robotics advance with foundation models makes a lot of sense. First, it improves operations and drives real returns for Amazon. Second, it's a nice showcase and first customer reference for Amazon Web Services (AWS).

Amazon CEO Andy Jassy has made a point of highlighting how AI is helping the company's overall operations. At re:Invent 2024, Jassy talked extensively about how continuous improvement in the supply chain can save a few pennies per package that add up to billions of dollars at scale. He also noted robotics and automation advances in warehouses and distribution centers.

Constellation Research CEO R "Ray" Wang was at the Amazon event and noted:

"Amazon is showing the power of Exponential Efficiency. Just like Uber optimized ride batching, dynamic pricing, and route optimization, Amazon is using its data to drive down costs, improve customer experience, reduce delivery times and perfect orders. Digital giants in an AI age have the ability to use their data to create massive operational efficiencies and improve customer experience at machine scale."

Key points about the SCOT model include:

  • SCOT is predicting what customers want before they click the buy button, reducing delivery times by almost a day while lowering carbon emissions.
  • The model predicts where and when customers will want orders delivered. SCOT also recognizes local demand patterns.
  • The supply chain model ingests weather patterns and planned promotions as well as traditional data.
  • Amazon said SCOT enables the company to position inventory closer to customers for fewer miles driven.
  • So far, SCOT has driven a 10% improvement in long-term national forecasts and a 20% improvement regionally.
  • SCOT is live in the US, Canada, Mexico and Brazil. The EU and other countries will go live in the near future.
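
The positioning idea in the list above, scoring regions by historical demand adjusted for local signals like planned promotions, can be sketched as a toy function. This is an illustrative simplification, not Amazon's SCOT model; all names and numbers below are invented.

```python
# Illustrative only: pick the region in which to pre-position an item,
# weighting historical demand by a simple local-signal multiplier
# (e.g. a planned promotion or seasonal pattern).
def position_inventory(item: str,
                       regional_demand: dict,
                       demand_boost: dict) -> str:
    scores = {
        region: demand * demand_boost.get(region, 1.0)
        for region, demand in regional_demand.items()
    }
    # Place inventory where adjusted demand is highest.
    return max(scores, key=scores.get)

demand = {"us-east": 1200, "us-west": 900, "us-central": 1100}
boost = {"us-central": 1.3}  # hypothetical planned regional promotion

print(position_inventory("umbrella", demand, boost))  # us-central
```

Even in this toy form, the local signal changes the answer: raw demand favors us-east, but the promotion-adjusted score moves the inventory to us-central, which mirrors how SCOT is described as folding promotions and weather into traditional demand data.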

Last mile genAI meets physical AI

Amazon also launched last mile generative AI mapping that leverages satellite imagery, road networks, land parcels and building footprints along with delivery scan data and GPS data from past deliveries. All of Amazon's data is used to produce a model of where packages will be dropped off.

The company said that advances in foundational models allowed it to scale reasoning and perception across petabytes of data to fine tune without humans.

In October 2024, Amazon launched the first version of the mapping technology and was able to map more than 2.8 million apartment addresses as well as 4 million parking locations.

This data fed a more accurate geospatial model across the US that will now scale by 10x in 2025. Key items include:

  • AI mapping helps Amazon drivers navigate university campuses and office complexes by identifying optimal parking locations.
  • By November, Amazon is on track to refine apartment address-to-building mappings for more than 11 million apartments across 700,000 campuses.
  • Amazon said that it will have learned more than 130 million delivery locations, 200 million parking spots and 800,000 building entrances.

Agentic AI and robotics

Amazon also outlined how Project Vulcan is combining robotics and agentic AI.

The project enables robots to hear, understand natural language and act autonomously.

According to Amazon, the goal is to create systems of robots instead of specialists so they will be more valuable in warehouse roles as assistants.

The company noted that it's moving "beyond one brain per robot" to fine-tuning a single large model that works across multiple robots, tasks and sensory inputs.

OpenAI's enterprise business surging, says Altman

OpenAI CEO Sam Altman said the company's enterprise unit is doing well as businesses continue to invest in large language models and increasingly AI agents.

Speaking at Snowflake Summit 2025, Altman said the enterprises that have learned to iterate quickly are doing best. The challenge for enterprises is that AI is changing so quickly and that usually favors the agile, said Altman, who appeared on stage with Snowflake CEO Sridhar Ramaswamy.

"There's still a lot of hesitancy, and the models are changing so fast, and there's always a reason to wait for the next model," said Altman. "But when things are changing quickly, the companies that have the quickest iteration speed, make the cost of mistakes low and have a high learning rate win."

He added that enterprises are clearly making early bets.

Altman said that a year ago, he would have recommended startups run toward generative AI and enterprises should wait for more maturity and opt for pilots over production. Today, generative AI is more mainstream and OpenAI's enterprise business is seeing strong demand.

"Big companies are now using us for a lot of stuff. What's so different? They say it just took a while to figure it out. That's part of it. But the models just works so much more reliably. It does seem like sometime over the last year we hit a real inflection point for the usability of these models," said Altman.

Altman's comments landed a few days ahead of OpenAI's rollout of connectors to Dropbox and OneDrive for ChatGPT Team, Enterprise and Education users. The company also said Model Context Protocol (MCP) support is coming to Pro, Team and Enterprise accounts. 

OpenAI said it has 3 million paying business users, up from 2 million in February. 

Altman added:

"I think we'll be at the point next year where you can not only use a system to automate products and services, but the models will be able to figure out things that teams of people on their own can't do. And the companies that have gotten experience with these models are well positioned for a world where they can use an AI system to solve the most critical project. People who are ready for that, I think will have another big step change next year."

According to Altman, LLMs are more like interns today, but at some point soon they will be "more like an experienced software engineer."

"You hear about companies that are building agents to automate most of their customer support, sales and any number of other things. You hear people who say their job is to assign work to agents and look at quality and see how it fits together as they would with a team of relatively junior employees. It's not evenly distributed yet, but that's happening. I would bet next year that in some limited cases, at least in some small ways, we start to see agents that can help us discover new knowledge or can figure out solutions to business problems that are non-trivial. Right now, enterprises are focused on repetitive cognitive work to automate over a short time horizon. As that expands to longer time horizons and higher and higher levels, you get an AI scientist, an AI agent that can discover new science. That will be a significant moment in the world."

Other takeaways from Altman:

The ideal model. Altman said the ideal is "a very tiny model that has superhuman reasoning capabilities." "It can run ridiculously fast and 1 trillion tokens of context and access to every tool you can possibly imagine. And so it doesn't kind of matter what the problem is. Doesn't matter whether the model has the knowledge or the data in it or not," said Altman, who noted that framework isn't something that OpenAI is about to ship.

Altman added that using the models as a database is "sort of ridiculous" and expensive.

Prioritizing compute. Altman said that enterprises using the latest models are seeing real returns and you could solve hard problems with unlimited compute, but that's not realistic. Companies will get to the point where they will "be willing to try a lot more compute for the hardest problems and most valuable things," said Altman.

Nvidia releases MLPerf Training results as it ramps AI factories

Nvidia said its GB200 NVL72 rack scale systems outperformed Hopper by a wide margin based on MLPerf Training submissions across categories.

The company outlined the benchmark as its Blackwell instances--GB200 NVL72--are now generally available from Microsoft Azure, CoreWeave and Google Cloud with more providers on deck.

Nvidia was the only platform that submitted for all benchmarks including the new Llama 3.1 405B pre-training test. Nvidia was also out to show the benefits of its complete stack with fifth-gen NVLink and NVLink Switch delivering 2.6x more training performance per GPU compared to Hopper.

Here's a look at the results, which can be found at MLCommons.org. The MLCommons Association, which oversees MLPerf Training, developed the benchmark to provide a standard measure for AI workloads. The results are peer reviewed.

In a 2023 presentation on Hopper performance, Microsoft Azure was predominantly featured. This version of MLPerf Training performance was more about the AI factory. Constellation Research analyst Holger Mueller said:

"Nvidia once again has shown that it provides the best multi-cloud and on premise AI architecture for all three critical training use cases – pre-training, post training and test time scaling. Adoption across cloud and hardware vendors remains impressive. Apart from Oracle, the big three cloud providers are not part of the current Nvidia presentation. Microsoft Azure was a key feature back in fall of 2023. It is too early to overinterpret any of these changes in the presentation – but all three major cloud providers also provide their inhouse AI chip architectures."

The AI factory strategy

The MLPerf Training metrics are just part of Nvidia's overall AI Factory strategy. Nvidia CEO Jensen Huang has repeatedly argued that AI factories are a trillion-dollar opportunity that will replace traditional data centers. Every company will eventually have an AI factory to power operations.

According to Nvidia, AI factories will generate revenue as we move to a token-based data economy. The job for Nvidia is to build the optimal integrated stack of GPUs, compute, networking and storage to efficiently scale to gigawatt AI factories. Nvidia's ultimate moat is likely to be its application stack, which features everything from models, AI agents and digital twins to an enterprise suite.

These AI factories will come in four flavors: Cloud, enterprise, sovereign AI and industry-focused.

Nvidia will need to show performance gains annually to justify its annual cadence for AI infrastructure. Here's what's on tap.

  • Blackwell (2025): Current generation with GB200 and GB300 variants
  • Rubin/Rubin Ultra (2026): Next generation
  • Kyber (2027): Future architecture
  • Feynman (2028): Long-term roadmap

HPE Q2 solid due to AI demand, hybrid cloud

Hewlett Packard Enterprise delivered better-than-expected second quarter earnings as it saw strong demand for its AI servers and hybrid cloud.

The company reported a second quarter net loss of 82 cents a share due to a goodwill write down. HPE reported second quarter non-GAAP earnings of 38 cents a share on revenue of $7.6 billion, up 6% from a year ago.

Wall Street was expecting HPE to report second quarter earnings of 33 cents a share on revenue of $7.5 billion.

HPE's first quarter results fell short of expectations and the company has delivered less AI growth than Dell Technologies. See: Dell Technologies continues to ride AI infrastructure wave with strong Q1

CEO Antonio Neri said, "in a very dynamic macro environment, we executed our strategy with discipline." CFO Marie Myers said the company is focused on streamlining operations and meeting its guidance for fiscal 2025.

By the numbers:

  • HPE server revenue in the second quarter was $4.1 billion, up 6% from a year ago.
  • Intelligent edge revenue was $1.2 billion, up 7% from a year ago.
  • Hybrid cloud revenue was $1.5 billion, up 13% from a year ago.

As for the outlook, HPE said third quarter revenue will be between $8.2 billion and $8.5 billion with non-GAAP earnings of 40 cents a share to 45 cents a share. For fiscal 2025, HPE said revenue growth will be 7% to 9% in constant currency with non-GAAP earnings of $1.78 a share to $1.90 a share.

Constellation Research analyst Holger Mueller said:

"HPE had a solid quarter growing across its offering portfolio. Management decided to ‘hide’ the solid numbers with an impairment charge of the goodwill of its hybrid cloud portfolio, painting the quarter red. It feels more like a strategy to prepare for better quarters and take the charge now. The pressure on Q3 and Q4 will rise. The 2% average higher discounting should not come as a surprise in times of higher uncertainty and more challenging economic conditions."

Neri said the following on HPE's second quarter earnings call:

  • "Through focused and disciplined execution, we have addressed the operational challenges we experienced in our Server segment last quarter. We expect these actions will contribute to margin improvement through fiscal year-end."
  • "The IT industry continues to navigate significant uncertainty brought on by tariffs, the AI diffusion policy withdrawal and broad macroeconomic concerns. While this led to uneven demand during the quarter, we did not benefit from significant order pull-ins. We ended Q2 with a stronger pipeline compared to Q1."
  • "I want to reinforce our commitment to closing the Juniper Networks transaction. We expect the proposed transaction will deliver at least $450 million in annual run rate synergies to our shareholders within 36 months of closing the transaction. The deal will help both companies deliver a modern secure AI-driven edge-to-cloud portfolio of networking products and services. We continue to expect to close the transaction before the end of fiscal year 2025."
  • "We reduced inventory by $500 million. We believe that the remaining actions will be addressed through the back half as we convert more revenue. In Q3, we're going to convert a very large deployment that we expect to be completed here soon."
  • "A third of our orders in AI being now enterprise driven. So that's a very strong momentum there. It's driven by our servers."

CrowdStrike Q1, Q2 outlook mixed

CrowdStrike delivered mixed first quarter results and second quarter outlook.

The company reported a first quarter net loss of $110.2 million, or 44 cents a share, on revenue of $1.1 billion, up 20% from a year ago. Non-GAAP earnings were 73 cents a share.

Wall Street was expecting CrowdStrike to report non-GAAP earnings of 66 cents a share on revenue of $1.11 billion.

As for the outlook, CrowdStrike projected second quarter revenue of $1.14 billion to $1.15 billion. Wall Street was looking for $1.16 billion in revenue for the second quarter. Non-GAAP earnings for the second quarter will be 82 cents a share to 84 cents a share. For fiscal 2026, CrowdStrike projected revenue of $4.74 billion to $4.8 billion with non-GAAP earnings of $3.44 a share to $3.56 a share.

The company said it authorized up to $1 billion in share buybacks. CrowdStrike shares are up more than 58% from a year ago.

Despite the results that disappointed Wall Street, CrowdStrike delivered a strong quarter. CEO George Kurtz said the company started the fiscal year with a large deal and saw strong net retention. "The scale of Falcon Flex demand and the pace of innovation across AI, next-gen SIEM, cloud, identity, and exposure management advances us towards $10 billion in ending ARR," said Kurtz.

CFO Burt Podbere said CrowdStrike was seeing customers consolidate on Falcon Flex and the company had a strong pipeline for the second half of fiscal 2026.

Kurtz said the following on the earnings conference call:

  • "In less than 2 years since starting Falcon Flex, we've closed more than $3.2 billion of total account deal value across more than 820 accounts that have adopted the subscription model."
  • "A lot of what we're doing with customers is going through the demand plan and our business value assessment, and that's really where we can talk about how we can replace other point products. So typically, the conversation will look at customer road map. They'll look at certainly our road map and the products we have in the 30 modules. And then we'll begin to plan the phased rollout of our products to replace what they have."
  • "When we think about generative AI and really, what I'd call autonomous agents, they have the same needs, but they're superhuman. They have access to data. They have identities. They have access to systems outside of their own environment. They have workflows. They take action. So it's building those guardrails and then instrumenting the visibility and protection across the entire AI workflow. And every agent, and there could be billions of agents, are going to need protection."

Ciroos raises $21 million: Here's a look at the strategy via CEO Ronak Desai

Ciroos raised $21 million to deliver an agentic AI teammate for AI, DevOps and operations teams to automate and cut incident response times by 90%. What's interesting about Ciroos is that it is looking to address gaps in observability and its approach wouldn't have been possible without agentic AI.

The company is looking to solve a big problem for site reliability engineers (SREs)--it's almost impossible to keep up with operations across multiple applications, domains, architectures, and tools, including static runbooks and dashboards. And as enterprises move to AI agents, keeping up with operations is even more challenging. Energy Impact Partners led the funding.

Ronak Desai, co-founder and CEO of Ciroos, said the company built its AI SRE Teammate to "end the toil" for SREs by accelerating root cause identification, automating actions, and giving back time and control to build reliable systems.

Ciroos is betting that it can reimagine observability operations with its AI SRE Teammate to start investigations into anomalies before an expert is paged. Ciroos is using a multi-agent system that correlates data and uses reasoning to identify what is and isn't a problem for operations. According to the company, Ciroos's SRE Teammate supports Model Context Protocol (MCP) and Agent2Agent (A2A) architectures and will integrate with existing observability applications, ticketing systems, collaboration tools, code repositories, and incident response tools.
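
The correlation step in a system like this, gathering anomaly signals from several tool-specific sources and grouping the ones that cluster in time before handing them to a reasoning model, can be sketched in miniature. This is a hypothetical illustration of the general technique, not Ciroos's implementation; the signal shapes and the five-minute window are invented.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: group anomaly signals from different observability
# sources (metrics, logs, traces) when their timestamps fall within one
# time window, so a downstream reasoning step sees one correlated incident
# instead of many disconnected alerts.
def correlate(signals: list, window: timedelta) -> list:
    ordered = sorted(signals, key=lambda s: s["ts"])
    groups, current = [], []
    for sig in ordered:
        # Start a new group when this signal is outside the current window.
        if current and sig["ts"] - current[0]["ts"] > window:
            groups.append(current)
            current = []
        current.append(sig)
    if current:
        groups.append(current)
    return groups

t0 = datetime(2025, 6, 1, 2, 0)  # the 2 am page
signals = [
    {"source": "metrics", "ts": t0,                         "msg": "p99 latency spike"},
    {"source": "logs",    "ts": t0 + timedelta(seconds=40), "msg": "db connection errors"},
    {"source": "traces",  "ts": t0 + timedelta(minutes=30), "msg": "retry storm"},
]
groups = correlate(signals, window=timedelta(minutes=5))
print(len(groups))  # 2: the latency spike and db errors correlate; the retry storm stands alone
```

A real system would correlate on far more than timestamps (topology, service dependencies, trace IDs), but the shape is the same: collapse cross-tool noise into a small number of candidate incidents before any human, or reasoning model, looks at them.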

The company, founded in February by Desai, Amit Patel, and Ananda Rajagopal, has a strong pedigree. Desai was the senior vice president and general manager of Cisco's Observability and AppDynamics unit and also led Cisco's Cloud Networking Engineering. Patel, CTO of Ciroos, was vice president of engineering at Cisco AppDynamics. Rajagopal, chief product officer at Ciroos, was vice president of product at Cisco AppDynamics and held leadership roles at AWS, Gigamon, and Brocade.

Here are the takeaways from my chat with Desai.

What's the vision for the company? "We're building an AI SRE Teammate to help our operations teams," said Desai. "If you think about modern IT infrastructure, and you have to investigate outages, there are hundreds of experts that need to get involved with lots of tools and dashboards. It takes, on average, two hours to resolve. Ciroos wants to end all of that and give the time back."

He added that the goal is to use agentic AI across multiple domains and vast amounts of data to reduce investigation time to minutes, and to take action with human-like reasoning.

Start with solving a problem. Desai saw the SRE challenges upfront at Cisco's data center group. "I would hear from enterprise customers saying they have lots of tools and in some cases hundreds of tools, not counting security," said Desai. "There's a siloed way of looking at tools and dashboards without correlated insights. If you think about what's happening with AI coding tools and developer productivity, the same thing can happen for SREs."

Once reasoning models became popular, it was clear that Ciroos could solve for cross-domain correlation problems. "The ability to solve that problem with the technology it was a perfect combination for us to get started," said Desai. "What we're doing wouldn't have been possible just using early large language models of 2023 and 2024."

Where Ciroos fits. Desai said its main competition is manual labor, more than existing observability tools. "We do not compete with any of the observability tools," said Desai. "We are competing with manual labor and the toil SREs have. They are looking at hundreds of dashboards and trying to figure everything out and the cognitive load is too much for humans. We're going after that problem and still keeping humans in the loop."

Desai said Ciroos can bring SREs the equivalent of hundreds of experts, with insights ready for them when they are woken up at 2 am. Ciroos's SRE Teammate is designed to automate and speed up investigations and proactively investigate issues. "We want to give SREs all the information they need, the root cause analysis and implement remediation," said Desai.

Integrations. Desai said Ciroos SRE Teammate integrates the ecosystem of tools across observability, incident management and ticketing to extract the right set of information from logs, metrics, tracing, and events. "When your human SRE expert gets on the call to investigate an incident, they’re looking at not only historical data but all the signals connected to the live system," said Desai. "We're putting all of that information into a reasoning model that can narrow down problems and determine what's critical."

ROI. Desai quipped that sleep is the best measure of uptime for SREs. "We're really looking to avoid that manual toil and the tasks which we do not want our human SREs to do," said Desai. Cutting down the time spent narrowing down problems, war room calls, dashboard diving, and investigations is the real ROI. The goal is to cut long investigation windows down to minutes.

Extensibility. Ciroos is built to leverage new advances in models as they arrive. Ciroos will also be extensible as a way to adapt to new technologies. "Out of the box, we built in agent-to-agent capability where we can hook it into an agent that the customer has deployed or developed on their own," said Desai. "We are working with open and extensible ecosystems so we can work with what the customer's environment is."

What's next? Desai said Ciroos is building out its product for a launch soon and working on its agents with customers as well as inviting other early adopters. The company is also building out multi-domain knowledge for the platform. "The goal is to reduce human toil from SREs so that we can give them the sleep and time back and build a scalable system," said Desai. "We're focusing on the problems that matter to enterprises."

 
