#ZohoDay2024 Customer Interviews: Matt Kuczka, Luxer One

Liz Miller talks with Matt Kuczka, lead applications analyst at Luxer One, about the impact of Zoho Analytics 📊 on supply chain issues for their #enterprise, including accurate and clean #data points, better measurement capabilities, and automated #customer reports.

Today, Luxer One has removed the bulk of data speculation and enabled #accessibility and data-driven decision-making across their enterprise.

👉 If you're considering adopting Zoho #analytics tools, watch the full interview to learn more from Matt's experience...

On <iframe width="560" height="315" src="https://www.youtube.com/embed/dYwQBdjq-fw?si=4wrip6V39Dj9y3bT" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

#ZohoDay2024 Customer Interviews: Keith Cooper, Bergen Logistics

Dion Hinchcliffe talks with Keith Cooper of Bergen Logistics about how Zoho tools solved lead tracking issues in their enterprise and transformed their customer journey, issue resolution times, sales enablement data, and overall communication across their enterprise teams.

If you're considering adopting the Zoho Suite, watch the full interview about Keith's onboarding experience and continued success at Bergen Logistics.

On <iframe width="560" height="315" src="https://www.youtube.com/embed/8aL8-fMT8N4?si=8JrQuo-vIb2VDdRa" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

#ZohoDay2024 Customer Interviews: Rob O'Brien, ITV

Holger Mueller talks with Rob O'Brien, Head of International Technology for ITV 📺 about how Zoho technology transformed their unstructured #risk data systems through efficient and customizable #storage management that enabled ITV to connect the dots and make data-driven decisions.

If you're considering adopting Zoho #datamanagement systems, watch the full interview to learn more about Rob's experience...

On <iframe width="560" height="315" src="https://www.youtube.com/embed/xMu_unYOlcE?si=gHAYFqpGy8LYHZD4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

Broadcom posts solid Q1, reiterates outlook

Broadcom's first quarter results were better than expected, as technology buyers closely watch the company's comments about retaining VMware customers. Broadcom's chip business continues to benefit from the AI buildout.

The company reported first quarter earnings of $1.32 billion, or $2.84 a share, on revenue of $11.96 billion, up 34% from a year ago. Non-GAAP earnings for the first quarter were $10.99 a share. Revenue growth excluding VMware was 11% in the first quarter.
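Those two growth figures imply roughly how much of the quarter's revenue came from VMware. The sketch below is a back-of-envelope Python calculation using only the percentages stated above; it assumes VMware was consolidated for the full quarter and ignores rounding in the reported figures, so treat the output as illustrative only.

```python
# Rough sanity check of Broadcom's reported Q1 figures (illustrative only).
q1_revenue = 11.96      # $B, reported Q1 revenue
yoy_growth = 0.34       # 34% total year-over-year growth
organic_growth = 0.11   # 11% growth excluding VMware

prior_year = q1_revenue / (1 + yoy_growth)       # implied year-ago quarter revenue
organic = prior_year * (1 + organic_growth)      # implied revenue without VMware
vmware_contribution = q1_revenue - organic       # implied VMware revenue in the quarter

print(f"Implied year-ago revenue: ${prior_year:.2f}B")
print(f"Implied VMware contribution: ${vmware_contribution:.2f}B")
```

The actual VMware contribution depends on consolidation timing within the quarter, which this simple calculation ignores.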

Wall Street was expecting Broadcom to report first quarter earnings of $10.42 a share on revenue of $11.72 billion.

Broadcom CEO Hock Tan said the acquisition of VMware is "accelerating revenue growth in our infrastructure software segment, as customers deploy VMware Cloud Foundation." He also noted that networking demand was strong as data centers retool for AI.

While Tan talked up VMware, signals in the field are less than positive.

Broadcom reiterated its fiscal 2024 guidance of revenue of $50 billion and adjusted EBITDA of $30 billion.

In the first quarter, semiconductor revenue was 62% of the total, with infrastructure software (primarily CA and VMware) at 38%.

Tan said VMware will grow at a double-digit percentage rate sequentially. "This is simply a result of our strategy with VMware," said Tan. "We are focused on upselling customers, particularly those who are already running their compute workloads on vSphere virtualization tools, to upgrade to VMware Cloud Foundation otherwise branded as VCF."

"VCF is the complete software stack, integrating compute, storage and networking, that virtualizes and modernizes our customers' data centers. This on-prem self-service cloud platform provides our customers with a compelling alternative to the public cloud."

In other words, Tan's bet is a riff on "if you build it, they will come": upsell VMware and customers will follow. Tan also said that AI workloads will increasingly run on-premises for cost savings, and that means VCF.

Tan was asked, with a bit of skepticism, about VMware upselling. Tan said:

"We're very focused on selling, upselling and helping customers to not just buy but deploy this private cloud, but what we call virtual private cloud solution or platform, on their on-prem data centers. It has been very successful so far, and I agree that it's early innings at this point. We've been very prepared to launch this push on private cloud."

Tan was also asked about VMware's annual revenue run rate and agreed that the company is still at an $11 billion to $12 billion pace. Analysts were focused on VMware questions even after starting with AI questions.

Tan said Broadcom is focusing on go-to-market and engineering VCF so it is more easily deployed. He added that the focus is on about 1,000 strategic customers that will be on-premises and leveraging hybrid deployments. "Most of these customers do not have an on-prem data center that resembles what's in the cloud, which is very high availability, very low latency, and highly resilient," said Tan. "What we are offering with VCF replicates what you get in the public cloud and we are seeing it in the level of bookings."


MongoDB Q1, fiscal year outlook light, but eyes stable workload gains

MongoDB's outlook for the first quarter and fiscal year was lighter than expected even as its fourth quarter results were strong.

First, the good news. MongoDB reported a fourth quarter net loss of $55.5 million, or 77 cents a share, on revenue of $458 million, up 27% from a year ago. Non-GAAP earnings for the fourth quarter were 86 cents a share. MongoDB was expected to report fourth quarter earnings of 48 cents a share on revenue of $435.55 million.

As for the outlook, MongoDB projected first quarter revenue of $436 million to $440 million with non-GAAP earnings of 34 cents a share to 39 cents a share. Wall Street was looking for revenue of $449.08 million with non-GAAP earnings of 61 cents a share.

For fiscal 2025, MongoDB projected revenue of $1.9 billion to $1.93 billion with non-GAAP earnings of $2.27 a share to $2.49 a share. Wall Street was expecting annual sales of $2.03 billion with non-GAAP earnings of $3.22 a share.

MongoDB's outlook landed a few days after Snowflake took a hit on its outlook and CEO change.

In a statement, MongoDB CEO Dev Ittycheria said the company "will continue to invest in our key product development and go-to-market initiatives." The company ended its fourth quarter with more than 47,800 customers and 2,052 customers paying more than $100,000.

“MongoDB’s results reflect the belt tightening and slower growth we’re seeing across the mainstream IT market outside of the white-hot pockets tied to AI,” said Doug Henschen, VP and principal analyst at Constellation Research. “Nonetheless, MongoDB turned in another quarter of steady, double-digit growth and has plenty of relational migration and net new customer win opportunities to continue on its current healthy growth path.”

For fiscal 2024, MongoDB reported a net loss of $176.6 million, or $2.48 a share, on revenue of $1.68 billion, up 31% from a year ago. Non-GAAP earnings were $3.33 a share.

On a conference call with analysts, Ittycheria said he saw solid and stable consumption growth.

"Overall we are pleased with our performance in the fourth quarter. We had a healthy quarter of new business led by continued strength in new workload acquisition within our existing Atlas customers. I see stable consumption growth going into next year. Consumption trends have been steady for several quarters now."

However, Ittycheria noted that it's early in the AI application buildout and enterprises need to move beyond pilots to ramp consumption at scale.

He laid out the progression. 

"I strongly believe that AI will be a significant driver of long term growth for MongoDB. We are in the early days of AI, akin to the dial-up days of the Internet era. To put things in context, it's important to understand that there are three layers to the AI stack. The first layer is the underlying compute and LLMs. The second layer is the fine tuning of models and building of AI applications. And the third layer is deploying and running applications that end users interact with. MongoDB's strategy is to operate at the second and third layers."

"Today, the vast majority of AI spend is happening at the first layer, that is, investment in compute to train and run LLMs, neither of which are areas in which we compete. Our enterprise customers today are still largely in the experimentation and prototyping stages of building their initial AI applications."

"We expect it will take time for enterprises to deploy production workloads at scale."

"Platforms like MongoDB will benefit as customers build AI applications to drive meaningful operating efficiencies, create compelling customer experiences, and pursue new growth opportunities."

The short version is that MongoDB expects the fiscal 2025 consumption and adoption trends to be similar to fiscal 2024. Ittycheria said MongoDB will focus on workload acquisition, building out its go-to-market operations, and honing its migration game from relational databases. "This year we are investing in a number of pilots leveraging AI for relational migrations paired with services to substantially simplify and scale the process," he said.
 


AI is Changing Cloud Workloads, Here's How CIOs Can Prepare

For years, the promise of the public cloud has been the primary end-game for IT infrastructure, offering enterprises the most flexible and scalable platform for their SaaS applications. These applications, characterized by their inherently dynamic nature, typically experience significant fluctuations in usage, a key aspect that public cloud was explicitly designed to address. However, times are changing and AI is now poised to upend cloud economics by altering the very nature of workloads, just as cloud spend becomes the top issue for CIOs when it comes to IT infrastructure.

Traditional Web workloads have long exhibited spiky usage patterns, with traffic surging during peak hours (e.g., Black Friday sales) and plummeting during off-peak times and unexpected events. This unpredictable demand curve has aligned perfectly with the on-demand nature of cloud resources: businesses can easily scale their cloud instances up or down to meet the fluctuating needs of their Web applications, paying only for the resources they utilize. However, the overall complexion of cloud workloads has recently shifted due to generative AI, throwing capacity planning into flux. A widely followed 2023 study by Flexera found that optimizing cloud costs has become the top priority of cloud teams (64% of respondents), as they struggle with both workload forecasting and the growing impact of AI on their cloud compute mix.

So there's little question now: The arrival of generative AI has officially thrown a wrench into this well-established dynamic. Unlike Web applications, which exhibit intermittent bursts of high compute demand, AI models are insatiable consumers of continuous compute power, requiring consistent and substantial resources throughout both their training and operational phases. Training a large language model, for instance, can devour vast amounts of compute power for weeks or even months on end, relentlessly pushing the boundaries of available processing capacity. This long-term hunger for compute resources has ignited a fierce competition among cloud providers, each vying to be the leader in AI in the cloud. Evidence for this abounds: demand for GPUs, the chips that provide most of the capacity for training and running AI models, has recently propelled the stocks of AI chip providers like NVIDIA into historic territories, and the trend is just beginning.

Cloud Consumption in Generative AI Era - Where Workloads Run Best

The top three hyperscalers, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), are each aggressively optimizing their public cloud offerings for AI workloads, investing heavily in both AI cloud capacity and their own custom AI chips, including Google's well-established Tensor Processing Unit (TPU), AWS's new Trainium2 chip, and Microsoft's upcoming Maia 100 AI processor, each of which competes with Nvidia's data center-friendly H200. These specialized chips offer significantly improved performance and efficiency for AI workloads compared to traditional CPUs. Notably, of these only the H200 will be widely available for use within private enterprise data centers, making AI chips an emerging risk factor for a new type of cloud lock-in and adding to the concerns of CIOs seeking to come to grips with this new cloud landscape.

While this relentless pursuit of ever-greater AI performance promises significant advancements in various fields, it comes at a substantial additional cost. Hyperscalers must factor in the research and development expenses associated with creating custom AI chips and cloud-based AI tools, alongside the razor-thin profit margins typical of the cloud industry. Additionally, the staggering infrastructure required to support these highly demanding AI workloads translates into significant long-term capital expenditure for cloud providers. Ultimately, these costs are passed on to the cloud customer in the form of service fees, raising a critical question: as AI workloads continue to grow in complexity and resource demands, is the public cloud the most cost-effective solution for most AI workloads in the long run? This is the fundamental question today as AI becomes a growing share of overall compute utilization.

AI Greatly Transforms Cloud Economics

The shift towards AI workloads throws a stark light on the limitations of traditional cloud pricing models designed for bursty web applications. Unlike CPUs, which can be easily ramped up or down, GPUs, the workhorses of AI training and inference, are a different beast altogether. These specialized processors excel at parallel processing, making them ideal for the computationally intensive tasks involved in AI. However, unlike CPUs that can be idled during downtime, GPUs are most cost efficient when constantly utilized. Cloud providers typically offer GPU instances with per-second billing, and they argue that their special chips and "paying only for what you use" deliver the highest net cost efficiency for AI workloads. In practice, businesses do pay just for the AI workloads they need, but they still face increasingly large overhead cost components that were easier to hide when the cloud was a smaller industry, now including the custom silicon for AI that each hyperscaler designs and builds itself, even as some also leverage third parties like NVIDIA.
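To see why utilization dominates this calculus, here is a minimal back-of-envelope sketch in Python. All prices and the amortization period are hypothetical placeholders, not actual vendor rates; the point is simply that owned GPU hardware only beats on-demand cloud pricing once sustained utilization is high.

```python
# Back-of-envelope: public-cloud on-demand GPU vs. an owned/private GPU node.
# All figures are hypothetical placeholders for illustration.

ON_DEMAND_PER_HOUR = 4.00          # $/GPU-hour, hypothetical on-demand rate
PRIVATE_NODE_COST = 25_000.0       # $ up-front per GPU (hardware + setup), hypothetical
PRIVATE_OPEX_PER_HOUR = 0.50       # $/GPU-hour power + ops, hypothetical
AMORTIZATION_HOURS = 3 * 365 * 24  # amortize hardware over roughly 3 years

def hourly_cost_private(utilization: float) -> float:
    """Effective $/useful-GPU-hour for owned hardware at a given utilization (0-1]."""
    amortized = PRIVATE_NODE_COST / AMORTIZATION_HOURS
    return (amortized + PRIVATE_OPEX_PER_HOUR) / utilization

for util in (0.10, 0.50, 0.90):
    private = hourly_cost_private(util)
    cheaper = "private" if private < ON_DEMAND_PER_HOUR else "public cloud"
    print(f"utilization {util:.0%}: private ~${private:.2f}/useful hour -> {cheaper} wins")
```

At low utilization the idle hardware's amortized cost swamps the on-demand rate, while an always-on training or inference workload flips the comparison, which is exactly the dynamic described above.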

Furthermore, the ever-evolving nature of AI training necessitates a continuous cycle of experimentation and improvement. Businesses are no longer dealing with static applications; whether they realize it or not, they're now engaged in a perpetual competitive race to develop and refine better AI models. This ongoing process requires readily available compute resources for training and fine-tuning, pushing IT departments to grapple with the financial implications of constantly running AI workloads in the public cloud. The high bar for entry in terms of infrastructure investment and ongoing operational costs associated with large-scale AI training is creating a fertile ground for alternative solutions.

Cloud Consumption in Generative AI Era - Cost Components of Workloads

The take-away: Any certainty that the public cloud was the best place for all AI workloads has greatly receded. CIOs are now considering all their options, which yes, still include the hyperscalers' AI services, but also specialty cloud providers, AI training service bureaus, and private GPU clouds.

Perhaps the most disruptive trend in this evolving AI landscape is the rise of a new class of cloud providers specifically designed to cater to the unique needs of AI workloads. Smaller but more nimble players like Vultr and Paperspace are carving out a niche by offering cloud instances optimized for GPU workloads. These providers often leverage economies of scale by utilizing custom hardware and innovative pricing models that align billing more closely with actual compute usage. Additionally, larger enterprises are increasingly exploring private cloud deployments as a means to maximize control over their AI infrastructure and optimize FinOps (financial management of the cloud), including the burgeoning practice of FinOps for AI. By bringing AI workloads in-house, businesses aim to squeeze every penny out of cost overhead and gain greater flexibility and strategic autonomy in managing their ever-growing compute needs for AI training and operations. This shift towards private and specialized cloud solutions suggests a potential bifurcation within the cloud computing market, with established hyperscalers potentially facing pressure from more targeted, cost-efficient alternatives.

Navigating the AI Cloud Conundrum: A Roadmap for CIOs

The future of cloud computing is undeniably intertwined with the relentless rise of AI. However, for CIOs, this presents a strategic conundrum. Public cloud providers offer unparalleled scalability and access to cutting-edge AI tools, but their cost structures are often ill-suited for always-on, high-performance AI workloads. The path forward necessitates a careful balancing act between agility, cost-efficiency, and control.

Here's a roadmap for CIOs to prepare for this AI-driven cloud future:

  • Invest in Advanced AI Expertise: Building a competent internal team with expertise in AI development, data science, and especially full-stack cloud infrastructure management is now vital. This allows for a deeper understanding of workload requirements, informed infrastructure decisions, and internal build-out if needed. Upskilling for AI is now preferred to hiring in many cases.
  • Hybrid Cloud Strategies: A hybrid cloud approach, leveraging both public and private cloud resources, is now the target environment, as we saw last year in my research on the rebalancing between public and private cloud. Bursty workloads can reside in the public cloud, while mission-critical, always-on AI workloads can be migrated to a private cloud environment, optimizing cost and performance.
  • Containerization: Containerization technology such as Docker, with its robust ML/AI support, allows for efficient, rapid packaging and re-deployment of AI models across various cloud environments. This fosters portability and flexibility in choosing the most cost-effective infrastructure for specific workloads. Kubernetes remains popular with larger enterprises, while Docker is favored by mid-market firms in managing AI deployments.
  • Cost Optimization Tools: Utilize cloud cost management platforms optimized for AI, such as Cast AI, that offer granular insights into resource utilization and spending patterns and can cut costs by up to 50% in some cases. This enables proactive cost management strategies for AI workloads in either the public or private cloud, and allows evaluation of whether private cloud is a more optimal environment for a given AI workload depending on the optimizations needed.
  • Security & Regulatory Considerations: AI workloads in the public cloud raise significant concerns about data security, regulatory and compliance requirements, as well as potential biases. CIOs must implement robust security protocols, conduct thorough risk assessments, and ensure alignment with all relevant regulations. This is another decision point that bears heavily on the choice between public and private clouds, as private AI deployment can provide considerably more proactive, granular control over data residency and privacy issues with AI.
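The hybrid-strategy, cost-optimization, and regulatory points above can be combined into a toy placement policy. The fields, threshold, and `place` helper below are hypothetical illustrations of the decision logic, not a prescribed framework:

```python
# Hypothetical sketch of a hybrid-cloud placement rule for AI workloads:
# residency constraints and high sustained utilization favor private cloud,
# while bursty workloads stay in the public cloud.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    avg_utilization: float   # fraction of time GPUs are busy, 0-1
    data_residency: bool     # True if regulation requires on-prem data

def place(w: Workload, threshold: float = 0.6) -> str:
    """Toy policy: residency requirements always force private cloud;
    otherwise, sustained utilization above the threshold favors private."""
    if w.data_residency:
        return "private"
    return "private" if w.avg_utilization >= threshold else "public"

workloads = [
    Workload("nightly model fine-tuning", 0.85, False),
    Workload("customer-facing chatbot (bursty)", 0.25, False),
    Workload("regulated claims scoring", 0.40, True),
]
for w in workloads:
    print(f"{w.name}: {w.avg_utilization:.0%} busy -> {place(w)}")
```

A real policy would fold in data gravity, egress costs, and available on-prem capacity, but even this sketch shows why the public-versus-private decision is now per-workload rather than enterprise-wide.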

Competitive Ramifications

The ability to navigate this new landscape will have significant competitive ramifications. Companies that can develop and deploy AI models most cost-effectively and securely will, put simply, gain a significant edge in their industry. Conversely, those struggling to optimize AI workloads for the cloud will fall behind, as their investments simply won't take them as far as their competitors'.

The Bottom Line

The future of AI in the cloud demands forward-thinking strategies. As AI workloads reshape the cloud landscape, CIOs are presented with a unique challenge and opportunity. The future demands not just technical expertise in generative AI and large language models, but also a spirit of creative adaptability, a rethinking of cloud orthodoxy, and eagle-eyed vision. Crafting a clear and frequently updated AI roadmap for the enterprise will be crucial to mobilize IT and the business, outlining key decision points and strategic considerations as lessons are learned.

This journey requires continuous learning and the ability to evolve alongside the technology. IT leaders must embrace a growth mindset, explore diverse deployment models, and remain open to new possibilities; doing so will be vital for success. Those who are prepared to adapt and learn will not only survive but prosper in this transformative era. The cloud, once a playground for bursty applications, is now evolving into a dynamic ecosystem where always-on AI workloads reign supreme. For the CIOs who embrace this change with an expansive vision and an open mind as to where AI workloads will best operate over time, the future holds immense promise.

My Related Research

A Roadmap to Generative AI at Work

Spatial Computing and AI: Competing Inflection Points

How to Embark on the Transformation of Work with Artificial Intelligence

AWS re:Invent 2023: Perspectives for the CIO

Dreamforce 2023: Implications for IT and AI Adopters

Video: Moving Beyond Multicloud to Crosscloud

My new IT Strategy Platforms ShortList

My current Digital Transformation Target Platforms ShortList

Private Cloud a Compelling Option for CIOs: Insights from New Research

The Future of Money: Digital Assets in the Cloud


AI regulations, Zoho Case Study, iPaaS | ConstellationTV Episode 75

This week on episode 75 of ConstellationTV: Co-hosts Doug Henschen and Dion Hinchcliffe talk enterprise tech news with Larry Dignan, then catch a Zoho customer interview between Keith Cooper from Bergen Logistics and Dion during #ZohoDay2024 and round out the episode with an explainer segment from Doug on iPaaS integrations.

00:00 - Welcome from our hosts!
01:26 - Enterprise tech news (Broadcom earnings, new AI regulations, Snowflake partnership)
15:40 - Zoho customer interview with Keith Cooper, Bergen Logistics
24:56 - What is an iPaaS? with Doug and Larry
37:13 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT/12:00 p.m. ET every other Wednesday!

On ConstellationTV <iframe width="560" height="315" src="https://www.youtube.com/embed/z2tKJZeJTJ0?si=Us0rsWQm-z-H-idj" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

C3 AI highlights C3 AI Vision, roadmap ahead

20240306 C3 Transform 2024 Day One Wrap Up from Constellation Research on Vimeo.

C3 AI outlined its product roadmap for the rest of the year and C3 AI Vision, which will provide spatial representations via data points. C3 AI Vision will cover a bevy of use cases for logistics, sourcing optimization, military, law enforcement, process optimization and aircraft readiness.

Tom Siebel, CEO of C3 AI, outlined C3 AI Vision at Transform 2024 while holding an Apple Vision Pro. He used Apple's headset to highlight how spatial computing could ultimately be the user interface for C3 AI Vision. Siebel also said that enterprise software applications need new interfaces, comparing his old Siebel Systems interfaces with Salesforce, Oracle and SAP to show how little things have changed.

Siebel said C3 AI Vision aims to take a step ahead with enterprise interfaces, and that vision will be critical for industries like logistics and manufacturing.

"C3 AI Vision can integrate data from any set of data," said Siebel. "The kind of data we need to ingest is drone images, telemetry, and AI models. This requires a new way of thinking. Our world is awash in data and to harness its true potential we must analyze it from every angle and experience AI decision support in a new immersive user experience."

Related: How Baker Hughes used AI, LLMs for ESG materiality assessments | C3 AI launches domain-specific generative AI models, targets industries | C3 AI CEO Tom Siebel: Generative AI enterprise search will have wide impact

Siebel said C3 AI Vision can be used to create digital twins on the fly so enterprises can visualize situations. Apple Vision Pro is in its first iteration and will improve over time and offer enterprises new ways to work with generative AI, AI and data. C3 AI Vision's interface will evolve over time, said Siebel.

C3 AI Vision will be embedded across the platform and applications focused on industries and use cases. 

C3 AI Vision demos were being held at Transform with preview sign-ups available.

Transform, held in Boca Raton, FL, features talks from customers such as Holcim, Con Edison, GSK, and the US Air Force along with product roadmap discussions.

C3 AI's news at Transform follows a strong fiscal third quarter earnings report that highlighted accelerating revenue growth. The company reported a net loss of 60 cents a share and an adjusted loss of 13 cents a share on revenue of $78.4 million, up 18% from a year ago.

The company said that it closed 50 agreements in the third quarter including 29 new pilots. C3 AI said that its qualified pipeline was up 73% from a year ago. As for the outlook, C3 AI said fourth quarter revenue will be $82 million to $86 million. For fiscal 2024, C3 AI projected revenue of $306 million to $310 million.

The roadmap

Edward Abbo, CTO of C3 AI, walked through demos of what's in the C3 AI platform today and what's to come. Since launching C3 AI Generative AI last year at Transform, the company has deployed 47 customer use cases across multiple industries. US Air Force, Baker Hughes, Boston Scientific and Con Edison are reference customers.

"There's really a wide range of cases. Some of these sit on top of AI applications. Many of these are standalone, and so generative AI can really transform that human computer interaction helping you get insights faster, more effectively," said Abbo.

The architecture for C3 AI is to keep large language models distinct from the data. In a demo, Abbo walked through what's in the platform today.

As for the roadmap, Abbo said C3 AI's summer release will focus on retrieval augmented generation improvements. In the fall, C3 AI will focus on developer support.

The winter release will revolve around converting intelligence into actions. "We want to focus on enabling actions not just following a workflow, but actually teaching the system how to do specific, complex tasks," said Abbo.

Business value focus

Adrian Rami, Group Vice President, Products & Engineering at C3 AI, said business use cases across industries can drive billions of dollars for oil and gas, manufacturing, healthcare and financial services.

Rami said enterprises need to pick the right use cases with high value, feasibility and repeatability while avoiding custom builds. Enterprises need to scale across use cases with one AI, analytics and machine learning foundation. "We recognize what our platform needs to deliver for high value use cases with a common foundational data model," said Rami.

During a keynote, Rami said an enterprise AI application needs a series of tools and features. He also walked through C3 AI's suites for various functions and industries such as supply chain, reliability, sustainability, state and local government, defense and intelligence.

Jim Apostolides, Senior Vice President, Enterprise Operational Excellence at Baker Hughes, outlined how the company scaled C3 AI's platform. By the end of 2023, more than 50% of inventory optimization recommendations were accepted automatically without human intervention. In 2021, that figure was 3%.

Baker Hughes also used C3 AI for sourcing optimization to improve purchase orders and other processes. Apostolides said Baker Hughes has focused on picking the right use cases and collaborating with C3 AI to drive value. Baker Hughes recently completed an upgrade to the latest C3 AI platform. 

Generative AI embedded throughout platform

Rami said generative AI has the potential to transform the user interface of the C3 AI platform and put data into common business terms. "This will fundamentally change the human-computer interface, and everyone can participate at scale," said Rami.

Jake Whitcomb, Senior Director, Products at C3 AI, walked through advances to make it easier to configure and deploy generative AI. C3 AI will launch a copilot across the platform that will work on multiple products.

The company has drilled down on interfaces with the aim of bringing insights forward quickly without a lot of configurations. C3 AI has launched a quick start feature to onboard data, train models, and configure the interface via C3 AI Studio.

Some screenshots from Whitcomb’s presentation.


CrowdStrike, Palo Alto Networks duel over platforms vs. bundles

The cybersecurity platform wars may be getting a bit chippy as CrowdStrike and Palo Alto Networks duel to convince enterprises to consolidate on their platforms.

This cybersecurity platform skirmish all started when Palo Alto Networks reported earnings and CEO Nikesh Arora said the company would offer incentives to entice enterprises to consolidate vendors. The catch is that Palo Alto Networks cut its outlook.

Enter CrowdStrike, which delivered strong fourth quarter results and said it would buy Flow Security. CrowdStrike reported fiscal fourth quarter adjusted earnings of 95 cents a share on revenue of $845.3 million. Wall Street was expecting adjusted earnings of 82 cents a share on $839.97 million.

For fiscal 2024, CrowdStrike delivered net income of $89.3 million, or 37 cents a share, on revenue of $3.06 billion, up 36% from a year ago. Adjusted annual earnings were $3.09 a share.

As for the outlook, CrowdStrike was also impressive. It projected first quarter revenue between $902.2 million and $905.8 million with non-GAAP earnings of 89 cents a share to 90 cents a share. For fiscal 2025, CrowdStrike projected revenue of $3.92 billion to $3.99 billion with non-GAAP earnings of $3.77 a share to $3.97 a share.

George Kurtz, CEO of CrowdStrike, had two weeks to prepare for the platform question and came ready for the earnings conference call. He was so prepared that the word "platform" was mentioned 78 times on the call.

"CrowdStrike is the only single platform, single agent technology in cybersecurity that solves use cases well beyond endpoint protection," said Kurtz, who touted the company's Falcon platform with native AI. Kurtz argued that customers have gone all-in on Falcon and are increasingly adopting more modules. His argument was that enterprises are "leaving stitched together point products and PowerPoint platforms behind."

From there, Kurtz noted that the cybersecurity game "is a frenetic vendor bazaar." He said:

"Disjointed point feature copycat products clutter the market, attempting to Band-Aid symptoms instead of curing the illness. OS vendors use their market position to create a monoculture of dependence and risk, and in many cases serve as the breach originator. Even worse, multi-platform hardware vendors evangelize their stitched together patchwork of point products masquerading as thinly veiled piecemeal platforms. And what organizations inevitably realize is that vendor lock-in leads to deployment difficulties, skyrocketing costs, and subpar cybersecurity.

The outcome is shelf wear and sunk costs. ELA and bundling addiction become the only way to coax customers into purchasing non-integrated point products."

Kurtz's punchline: Enterprise IT budgets are fine, but the fatigue over bolt-on products is real.

CrowdStrike often takes aim directly at Microsoft, and Kurtz noted that "we eliminated multiple Microsoft consoles and multiple agents to a single console, single agent, and single platform of Falcon."

But Palo Alto Networks' platform grab was also an obvious target. Kurtz said:

"A global financial services giant replaced their Palo Alto Prisma Cloud products in a large seven-figure deal. The Palo Alto cloud security products required separate management consoles and separate agents because cloud security is on a separate Palo Alto platform altogether. CrowdStrike was able to deliver an expected 70% time reduction in management as well as more than $5 million in annual staffing cost savings. The patchwork of multi-product, multi-agent, multi-console, separate platform technologies resulted in visibility gaps, asynchronous alerts, and overall fatigue managing cloud security. Falcon's single platform with its integrated cloud security components was a win for the customer."

The big takeaway from Kurtz was that enterprises shouldn't confuse "platformization" with old-fashioned bundling and freebies. He said:

"As you might imagine, I heard a lot about platformization over the last week. To me it's kind of a made-up term, but what I believe our competitors are talking about is bundling, discounting, and giving products away for free, which is nothing new in software and security software. It's been done for the last 30 years. We know free isn't free. And what customers are saying is more consoles, more point products masquerading as platforms create fatigue in their environment. What we've been focused on is that single agent architecture, single platform, single console that allows us to stop the breach, but more importantly, drive down the operational cost."

Arora, who spoke at the Morgan Stanley Technology, Media & Telecom Conference the same day as CrowdStrike’s earnings report, defended the company's platform play. He said:

"This is totally different than bundling, that's what we say. Bundling is the economic bundle: you say if you buy one thing from me, you buy the second thing, you can buy the third thing for free, or as part of the deal you have to go spend it freely. Cybersecurity works differently because Chief Security Officers and CIOs don't get in trouble buying the best in a category.

It's different because we actually have to prove the value of these things working together because the customer already has the other use case. This is about actually going and demonstrating value from the integration as opposed to creating an economic construct."

Zscaler, the other security platform vendor in this race, had a take that rhymed with CrowdStrike as well as Palo Alto Networks. Zscaler CEO Jay Chaudhry said at the Morgan Stanley conference:

"The word platform has been hijacked just like the word Zero Trust has been hijacked. A platform is supposed to be a common set of services on which you build application A, B and C. It's not supposed to be a collection of acquisitions labeled under a bundle. That platform is really nothing but ELAs labeled as platform, which is becoming shelfware. Our philosophy has been that a platform can give you a good model. ServiceNow has built a good platform that works well together. How many vendors have tried to do a bunch of acquisitions that never came together? It's supposed to be a platform."

Chaudhry said enterprises have collected a bunch of best-of-breed products that don't add up, and the way forward will be to consolidate vendors. In the end, 30 to 50 security vendors will be consolidated to a handful. Perhaps the question is less about CrowdStrike vs. Palo Alto Networks vs. Zscaler and more about which security vendors will walk the plank completely and which players (Microsoft, and Cisco Systems with Splunk) have an incumbent's advantage.


Salesforce launches Einstein 1 Studio, Data Cloud enhancements

Salesforce launched Einstein 1 Studio and the Spring release of Data Cloud as it works to infuse copilots and low-code generative AI tools across its applications and platform.

The news, released at Salesforce's TrailblazerDX developer conference, was touted by CEO Marc Benioff during the company's earnings conference call.

Einstein 1 Studio is a low-code set of tools that enables Salesforce customers to customize Einstein Copilot with Copilot Builder, Prompt Builder and Model Builder. Salesforce is aiming to be the trust layer between AI models and enterprises.

From a product perspective, Salesforce is pairing Einstein 1 Studio with its Data Cloud. In Einstein 1 Studio, Copilot Builder, which will have integration with MuleSoft APIs and Salesforce developer tools, is in beta. Prompt Builder is generally available so developers can reuse AI prompts. Model Builder, which allows developers to choose language models or build them, is also generally available.

Salesforce also released an updated version of Data Cloud. Here is the breakdown of some of the Data Cloud additions:

  • Data Spaces, which allows customers to segregate data, metadata and processes, is generally available.
  • Model Builder with a wide choice of LLMs is generally available.
  • Related Lists, Copy Fields and Industries enhancements are generally available.
  • Data Cloud for Financial Services is available.
  • Data Graphs will get tools to define relationships between data points without SQL queries and manual data joins.
  • Service Intelligence is available and uses AI models to provide insights for service teams.
  • Triggered Flows are enhanced for testing and troubleshooting before activation.

