
Unlocking the Power of AI in Customer Experience: Lessons from the Five9 CX Summit

At the recent Five9 CX Summit, Liz Miller sat down with CX expert Nick Delis to discuss the evolution of AI and customer experience. Here are some of the key takeaways:

🔑 2023 was the "year of failure" as companies experimented with AI, 2024 was the "year of learning," and 2025 is poised to be the "year of execution," where we see tangible value from AI investments.
🤖 The key is balancing AI automation with empathy and human touch. AI can enhance agent performance and deflect simple queries, but empathy is critical for sensitive customer situations.
🌍 In regions like Latin America and Iberia, the human connection is highly valued. Successful CX strategies need to adapt to local cultural preferences while leveraging the power of technology.
💬 It's not just about the data and metrics - the emotional impact on both customers and agents is crucial. Small gestures can make a big difference.

Watch the full interview on CR Conversations: https://www.youtube.com/embed/qPIR0ZsE0Co

AWS launches Quantum Embark program to jump start quantum computing deployments


Amazon Web Services launched a services unit to help customers adopt quantum computing. Combined with Amazon Braket, AWS is positioning itself to be a trusted neutral party in quantum computing much as it has done with generative AI.

The cloud giant launched Quantum Embark, an advisory program for customers to get ready for quantum computing. AWS also has Braket, a marketplace of quantum computing services.

AWS' announcement, which was outlined last week ahead of re:Invent, comes as hyperscalers start to talk up quantum computing more. In addition, quantum computing players are looking at hybrid applications with supercomputers, and the quantum industry has pivoted to current use cases that can deliver value to enterprises. More: Quantum computing all in on hybrid HPC with classical computing

Quantum Embark includes advisory services that revolve around use case discovery, technical enablement and deep dives for target applications.

AWS cited early customers such as Westpac and Vanguard. Quantum Embark is available within the Amazon Braket console. The AWS announcement spurred a surge in publicly traded quantum computing companies included on the Constellation Research Shortlists for quantum computing.

More: IonQ’s bet on commercial quantum computing working, acquires Qubitekk | IonQ's quantum computing bets: Quantum for LLM training, chemistry and enterprise use cases


Constellation Research analyst Holger Mueller said:

"In another sign of quick maturation of quantum computing, AWS launched its quantum evaluation / onboarding program. Cloud infrastructure vendors like AWS have a lot of interest in quantum, as it will be the first computing platform being almost exclusively accessed in the cloud for enterprises. There is a lot of cloud spend around quantum, from data, networking, data prep and error correction to name just a few. With the announcement, AWS is positioning itself even more as a Switzerland that allows access to multiple quantum platforms through its Braket program and easier onboarding and evaluation through Embark."


BT150 zeitgeist: AI agent questions, SaaS ate the opex and job woes


AI agents are going to run into problems with standards, operating budgets are being squeezed by SaaS vendors and the war for talent is going to get interesting.

Here's a look at some of the takeaways from Constellation Research's November BT150 call, which operates under the Chatham House Rule.

AI agents won't live up to expectations

  • Agent orchestration is critical, but there are a lot of loose ends to tie up. AI agents will mean dependencies across platforms, and it's unclear how compatibility between agents will evolve. AI in 2025 will move from the infrastructure layer to the platform layer. Enterprises will struggle to bundle AI applications together.
  • Vendors are racing to build out their AI agent ecosystems. Salesforce has its Agentforce partner network and Google Cloud's marketplace will now feature agents from third parties. Boomi is pushing AI agent registries.
  • Be wary of these agent ecosystems since they can result in customers being locked into platforms.
  • It's quite possible that 2025 will be a building year for AI agent deployments and enterprises will be slow to adopt them. Why? There aren't open standards yet for agent coordination and you'll need those in place to scale.

Also see: The art, ROI and FOMO of 2025 AI budget planning | GenAI's 2025 disconnect: The buildout, business value, user adoption and CxOs | Agentic AI without process optimization, orchestration will flop

Operating expenses squeezed by SaaS

  • Enterprises are running out of operating budget, and SaaS vendor pricing is leaving little for services or implementation. As a result, contracts are moving to bigger deals where services firms may discount and hope AI and automation make up the difference.
  • CxOs on the call continued a common theme in 2024--they aren't seeing value from their SaaS providers, which are increasingly aiming to be the sole data store for customers. Salesforce was cited as a vendor that wanted to be the data store for everything. "I'm like no, we're not locking in and bringing in external data to Salesforce," said one CxO. "The prices are ridiculous and I'm not seeing value."
  • One CxO said they've been aiming to use automation to reduce costs and become more efficient, but the technology is becoming more expensive.
  • Another CxO noted: "We have to budget for annual 10% increases on SaaS renewals but some incumbent vendors are pushing for 30%. That's ridiculous. We're back in legacy land."
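The renewal math that CxO described compounds quickly over a multi-year term. A minimal sketch of the gap between a budgeted 10% annual increase and a 30% vendor ask (the base spend figure and three-year term are illustrative assumptions, not numbers from the call):

```python
# Hypothetical illustration of budgeted vs. vendor-ask SaaS renewal
# increases compounding over three years. All figures are assumed.

def compound(base: float, rate: float, years: int) -> float:
    """Return the renewal cost after `years` annual increases at `rate`."""
    return base * (1 + rate) ** years

base_spend = 1_000_000  # assumed annual SaaS spend in dollars
budgeted = compound(base_spend, 0.10, 3)    # 10% budgeted increases
vendor_ask = compound(base_spend, 0.30, 3)  # 30% incumbent-vendor pushes

print(f"Budgeted after 3 years:   ${budgeted:,.0f}")
print(f"Vendor ask after 3 years: ${vendor_ask:,.0f}")
print(f"Gap: ${vendor_ask - budgeted:,.0f}")
```

Three years of 30% increases more than doubles the line item, while the budgeted 10% path grows it by about a third, which is why the CxOs describe the asks as a return to "legacy land."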

Also see: Enterprise software 2025: Three big shifts to watch | Disruption is coming for enterprise software | BT150 zeitgeist: Dear SaaS vendors: Your customers are pissed

More from the BT150 calls:

Macro themes

  • Repatriation of talent has begun. Countries are offering US workers on visas deals to move home, keep their US salary and work in-country for three to five years. Executives are jumping at these offers in countries like India and across Latin America.
  • Enterprises are pulling back on campus hiring and not hiring college grads at previous rates. The CxO concern is that this lack of hiring will mean companies will have trouble filling roles requiring four to five years of experience in the future.
  • M&A and the IPO market are going to boom. The bet for 2025 is that IPOs and mergers and acquisitions will ramp with a change in administration in the US.


Nvidia CEO Jensen Huang has a dream...


Nvidia CEO Jensen Huang poked holes in all the arguments against the company during its third quarter conference call. Concerns about power, costs, LLMs hitting a wall and hyperscale cloud providers digesting all the GPUs already acquired were all brushed aside.

Yes folks, Huang has a dream. In this dream, Nvidia demand remains insatiable until $1 trillion worth of data centers are upgraded for the AI age. See: Nvidia strong Q3, sees Hopper, Blackwell shipping in Q4 with some supply constraints

In this dream...

  1. Nvidia platforms continue to make exponential gains that cut costs, keep competition at bay and warrant a premium for price-performance.
  2. LLMs will continue to scale and improve without a plateau.
  3. Cloud service providers that are accounting for Nvidia's data center growth don't pause to digest existing purchases. “I believe that there will be no digestion until we modernize a trillion dollars with the data centers,” said Huang.
  4. All companies will be in the inference game, generating tokens that add data to train AI.
  5. AI factories will solve the looming energy and sustainability problems.
  6. And if the GPU growth plateaus, Nvidia can offset with networking, robotics, automotive and quantum.

In many ways, Huang sounded like an NFL coach who listens to sports talk radio, doesn't necessarily admit to tuning in, but aims to rebut fan arguments. Much of what Huang said on Nvidia's earnings call was designed in part to offset concerns that are bubbling up even though the financials remain stellar.

Huang concluded Nvidia's earnings call with the following:

"The age of AI is upon us and it's large and diverse. Nvidia's expertise, scale, and ability to deliver full stack and full infrastructure let us serve the entire multi-trillion dollar AI and robotics opportunities ahead. From every hyperscale cloud, enterprise private cloud to sovereign regional AI clouds, on-prem to industrial edge and robotics."

I ran the Nvidia transcript through OpenAI to assess sentiment, and the result was "overwhelmingly positive." The word cloud looked like this.
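As a rough stand-in for that kind of transcript scoring (the actual analysis used OpenAI's models; the word lists below are illustrative assumptions, not the lexicon any model uses), a minimal sentiment score over a transcript snippet might look like:

```python
# A minimal lexicon-based sentiment sketch -- NOT the OpenAI-based
# analysis described above, just an illustration of scoring a
# transcript's tone. The word lists are illustrative assumptions.

POSITIVE = {"incredible", "great", "strong", "staggering", "excellent", "demand"}
NEGATIVE = {"constraint", "constraints", "pause", "plateau", "concerns", "wall"}

def sentiment_score(text: str) -> float:
    """Return (positive - negative) / matched words, in [-1, 1]."""
    words = text.lower().split()
    pos = sum(w.strip(".,\"'") in POSITIVE for w in words)
    neg = sum(w.strip(".,\"'") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

snippet = "Blackwell demand is staggering and anticipation is incredible"
print(sentiment_score(snippet))  # positive for this upbeat snippet
```

An LLM-based pass captures far more nuance than word counting, but the same idea applies: the balance of upbeat versus cautionary language across the call is what drives an "overwhelmingly positive" read.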

Let's take on the Huang AI dream and address its key parts.

Costs. Huang has been consistent with his take that Nvidia systems are becoming more efficient and driving total cost of ownership gains.

On the third quarter conference call, the cost of compute, training and inference was addressed about 10 times, on par with the second quarter call and a Q&A at a Goldman Sachs investment conference in September.

Huang said:

"We're on an annual roadmap and we're expecting to continue to execute on our annual roadmap. And by doing so, we increase the performance, of course, of our platform, but it's also really important to realize that when we're able to increase performance and do so at X factors at a time, we're reducing the cost of training, we're reducing the cost of inferencing, we're reducing the cost of AI so that it could be much more accessible."

My take: For now, Huang isn't wrong. The efficiency gains in Nvidia's software stack and platforms are impressive. However, there will be a point--likely starting in 2025--where good enough will work. It shouldn't be overlooked that all of the hyperscalers are developing their own AI accelerators and diversifying with AMD and others.

LLM capabilities stalling? Questions about LLMs continuing to scale spurred a dissertation from Huang. After all, if training techniques hit a wall, so does the FOMO driving Nvidia sales.

He said:

"Foundation model pre-training scaling is intact and it's continuing. As you know, this is an empirical law, not a fundamental physical law, but the evidence is that it continues to scale. What we're learning, however, is that it's not enough that we've now discovered two other ways to scale. One is post-training scaling. Of course, the first generation of post-training was reinforcement learning human feedback, but now we have reinforcement learning AI feedback and all forms of synthetic data generated data that assists in post-training scaling."

Huang said OpenAI ChatGPT o1 is an example of how LLMs haven't hit a plateau. "We now have three ways of scaling and we're seeing all three. And because of that, the demand for our infrastructure is really great," he said.

My take: It's unclear whether LLMs will advance at the current pace. Enterprises may also leverage LLMs that are cheaper to train and tailor. Even a yearlong pause in LLM gains could trip up Nvidia's ability to hit already inflated expectations.

Cloud service providers (CSPs) are going to continue to spend like drunken LLM trainers. Huang said:

"All of these CSPs are racing to be first. The engineering that we do with them is, as you know, rather complicated. And the reason for that is because although we build full stack and full infrastructure, we disaggregate all of the AI supercomputer and we integrate it into all of the custom data centers in architectures around the world. That integration process is something we've done for several generations now. We're very good at it, but still, there's still a lot of engineering that happens at this point. But as you see from all of the systems that are being stood up, Blackwell is in great shape."

Nvidia said half of its data center revenue was cloud service providers, but the other half was consumer internet and enterprise. I'm guessing that other half is dominated by Meta.

My take: The argument that CSPs won't pause spending is tenuous. History rhymes and I doubt this time is different as AI infrastructure is built out.

All inference all the time. Huang talked up the agentic AI game and noted AI Enterprise revenue is going to double over the year. Huang said:

"We're seeing inference demand go up. We're seeing inference time scaling go up. We see the number of AI-native companies continue to grow. And of course, we're starting to see enterprise adoption of agentic AI that really is the latest rage. And so, we're seeing a lot of demand coming from a lot of different places."

My take: In Huang's dream, every time you open a PDF or PowerPoint you'll generate tokens at the edge. These inputs will continue to drive models forward. Huang is on target, but we may be debating about the timing of this inference nirvana for years.

Energy and sustainability. Huang said continued efficiency gains from Nvidia systems will alleviate concerns about energy consumption.

Huang said data centers are moving from tens of megawatts to hundreds of megawatts and ultimately gigawatts. "It doesn't really matter how large the data center is, power is limited," he said. "Our annual roadmap reduces cost, but because our perf per watt is so good compared to anything out there, we generate for our customers the greatest possible revenues."

My take: The biggest issue here isn't performance per watt, but sourcing the power. The grid is tapped, and innovations like small nuclear reactors are years from scaling. Nvidia isn't going to improve performance per watt enough to make power consumption a non-issue.

Nvidia has plenty of other innovations on the runway. Huang noted that Nvidia is growing its software, networking, robotics and automotive businesses. He said:

"There's a whole new genre of AI called physical AI. Physical AI understands the physical world and it understands the meaning of the structure and understands what's sensible and what's not. That capability is incredibly valuable for industrial AI and robotics."

My take: The reality is that Nvidia's data center business is carrying the company. However, Nvidia is well positioned for the next big thing--including quantum.


Amazon invests another $4 billion in Anthropic, expands partnership


Amazon will invest another $4 billion in Anthropic to bring its total investment to $8 billion. Anthropic will also use AWS as its primary training partner and use AWS Trainium and Inferentia processors to deploy its largest models.

The latest Amazon investment comes shortly after Anthropic landed a partnership with Snowflake.

Amazon's first investment in Anthropic made the company's Claude family of models a headliner on Amazon Bedrock. The latest Amazon investment makes Anthropic a closer partner.

In a statement, the companies said they "will continue to work closely to keep advancing Trainium's hardware and software capabilities."

The companies added: "This next phase of the collaboration will even further enhance the already premium performance, security, and privacy Amazon Bedrock provides for customers running Claude models."

As previously noted, the Amazon-Anthropic partnership is notable since OpenAI is tightly aligned with Microsoft. Google also offers Anthropic models and many others in addition to its Gemini family of LLMs. What has emerged are spheres of LLM influence around hyperscale cloud providers. Meta has Llama and is focused on open source options with potential business applications in the future.

AWS ups its investment in Anthropic as giants form spheres of LLM influence

In addition, Anthropic has stood out for its Claude model performance as well as its ability to add collaboration and enterprise tools to its LLMs.


Elastic posts Q2 rebound, ups Q3 outlook


Elastic upped its outlook for the third quarter following better-than-expected second quarter results as its plan to extend from search into generative AI paid off.

The second quarter results were a reversal of the first quarter. In the first quarter, Elastic made sales changes that hurt revenue growth and the company cut guidance.

When Elastic reported its second quarter, however, it appeared that the worries about customer commitments were overblown. The company reported a second quarter net loss of $25.45 million, or 25 cents a share, on revenue of $365.36 million, up 18% from a year ago. Non-GAAP earnings from Elastic in the second quarter were 59 cents a share, 21 cents a share better than estimates.

As for the outlook, Elastic projected third quarter non-GAAP earnings of 46 cents a share to 48 cents a share compared to estimates of 40 cents a share.

Fiscal 2025 earnings will be between $1.68 a share and $1.72 a share. Full year revenue is projected to be $1.45 billion to $1.46 billion compared to estimates of $1.44 billion.

CEO Ash Kulkarni said the company saw wins across Elastic Cloud, which saw second quarter sales growth of 25%. "In Q2 we saw strong customer commitments with key wins across all our solution areas, with continued momentum in GenAI and platform consolidation," said Kulkarni.

Elastic ended the quarter with 21,300 subscribed customers.

The lumpy progression of Elastic--first quarter miss and plunge and second quarter beat and raise--highlights how the company doesn't fit well into any one category. Elastic also said CFO Janesh Moorjani is leaving to pursue another opportunity. Eric Prengel, group vice president of finance, will become interim CFO Dec. 14.

Speaking on the earnings conference call, Kulkarni said the company saw solid sales execution and customer interest. He said:

"After some unexpected disruption in sales performance in Q1, we are now starting to see the benefits of the changes. Our performance in Q2 reaffirms our confidence in our strategy and shows that we are well on our way to returning to the strong pace of sales execution that we have demonstrated in the past."

Kulkarni said customers were consolidating security and observability products and migrating onto Elastic's SearchAI platform. "We also saw strong demand for our vector database as customers increasingly adopted Elastic as a natural choice for building genAI applications across many different industries and use cases."

Key points from the conference call:

  • Kulkarni said one big win in the quarter was a multi-year, seven-figure deal in which a company standardized on Elastic's vector database to power more than 30 chatbot clusters.
  • Another customer was a retailer using Elastic for its omnichannel experience.
  • Elastic Express, a migration program for its AI platform, is seeing strong traction and helped the company win more than 40 deals in the second quarter.
  • The company will weight its investments toward genAI features.
  • Elastic saw strength across multiple geographies and the largest enterprises accelerated consumption.

Elastic isn't easily defined

The company's search business is a mainstay, but it also has a monitoring and management business dubbed AutoOps, a security business and is a retrieval augmented generation (RAG) play.

In a September briefing, Elastic executives noted the following about the company's strategy.

  • Elastic's search AI architecture integrates vector embeddings, enabling generative AI use cases like semantic search and RAG.
  • The company positions itself as a leader in AI search with its vector database for applications in security, observability and analytics.
  • Generative AI is seen as a way to redefine Elastic's brand.
  • Elastic is building managed serverless offerings to simplify deployment and scale for customers.
  • And Elastic is betting on cross-cluster and federated search capabilities to handle distributed data environments.

Constellation Research's take

Constellation Research analyst Andy Thurai said:

"Elastic is well positioned in the areas of observability, security, and enterprise search with mature offerings. Especially with 480EB of data expected to be produced in 2025 alone, search is a major issue for a lot of enterprises. Elastic has flexible offerings and is very appealing to enterprises that want to keep things local, combined with cloud hosted and serverless offerings.

The new addition of "Search AI Lake," with its millisecond response times, allows searching unstructured data that was almost impossible to search before. The addition of generative AI-powered security playbooks, runbooks, and AI assistants is also appealing to customers. Elastic finally seems to have figured out its licensing model, and with tighter relationships with all three hyperscalers - AWS, GCP, Azure - Elastic seems poised for growth."


BT150 Spotlight: MultiCare Health System's Laurie Wheeler on optimization, change management and healthcare transformation


Laurie Wheeler, Chief Operating Officer, Information Services & Technology at MultiCare Health System, is a 25-year healthcare veteran who is all about optimizing processes and the change management needed for transformation.

At Constellation Research's Connected Enterprise, I caught up with Wheeler to talk about her role. Here are the takeaways.

Her role. Wheeler runs the business and operations of the IT organization at MultiCare Health System with a focus on process, finance, budgeting and contracts. "I'm an operator. I'm all about having a smooth machine running internally. I'm about the processes. That's my jam. We live in a world of technology but my big thing is optimizing it," said Wheeler. "I'm the 'now what' person. The CEO Council made a decision and I'll get it done."

At MultiCare Health System, Wheeler is a 25-year vet who has the relationships and the credibility to implement technology. She started as a front desk clerk.

Healthcare and new technologies. When it comes to AI, new platforms and technology Wheeler is happy to have the conversations. "But again, my passion is making these things reality. So, I'm really big on change management," she said. "At healthcare providers, the staff specialty is being at bedside. The last thing they want to think about is technology. Taking care of humans is just inherently different."

Finding project champions. "You need to find those champions in the operation because in healthcare technology is not the specialty. The specialty is taking care of patients so it's about finding a way to digest the technology and make it easy to adopt," said Wheeler. "I've worked in the organization a long time so I've developed a lot of relationships. We go out and meet with our hospital presidents, chief nurses and you find people that have a passion around technology while working in healthcare."

Building credibility with these technology champions is really about the follow through, said Wheeler. "You build relationships, build trust and people notice and call on you," she said.

2025 priorities. Wheeler said her focus going into the new year is optimizing a ServiceNow implementation. "ServiceNow is our employee resource center back end," she said. "We use it for our search, virtual agent and the goal is minimizing calls to the service desk. The problem is we don't have a lot of content. It goes back to making it easier for our healthcare workers to put in information."

ServiceNow is connected to Epic, an electronic health record system. Wheeler added that MultiCare also implemented a full ERP replacement with Workday covering HR, financials and supply chain. "It was quite an adventure," she said.

The healthcare technology dream. When asked what Wheeler would optimize if she had a magic wand, she said:

"It's the usability and functionality of Epic at the bedside. The last thing you want folks to do is messing around with your health record when they should be focusing on you."

In the last year, Wheeler piloted Nuance's Dax Copilot, which is ambient technology that aims to input information via voice and transcription. Nuance is now owned by Microsoft.

She said:

"I had an experience with my daughter, and we went to the doctor. He was a test subject, and he's like, let me put this on next to us, and it's going to record everything. It's going to put it in your record. They chat and things like that. When it's over I'm stoked because we're just piloting Dax and physicians like it. My daughter said it was very odd because the doctor looked at her the whole time. She has grown up only seeing a physician stare at a desktop to talk to her."

The future of ambient healthcare. Wheeler said AI will have a big role in healthcare to update records and handle various tasks. "The trick will be figuring out where to insert the human in quality control," she said. "We'll move that way because the experience is better and changes the workflow with the patient. The last thing a nurse wants to do on a break is spend five minutes on the service desk or record. The returns are time and customer experience."


Palo Alto Networks delivers strong Q1, says industry on platformization bandwagon now


Palo Alto Networks reported better-than-expected first quarter results and said that it is benefiting from enterprises looking to consolidate cybersecurity vendors. Palo Alto Networks focused on a "platformization" strategy designed to land more wallet share on its platform.

The company reported earnings of $350.7 million, or 99 cents a share, on revenue of $2.1 billion, up 14% from a year ago. Non-GAAP earnings were $1.56 a share.

Wall Street was expecting Palo Alto Networks to report earnings of $1.47 a share in the October quarter on revenue of $2.12 billion.

Palo Alto Networks also announced a 2-for-1 stock split effective Dec. 12.

As for the outlook, Palo Alto Networks said its second quarter revenue will be between $2.22 billion and $2.25 billion, up 12% to 14% from a year ago. Non-GAAP earnings will be $1.54 a share to $1.56 a share.

For fiscal 2025, Palo Alto Networks projected revenue of $9.12 billion to $9.17 billion, up 14% from the previous year. Non-GAAP earnings will be between $6.26 and $6.39 a share.

Speaking on a conference call, CEO Nikesh Arora made the following points:

  • "The market for cybersecurity continues to be robust and continues to grow faster than the overall technology market. We saw particular strength in our next generation security offerings."
  • "Our industry peers have been evangelizing platformization. Imitation is the highest form of flattery."
  • "Our approach is to ingest all relevant security data, analyze this with precision AI technology and natively automate end-to-end workflows. It's a tall order to take data from many different security vendors, analyze it on the fly and make a decision to stop an attack faster, but we're encouraged with the early success of our cloud platform."
  • "We feel the cybersecurity industry is embarking into its next phase, but the market will continue to converge towards a fewer set of platformization players over the next five to 10 years. Point solutions will continue to get subsumed in these platform plays."

Constellation Research's take

Constellation Research analyst Chirag Mehta said:

"Palo Alto Networks' Q1 FY25 results reflect their conviction and commitment to platformization, driving 40% YoY growth in NGS ARR. The inclusion of QRadar SaaS contracts, acquired from IBM, contributed to this ARR growth, though the company anticipates transitioning these customers to XSIAM solutions in the coming quarters.

This strategy signifies a broader market trend where enterprises are gravitating towards integrated platforms to streamline network security and security operations while reducing total cost of ownership. However, as evident from ongoing hesitations around vendor lock-in, this transition presents challenges for customers. Converting QRadar customers to XSIAM (~10% so far) underscores Palo Alto's potential to redefine security operations through telemetry-driven insights.

For customers, the focus on XSIAM as a replacement for traditional SIEM systems represents an opportunity to modernize their SecOps. As CISOs aim to balance risk reduction with operational efficiency, this quarter's results highlight Palo Alto Networks' ability to lead in a consolidating market while signaling the need for clarity and choice in platform commitments."

Nvidia strong Q3, sees Hopper, Blackwell shipping in Q4 with some supply constraints


Nvidia reported a better-than-expected third quarter, raised its outlook and said that Blackwell shipments will begin in the fourth quarter. Data center revenue in the third quarter was up 112% from a year ago.

Collette Kress, CFO of Nvidia, said:

"We completed a successful mask change for Blackwell, our next Data Center architecture, that improved production yields. Blackwell production shipments are scheduled to begin in the fourth quarter of fiscal 2025 and will continue to ramp into fiscal 2026. We will be shipping both Hopper and Blackwell systems in the fourth quarter of fiscal 2025 and beyond. Both Hopper and Blackwell systems have certain supply constraints, and the demand for Blackwell is expected to exceed supply for several quarters in fiscal 2026."

The company reported third quarter earnings of $19.3 billion, or 78 cents a share, on revenue of $35.08 billion, up 94% from a year ago. Non-GAAP earnings in the quarter were 81 cents a share.

Wall Street was looking for third quarter earnings of 75 cents a share on revenue of $33.14 billion.

Jensen Huang, CEO of Nvidia, said "demand for Hopper and anticipation for Blackwell — in full production — are incredible as foundation model makers scale pretraining, post-training and inference."

As for the outlook, Nvidia said fourth quarter revenue will be about $37.5 billion, give or take 2%. Analysts were modeling fourth quarter earnings of 82 cents a share on revenue of $37.03 billion.
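The quarter's beat and the "give or take 2%" guidance can be sketched with a quick calculation using only the figures reported in the article (a hypothetical illustration, not Nvidia's own math):

```python
# Figures from the article, in $B and $/share
q3_revenue, q3_consensus_revenue = 35.08, 33.14
q3_eps_non_gaap, q3_consensus_eps = 0.81, 0.75
q4_guidance = 37.5  # "about $37.5 billion, give or take 2%"

# Size of the Q3 beat vs. Wall Street consensus
revenue_beat = q3_revenue - q3_consensus_revenue  # $B above consensus
eps_beat = q3_eps_non_gaap - q3_consensus_eps     # $/share above consensus

# Implied Q4 revenue range from the +/- 2% guidance band
q4_low, q4_high = q4_guidance * 0.98, q4_guidance * 1.02

print(f"Q3 revenue beat: ${revenue_beat:.2f}B, EPS beat: ${eps_beat:.2f}")
print(f"Q4 guidance range: ${q4_low:.2f}B to ${q4_high:.2f}B")
```

The band works out to roughly $36.75 billion to $38.25 billion, so even the low end sits above the $37.03 billion analysts were modeling.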

By the numbers for the third quarter:

  • Nvidia data center revenue was $30.8 billion, up 112% from a year ago. The company gained as multiple cloud service providers launched Nvidia Hopper H200 instances in the quarter.
  • Cloud providers were 50% of data center revenue with the remainder being consumer Internet companies and enterprises.
  • Networking revenue was $3.1 billion, up 20% from a year ago.
  • Gaming and PC revenue was $3.33 billion, up 15% from a year ago.
  • Visualization revenue was $486 million, up 17% from a year ago.
  • Automotive and robotics revenue was $449 million, up 30% from a year ago.

Key points from the Nvidia conference call:

  • "Blackwell is now in the hands of all of our major partners, and they are working to bring up their data centers. We are integrating Blackwell systems into the diverse data center configurations of our customers. Blackwell demand is staggering, and we are racing to scale supply to meet the incredible demand customers are placing on us. Customers are gearing up to deploy Blackwell at scale," said Kress. 
  • Costs matter. "Nvidia Blackwell architecture with NVLink switch enables up to 30x faster inference performance and a new level of inference, scaling, throughput and response time that is excellent for running new reasoning inference," said Kress, who noted that it takes 64 Blackwell GPUs to deliver the compute of 250 H100s. 
  • Nvidia AI Enterprise revenue is double what it was last year with a large pipeline. Kress put annual revenue at $1.5 billion. 
  • China will "remain very competitive" as a market. 
  • Foundation model scaling is intact. Huang said that "the evidence is that LLMs can continue to scale." However, the industry is learning that there are more efficient ways to scale such as post training, reinforcement learning, and synthetic data. OpenAI's Strawberry model is a version of test time scaling. "We now have three ways of scaling, and we're seeing all three ways of scaling. And as a result of that the demand for our infrastructures is really great," said Huang.
  • Huang shot down concerns about Blackwell ramping or issues with overheating in data centers. He also said that concerns about industry indigestion are overblown. Huang said:

"I believe that there will be no digestion until we modernize a trillion dollars of data centers." 

Constellation Research analyst Holger Mueller said:

"Nvidia continues to let the good times roll, almost doubling its revenue compared to a year ago. To show the scale of the financial acceleration – Nvidia's quarterly net income of $19.3B is more than Nvidia's nine-month total earnings of the last fiscal year ($17.5 billion). That is unheard of growth and acceleration. Questions were handled well by Jensen Huang and team – especially on the critical supply chain side. Should the company be able to deliver, next quarter's growth will also be an easy exercise. Speaking about the other division – the long announced and even longer expected growth spurt for automotive may have arrived, with 30% revenue growth. Automotive revenue isn't meaningful, but still a positive sign."

 


Agentic AI, Healthcare Tech, Science of Consulting | ConstellationTV Episode 93

ConstellationTV Ep. 93 is a must-watch for anyone interested in the latest developments in enterprise AI. Co-hosts Martin Schneider and Larry Dignan cover the latest enterprise tech news, including the challenges and opportunities around agentic AI adoption and the importance of demonstrating clear business value.

 

Next, hear from Laurie A. Wheeler, COO of IST at MultiCare Health System, who unpacks the process of implementing new technology in healthcare, including optimizing ServiceNow and leveraging AI to improve physician-patient interactions.

 

Then, R "Ray" Wang talks with Mohamad Ali, Head of IBM Consulting, about the "Science of Consulting" and how IBM is integrating generative AI and digital workers to transform the consulting experience for clients.


00:00 - Meet the hosts
01:13 - Enterprise tech news updates
11:15 - Interview with Laurie Wheeler, COO of IST at MultiCare Health System
19:00 - Interview with Mohamad Ali, Head of IBM Consulting
34:20 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT / 12:00 p.m. ET every other Wednesday!

On ConstellationTV <iframe width="560" height="315" src="https://www.youtube.com/embed/IxiRgdJWWLs?si=YbnlIWwuCYBGN95-" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>