Results

Google Cloud, KPMG outline lessons learned from Gemini Enterprise deployments

KPMG is both a partner and a customer of Google Cloud, and that dual role is honing the firm's methodologies, use cases and approaches for AI agent deployments.

On a webinar for analysts, Google Cloud and KPMG walked through the early lessons learned from deploying Gemini Enterprise.

At Google Cloud Next in April, KPMG said it would expand its AI partnership with Google Cloud. KPMG said it would use Google Cloud to scale its multi-agent platforms to transform business processes and integrate Gemini Enterprise to boost internal productivity.

Specifically, KPMG is leveraging Gemini Enterprise and Vertex AI with other services. Google Cloud is also being used to build AI capabilities and agents for KPMG Law US.

Stephen Chase, Global Head of AI & Digital Innovation at KPMG, said the firm adopted Gemini Enterprise across the workforce with 90% of employees accessing the system within two weeks of launch. "We believe this is the fastest adopted technology our firm has had and we are in a regulated industry," said Chase. "We went into it with the idea this was going to be part of our overall transformation. It was never about individual use cases. It was about sparking innovation."

KPMG and Google Cloud teamed up on the Gemini Enterprise deployment to hone best practices for regulated deployments.

Hayete Gallot, President of Customer Experience for Google Cloud's global, multi-billion-dollar commercial business, said scaling AI agents comes down to building repeatable processes and methodologies.

"Beyond the models, it's really about how you're going to build those multi-agent systems," said Gallot. "We've done a lot of work to help our customers from the learnings we've had in building those multi agents. We've packaged that through our ADK (Agent Development Kit) so they can build their own agents."

Gallot added that Google Cloud is investing in the ecosystem and partners so customers can scale agentic AI. She said Gemini Enterprise is an example of providing pre-built agents for coding and research while giving customers the leeway to build and connect to other AI agents.

"The more the ecosystem is on a common set of tools and protocols, the better it is to build those multi agent experiences," said Gallot.

KPMG's internal Gemini Enterprise deployment

Chase walked through the early lessons from the KPMG internal adoption of Gemini Enterprise. Among the key points:

Understand the data and regulatory issues and take a measured approach. Chase said KPMG had a good data foundation and understanding of the regulatory issues. "We took a measured approach to rolling it out and testing," said Chase. "We were doing the evaluations on what we were seeing versus what we thought we might see. Were we getting the right data and responses? Was the connector delivering back what we expected with the right controls? We spent a lot of time testing upfront."

Co-innovation. Chase said Google Cloud and KPMG engineers worked together on operating agents in the consulting firm's security environment. "We were helping actually shape how agents are built in [Gemini] Enterprise and used that to build trust in AI in our transformation program," said Chase.

Use cases. Chase said the first problem KPMG was trying to solve--and it's critical in a services firm--was enterprise search. "I need good answers and I need to get them right now," said Chase. "Solving that problem was one of the reasons people gravitated to the system."

NotebookLM as a go-to tool. Chase said NotebookLM has gotten a lot of play internally, with about 11,000 notebooks shared in the first month and a half. Gemini Enterprise's Deep Research AI agent is also getting heavy usage.

Data quality is everything. Chase said KPMG also worked through data quality issues to make sure responses returned were correct and kept client confidential information private.

Beware of AI sprawl. Chase said one of the things plaguing AI deployments is that organizations are installing more AI than people can consume.

Client facing deployments

KPMG is also deploying Gemini Enterprise at its enterprise accounts.

Chase said KPMG is looking to take its best practices and make them available broadly to clients.

"Ultimately, clients will share the agents they build with each other. And some of those will be industrialized," said Chase.

Once agents are industrialized they can be distributed and "spark innovation at the edge and core and everything else we're doing," said Chase. "That's what our clients are really interested in."

The other key item in Gemini Enterprise deployments is that it's a horizontal system that "fits really nicely in a heterogeneous environment," said Chase.

"Gemini Enterprise doesn't have to be in a monolithic environment," added Chase.

Gallot said that Google Cloud has revamped its technical teams to be hands on and focus on methodology. "We're building a lot of consultative capability in our front end so our people can spark ideas with customers. We have developed a methodology to help our customers to go from idea to production," she said. "It's technology, methodology, catalog and people."

Enterprises are currently looking for knowledge in agentic AI deployments. Chase said the key issues for clients are:

  • Data security and broader cybersecurity.
  • Data management.
  • Use cases. "We have a dedicated process that we go through to pull use cases from both client work and what we're doing internally," said Chase.

For KPMG, the next step after collecting use cases for processes such as finance, procurement and various operating areas (say, consumer lending at a bank) is to create reusable starter kits.

"We're all headed toward orchestrating agents and what we're working on now is the building blocks to get us there," said Chase.

These building blocks are then shared across KPMG's tax, audit and advisory service lines. Every client will have different circumstances, but KPMG's goal is to have common areas that can be adapted. Sharing those lessons will make it easier to generate returns.

"We get a lot of questions in the enterprise and if they're going to invest we need to help demystify AI agents and share lessons," said Chase.


A look at the intersection of AI and customer experience

Artificial intelligence and customer experience are a common intersection on earnings conference calls as enterprises look to connect the dots between the lifetime value of a customer, driving revenue and hybrid approaches that meld technology and humans.

Here's a look at some of the CX efforts detailed in recent days.

Uber: Lifetime experience

Uber is on track to support about 14 billion rides in 2025, but the goal is to drive cross-platform usage and engage consumers over a long period. Think lifetime experience over lifetime value, even though the two categories are closely related.

Note the nuance of lifetime experience messaging from Uber CEO Dara Khosrowshahi. Lifetime value of a customer is a common metric that revolves around the total predicted revenue a company can get from an entire relationship. LTV is transactional.

Lifetime experience is a view that Uber can go from providing rides to multiple services over time. In theory, lifetime experience could be more valuable and lead to deeper customer relationships.
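The transactional LTV metric Khosrowshahi is contrasting against can be sketched with a simple retention-based formula. This is a minimal illustration with hypothetical numbers, not Uber's actual model:

```python
def lifetime_value(monthly_revenue: float, gross_margin: float, monthly_churn: float) -> float:
    """Classic retention-based LTV: margin earned per month times expected lifetime."""
    expected_lifetime_months = 1 / monthly_churn  # e.g., 5% monthly churn -> 20 months
    return monthly_revenue * gross_margin * expected_lifetime_months

# Hypothetical rider: $40/month in bookings, 50% margin, 5% monthly churn
print(lifetime_value(40, 0.50, 0.05))  # → 400.0
```

The "lifetime experience" framing effectively tries to raise both terms at once: more services lift revenue per month, and deeper cross-platform engagement lowers churn.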

"At its core, Uber is a trips machine built to make rides and deliveries happen affordably at scale," explained Khosrowshahi. "While an exceptional trip experience will always be core to who we are, we’re now expanding our focus beyond the next trip—to consumers' entire lifetime experience with Uber. Taking this lifetime view means thinking more holistically about how people engage across our platform—sometimes making investments that may reduce short-term results but strengthen long-term loyalty, or prioritizing actions that benefit the platform overall, even if one business line bears an immediate cost."

Khosrowshahi said Uber One is a program designed to encourage cross-platform engagement. Consumers who engage across Uber's services have 35% higher retention rates and spend three times as much as those who don't.

Lifetime experience also accounts for new services Uber may add in the future. Today, it has a small base of consumers using multiple services, but you could play out a scenario where rides, delivery and maybe healthcare are delivered over a lifetime. On the flip side, Uber has 9.4 million gig workers across the platform for rides and delivery, though most focus on a single task.

Khosrowshahi said: "Over the coming years, we will change both by converting couriers to drivers and vice-versa, and by further extending our flexible earnings model beyond rides and delivery. For example, we recently announced that we will be piloting digital tasks in the Uber Driver app, powered by Uber AI Solutions. The pilot will give drivers more ways to earn during downtime by completing tasks like uploading or tagging photos to help train AI models. Our ambitions here are much larger, and you will see us lean into this opportunity in the years ahead."

Takeaway: Consider lifetime experience efforts to drive traditional lifetime value of a customer.

Match: Growth depends on experience flywheel

Match Group CEO Spencer Rascoff said the company is leveraging AI across its brands, notably Tinder, Hinge and Match, to reboot growth with experiences that lead to outcomes--and presumably more revenue. Match is planning to launch a revamped Tinder in the spring.

Rascoff describes the turnaround in three phases: reset, revitalize and resurgence. The reset is complete and the latter two are underway.

"We believe our business model thrives when user outcomes improve. Better outcomes, driven by higher quality experiences, better matches and more meaningful connections, build confidence in our product and drive new users through positive word of mouth. User success builds trust in the category and in Match Group's apps," said Rascoff. "By getting the user experience right, we will further deliver real success stories, which we use in marketing to amplify growth by driving new user acquisition and reactivations. Our marketing strategy, especially at Tinder and Hinge, is focused on fueling category consideration bringing in new and lapsed users through product-led storytelling that reflects real experiences happening across our brands."

Match estimates there are 250 million actively dating singles worldwide not currently on dating apps. Match is looking to reengage 30 million lapsed users and attract 220 million first-timers. However, Match has a Gen Z problem. Enter a series of AI efforts, many of them revolving around trust and authenticity on the platform.

For instance:

  • Tinder will get Chemistry, an AI-driven interactive matching feature that learns about users via questions and, with permission, from their camera roll to understand interests and personality. Chemistry is designed to combat "swipe fatigue" and surface a few highly relevant profiles each day. The feature is live in New Zealand and Australia.
  • Hinge has AI-first features, including conversation starters--personalized prompts for a first message. The tool has resulted in 10% more likes with comments and stronger engagement.
  • Tinder's Face Check feature verifies that users are real and match their profile photos. It will roll out in the US and is required for new users in California, Colombia, Canada, India, Australia and Southeast Asia. "We have seen a 60% reduction in user views of profiles later identified as bad actors, and a 40% decrease in reports of bad actor activity," said Rascoff.

What was notable on Match's third quarter call is how experiments with the interface and new features have weighed on revenue as well as user growth.

Takeaway: The lesson from Rascoff appears to be to play the long game with experience.

Hinge Health: Physical therapy experiences

Hinge Health sits at the intersection of digital health with its network of physical therapists. The challenge is providing experiences that are "about the elegant unification of digital and in-person care," said Hinge Health President James Pursley.

The company is leveraging AI to provide a digital PT experience with a hybrid approach that brings in humans when needed. Daniel Perez, CEO of Hinge Health, said the company is focused on multiple AI efforts that impact experiences. Hinge Health's third quarter revenue was $154 million, up 53% from a year ago.

"Everything we do is centered around the triple aim, using technology to transform outcomes, experience and costs in health care," said Perez.

AI experience efforts include:

  • Robin, Hinge Health's AI care assistant, provides movement analysis. Robin is a 24/7 companion and when someone has a pain flare up, the AI assistant can gather data and details and alert physical therapists so care can be delivered faster. In the near future, Robin will be able to provide instant support and proactively check in with members.
  • Hinge Health is using proprietary TrueMotion Vision technology to analyze movements. TrueMotion Vision captures joint angles, symmetry and endurance across a battery of movements. That data is combined with targeted questions to assess joint health.
  • The company has leveraged AI internally to be more efficient at developing product features. Perez said the focus is on developer experiences. AI adoption is close to 100% and "we've seen a 32% improvement in developer experience scores from April through October," said Perez.

Takeaway: AI and automation improve the experience, but the option for a human touch matters to bring it home.

Comcast: Integrated approach

Comcast knows it has to improve its customer experience in the long run. Technology integration and AI will play a big role.

Speaking on Comcast's third quarter earnings call, Comcast President Michael Cavanagh said the cable provider is using AI to self-optimize network performance and its own WiFi gateways to offer seamless performance.

"We're taking meaningful steps to simplify the customer experience across all channels. Our new AI engine now supports agents, technicians and customers through assisted chat, phone, our website and our AI-enabled Xfinity Assistant platform," said Cavanaugh. "We also launched a program that connects customers to a live agent in seconds, which is now available to half of our customer base. It's still early, but we're moving fast and executing with focus towards a simpler, smarter and more seamless customer experience."

Takeaway: Comcast sees tech support, ease of installation and customer service on the same continuum.


Can We Still Trust What’s Real? Leadership in the AI Age | DisrupTV Ep. 417

In this week’s episode of DisrupTV, hosts Vala Afshar and R “Ray” Wang sit down with global leaders Dr. David Bray, Sue Gordon, and Barry O’Sullivan to explore how artificial intelligence is reshaping leadership, ethics, and decision-making in a fast-moving world.

The New Era of AI-Driven Leadership

The rapid acceleration of AI is changing how leaders think, decide, and act — and DisrupTV Episode 417 brings together some of the world’s most experienced voices to discuss how to lead effectively in this environment.

David Bray, known for his work in global change leadership, Sue Gordon, former Principal Deputy Director of National Intelligence, and Barry O’Sullivan, international AI and ethics expert, share powerful insights into what it means to lead with vision, trust, and adaptability as AI becomes a central force in every sector.

From government intelligence to enterprise innovation, these experts agree on one thing: the future belongs to leaders who can embrace AI’s potential without losing sight of the human element.

Leadership, Trust, and the Power of Letting Go

Sue Gordon highlighted that true leadership requires both adaptability and trust. Leaders must empower their teams, delegate responsibility, and resist the instinct to control every outcome.

She noted that in high-stakes environments like the CIA, success often depends on a leader’s ability to trust the judgment of others while maintaining clarity of vision. This “shared responsibility model” helps organizations move faster and respond better to complex challenges — a lesson that applies as much to startups as to intelligence agencies.

Barry O’Sullivan added that leaders must also set realistic expectations around AI. The technology can dramatically improve efficiency and decision-making, but it’s not a silver bullet. Recognizing AI’s limitations and maintaining transparency about its risks is essential for sustainable success.

AI, Ethics, and the Future of Decision-Making

David Bray discussed the next evolution of AI in government and enterprise — from predictive analytics to agentic AI capable of autonomous decision-making.

He shared how AI tools are already being used to amplify leadership intent, streamline collaboration, and even offer feedback on communication effectiveness. But he also warned that leaders must remain aware of their own biases and blind spots, ensuring AI becomes a tool for clarity, not confusion.

The discussion also touched on AI ethics, with panelists emphasizing that the next wave of innovation will require leaders to balance creativity, risk, and responsibility. As Bray put it, the goal isn’t to replace human leadership but to augment it with intelligence that empowers better choices.

Key Takeaways

  • AI demands adaptive leadership. Leaders must be open to learning, iterating, and delegating.
  • Trust is non-negotiable. Empowering teams builds speed, creativity, and resilience.
  • AI is powerful, but not perfect. Transparency about risks and limits fosters credibility.
  • Leadership is evolving. The most effective leaders will blend data-driven insights with emotional intelligence.
  • Self-awareness is a superpower. Understanding one’s biases and blind spots is essential in an AI-driven world.

Final Thoughts: Innovation Starts Within

As AI continues to evolve, leadership is being redefined — not by titles or hierarchies, but by vision, empathy, and adaptability.

Episode 417 of DisrupTV challenges today’s executives to think beyond automation and efficiency. The real question is: How will leaders use AI to enhance humanity — not just productivity?

From the intelligence community to the enterprise boardroom, the message is clear: the future of leadership lies in trust, transparency, and technological literacy.

🎧 Watch or listen to DisrupTV Episode 417 for the full conversation with David Bray, Sue Gordon, and Barry O’Sullivan — and discover how the next generation of leaders is preparing for the AI era.



Virgin Voyages: Lessons learned from scaling Google Gemini Enterprise AI agents

Nathan Rosenberg, Chief Brand & Marketing Officer at Virgin Voyages, said the company has increased email marketing open rates to 30%, with click-through rates of 20%, since deploying Gemini Enterprise AI agents that work with his copywriting team.

Virgin Voyages was cited as one of the flagship customers of Google Cloud's Gemini Enterprise when it was launched in October. Virgin Voyages said it deployed more than 50 specialized agents on Gemini Enterprise and has more on tap.

Speaking on a webinar for analysts, Rosenberg said "Email Ellie," the first agent deployed on Gemini Enterprise, combines the knowhow of Virgin Voyages' creative team with hyper-personalized marketing outreach. The AI agent is trained on internal brand frameworks and automates Virgin Voyages' tone, which is cheeky much like Rosenberg. Email Ellie has also cut campaign copy creation time by 40%.

"At Virgin globally, we're very focused on human experiences and our people," said Rosenberg, who noted that Virgin owner Richard Branson consistently says that "if you actually take care of your people, they will take care of your customers, and your customers will basically deliver the results."

"The most interesting thing in the relationship with Google is that they're a clever group of people who are very techy, and we're a human-centered organization," explained Rosenberg. "There's this perfect blend that says this isn't about the technology. Don't get me wrong. It's really helpful for us, but we don't start the conversation about technology. We start the conversation with what is the problem we're trying to solve, and how do we really understand what the customers want, and how do we deliver that?"

Rosenberg, who quips he barely knows how to use a copier, said a meeting with Google Cloud to talk Vertex AI and Gemini made it clear there's potential for his teams. "I hate the phrase of AI native, because it's really AI supporting," said Rosenberg. "But we have changed our entire organization. The advantage is when your people start to understand how it frees them up from the day-to-day drudgery and allows them to deliver incredible experiences."

While the Virgin Voyages buildout with Gemini Enterprise is still in progress, Rosenberg offered a set of lessons learned:

  • Think about outcomes more than saving money. "The problem with the AI conversation is that it is always about saving money or reducing headcount," said Rosenberg. "That's not what it's about. Rather than reducing our creative headcount we increased it. We've realized the tools are allowing us to scale. I have to keep going to my CFO and say I need more people because that's where the work is really being delivered. Understand what AI can do for you and how it can humanize contact more than you realize."
  • 50 AI coworkers. Rosenberg said his set of AI agents are viewed as coworkers that can take away the tasks that eat up human time. He said Virgin Voyages is using Gemini Enterprise to surface terms and conditions and ship changes to free up creative teams.
  • Frameworks matter. Rosenberg said Gemini Enterprise's guardrails and frameworks enable his team to focus. If a framework effectively eliminates distractions and prioritizes work, then there's a structure creative teams can scale. "At first it was chaotic because some of us never worked with agents. We weren't sure what to do with them, but with manifested agents in a structure the team was blown away in a good way," said Rosenberg. "I don't tell the team what to build or you should solve this problem. They are working out what agent they want to partner with and naming it."
  • Cultural returns. Yes, Virgin Voyages has hard returns, but one cultural benefit is Rosenberg's teams have more time to focus on efficacy of campaigns in a way they couldn't just 7 months ago. His team is looking at synthetic personas, asking questions and testing content with probabilistic scoring. "When they come to present work to me, I can't win the argument anymore because it's been tested," he said.
  • Ownership. Rosenberg said departments within companies should take ownership of AI and its tools. "As a marketer, it is the most exciting time ever to be in marketing, because this revolutionary AI is owned by marketing and sales more than technology people. The tech folks are there to help make the dreams come true," said Rosenberg.

Rosenberg said Virgin Voyages plans to scale its set of AI agents. "It is working, so much so that the copywriting team ended up producing at least 15 new agents to help them on a range of different things that are based on the incredible experience," he said. "What I love for our business is that AI isn't about cost cutting. It's about driving revenue and growth through mass personalization at scale."


Google Cloud's Ironwood ready for general availability

Google Cloud said its seventh generation Tensor Processing Unit (TPU), known as Ironwood, will be generally available soon as the company also outlined new Arm-based Axion instances.

The announcement highlights how hyperscalers, primarily Google Cloud and Amazon Web Services, are deploying custom chips for AI workloads to diversify from Nvidia and smooth out price performance ratios. Ironwood was announced at Google Cloud Next earlier this year.

AWS fired up its massive Project Rainier complex for Anthropic and then landed OpenAI, which immediately began procuring GPUs from AWS. AWS will announce Trainium3, which will feature a big performance boost, at re:Invent 2025 in December.

With that backdrop, Google Cloud, which already has a custom processor lead, struck with Ironwood. In a blog post, Google Cloud noted that its latest TPUs are designed for what it calls "the age of inference." The adoption of AI agents will require optimization and strong price performance.

Google Cloud, which counts OpenAI and Anthropic as customers, announced the following:

  • Ironwood general availability with 10x peak performance over TPU v5p. The processor has 4x performance per chip for training and inference relative to TPU v6e, or Trillium.
  • Anthropic will be a user of Ironwood instances.
  • Axion instances. Google Cloud announced N4A, a cost effective virtual machine, is now in preview. N4A offers 2x better price performance compared to current generation x86 virtual machines. Axion is based on Arm's Neoverse CPUs.
  • C4A metal, which is Google Cloud's first Arm bare metal instance, will be in preview soon.
  • Google Cloud is using Ironwood TPUs as a key layer of its AI Hypercomputer, which will scale up to 9,216 chips in a superpod.
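As context for claims like N4A's "2x better price performance," the multiple is typically a ratio of throughput per dollar between two instance types. A minimal sketch with hypothetical numbers, not Google's published benchmarks:

```python
def perf_per_dollar(throughput: float, hourly_cost: float) -> float:
    """Units of work delivered per dollar of instance time."""
    return throughput / hourly_cost

# Hypothetical: an Arm VM matching an x86 VM's throughput at half the hourly price
x86 = perf_per_dollar(100, 2.00)  # 50 units of work per dollar
arm = perf_per_dollar(100, 1.00)  # 100 units of work per dollar
print(arm / x86)  # → 2.0
```

The same ratio structure underlies performance-per-watt comparisons for inference chips; only the denominator changes from dollars to watts.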

The upshot is that the AI inference market is going to be much more competitive than the training market, which is dominated by Nvidia. Custom silicon, AMD, Intel and Qualcomm will all be in the mix.


Qualcomm CEO Amon: AI will be all about generating tokens for least amount of power

Qualcomm is looking to ride the next phase of AI infrastructure--the transition from training to AI inference.

The company, which recently launched AI accelerators for data centers, fleshed out a few details about the plan on its fourth quarter earnings call. Qualcomm is planning to lay out more about its AI and data center strategy at an upcoming event in early 2026.

CEO Cristiano Amon said Qualcomm's acquisition of Alphawave was a part of a broader effort to diversify. "We are incredibly excited about the size of the opportunity in the next phase of data center build-out where there's going to be real competition as we go from training to inference," said Amon.

Amon added:

"We have one very strategic asset in the industry, which is very competitive, power-efficient CPU. That is both for the head node of AI clusters as well as general purpose compute. And then we also have been building what we think is a new architecture dedicated for inference.

I think it's all going to be about generating the most amount of tokens with the least amount of power, and that's our right to play."

Qualcomm's point is already playing out. If you consider what AWS is doing with its upcoming Trainium3 chip and Google Cloud's TPUs, the key phrases are often performance per watt and leveraging commodity chips that scale.

In addition, AMD and Intel are also eyeing inference as the larger AI market to come. Nvidia, best known for horsepower and training, often notes that its GPUs are also used for inference workloads.

Amon said Qualcomm is in discussions with hyperscalers and designing its AI200, AI250 and all the parts that go with it. More details will be outlined early in 2026.

Key points about Qualcomm's AI accelerators:

  • Data center product revenue is projected to ramp in fiscal 2028, but the HUMAIN engagement is likely to pull sales forward to fiscal 2027, said Amon.
  • Qualcomm is getting interest in its data center efforts. The biggest reason? Power constraints. "We're thinking about what the future architecture should look like. We've thought about this for the edge as well, which means dedicated inferencing clusters," said Amon. "The goal is to have the highest possible compute density at the lowest possible cost and energy consumption to generate tokens. There may be an architecture beyond the GPU."
  • For now, Qualcomm is walking a line between saying too much and too little about its AI data center plans. Luckily for Qualcomm, the core business is doing fine.

Qualcomm's fourth quarter earnings and revenue topped expectations as did its first quarter outlook. Qualcomm reported non-GAAP earnings of $3 a share on revenue of $11.27 billion. Wall Street was expecting fourth quarter non-GAAP earnings of $2.87 a share on revenue of $10.76 billion.

In the fourth quarter, Qualcomm's handset revenue was up 14%, automotive up 17% and IoT up 7%.

For fiscal 2025, Qualcomm reported net income of $5.01 a share on revenue of $44.28 billion, up 14% from a year ago. Qualcomm's fourth quarter and fiscal 2025 results included a non-cash charge of $5.7 billion due to tax law changes. Qualcomm had to establish a valuation allowance against its deferred tax assets.

As for the outlook, Qualcomm projected first quarter revenue of $11.8 billion to $12.6 billion with non-GAAP earnings of $3.30 a share to $3.50 a share. The company saw strength in its chips for smartphones, notably premium Android devices, and combined automotive and IoT revenue for the fiscal year jumped 27%.


OpenAI touts enterprise mojo with 1 million business customers

OpenAI isn't about to cede all the enterprise fun to Anthropic, which is widely viewed as the LLM play for business.

In a blog post, OpenAI touted its enterprise customer base and customers such as Amgen, Commonwealth Bank, Booking.com, Cisco, Lowe’s, Morgan Stanley, T-Mobile and Target. The disclosure is well timed given that investors are starting to question OpenAI's ability to pay for the compute it is procuring in multiple deals.

The company counts as business customers enterprises that pay OpenAI for business use, along with ChatGPT for Work seats and consumption through its developer platform.

OpenAI's approach rhymes with how Apple (and Google, for that matter) entered the enterprise: gain a groundswell of consumer adoption, and those workers bring the tools to work.

According to OpenAI, ChatGPT's business impact is accelerating.

  • ChatGPT for Work has more than 7 million seats, up 40% in 2 months.
  • ChatGPT Enterprise seats are up 9x.
  • The company is also expanding its roster of connectors to corporate knowledge bases.

OpenAI also said future upside will come from businesses that want to build agentic workflows on OpenAI.


Freshworks delivers strong Q3, ups outlook, targets business teams

Freshworks reported better-than-expected third quarter earnings and upped its outlook for the fourth quarter as the company is expanding wallet share. The company is also aiming to land more business users.

The company reported a third quarter net loss of $7.5 million, or 2 cents a share, on revenue of $215.1 million, up 15% from a year ago. Non-GAAP earnings were 16 cents a share.

Wall Street was expecting Freshworks to report non-GAAP third quarter earnings of 13 cents a share on revenue of $208.8 million.

Key figures include:

  • Freshworks had 24,377 customers contributing more than $5,000 in annual recurring revenue.
  • Freddy AI doubled annual recurring revenue from a year ago to more than $20 million.
  • ARR for Freshservice beyond the IT department is growing as Freshservice for business teams has doubled year over year.

Freshworks will launch a standalone version of Freshservice for Business Teams, which won't require the broader platform. The standalone enterprise service management product, aimed at legal, HR, finance and facilities, currently has an annual run rate of $35 million, double that of a year ago.

Freshworks also raised its guidance. The company projected non-GAAP fourth quarter earnings of 10 cents a share to 12 cents a share on revenue of $217 million to $220 million. For 2025, Freshworks is projecting non-GAAP earnings of 62 cents a share to 64 cents a share on revenue of $833.1 million to $836.1 million.

In the long run, Freshworks is gunning to be a Rule of 40 company with revenue growth consistently in the mid-teens.
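The Rule of 40 holds that a software company's revenue growth rate plus its profit margin should total at least 40%. A minimal sketch of the math (the function name and the sample figures are illustrative, not Freshworks guidance):

```python
def rule_of_40(revenue_growth_pct: float, profit_margin_pct: float) -> bool:
    """A software company 'passes' the Rule of 40 when growth plus margin totals at least 40."""
    return revenue_growth_pct + profit_margin_pct >= 40.0

# With mid-teens growth, a margin around 25% or better clears the bar.
print(rule_of_40(15.0, 25.0))  # True: 15 + 25 = 40
print(rule_of_40(15.0, 20.0))  # False: 15 + 20 = 35
```

The trade-off is the point of the metric: slower growth has to be offset by fatter margins, and vice versa.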

Freshworks recently held its investor day, where it noted that upmarket demand from mid-market and enterprise customers has been growing as a share of revenue. Nevertheless, Freshworks faces tough competition in employee experience as well as customer experience.


We caught up with Freshworks CEO Dennis Woodside to talk shop. Here are the key points.

Competitive landscape. Woodside said: "We're competing in a 20,000-person company like Seagate. They don't have a large set of resources to throw at an ITSM platform. They want faster time to value." In ITSM, Freshworks' primary competition is ServiceNow in larger accounts and Atlassian in developer-led companies.

AI strategy. Woodside said next week at Freshworks Refresh the company will launch four pre-built AI agents for industries. The company already has AI agents for customer support, a Copilot for agent productivity and AI insights for management.

CX. "Our CX business has not made the up-market shift as aggressively as our IT business. It will over time, but right now, it's still more of an SMB-centered business," said Woodside. The CX business is growing at a 7% to 8% clip while ITSM is growing at a 20% to 23% clip. Half of the top accounts buy both ITSM and CX.

Customer sentiment. Woodside said: "CIOs are just trying to figure out how they can possibly support all of these new AI point solutions, and they're going back to what they already have, looking for AI embedded in their existing solutions." CIOs are also looking for alternatives to large vendors as well as ways to consolidate AI tools within existing systems of record.

 


Quantinuum launches Helios quantum computer, touts fidelity, enterprise customers

Quantinuum launched its new Helios quantum computer, a high-performance general purpose commercial system with 98 fully connected qubits and fidelity north of 99.9%.

The launch is aimed squarely at enterprises looking to deploy quantum computing for certain use cases. Indeed, Amgen, BlueQubit, BMW Group, JPMorgan Chase and SoftBank are initial customers pursuing biologics, fuel cell catalysts, financial analytics and organic materials.

Quantinuum said it has also signed a strategic partnership with Singapore's National Quantum Office (NQO) and National Quantum Computing Hub (NQCH). The deal provides access to Helios as well as an R&D center in Singapore.

Helios includes a first-of-its-kind real-time control engine with a software stack that lets developers program much the way they program classical computers. Helios also includes Guppy, a Python-based programming language for hybrid quantum-classical compute.

Quantinuum said Helios is available through Quantinuum's cloud service as well as on-premise. Dr. Rajeeb Hazra, President and CEO of Quantinuum, said "for the first time enterprises can access a highly accurate general purpose quantum computer to drive real world impact, transforming how industries innovate – from drug discovery to finance to advanced materials."

According to the company, Helios can enhance generative AI with quantum-generated data. Those use cases could include data analysis, material design and quantum chemistry. Quantinuum said it expanded its partnership with Nvidia to integrate Nvidia GB200 AI accelerators with Helios via NVQLink. In addition, Quantinuum will switch to Nvidia accelerated computing for Helios and future systems, using Quantinuum's Guppy alongside Nvidia's CUDA-Q platform to perform the real-time error correction critical to its roadmap.

Quantinuum also said it is launching two new programs to develop an ecosystem for quantum computing: Q-Net, a user group meant to spur collaboration with customers, and a startup partner program to develop third-party applications on Helios.

Constellation Research received a briefing on Helios from Dr. David Hayes, Director of Computational Design and Theory at Quantinuum. Here are the key points:

  • Unprecedented Quantum Performance: "We really do believe Helios is the highest fidelity machine in the world at this scale. It's almost 100 qubits. We got close. It's 98, and that first number there is the two-qubit gate fidelity, 99.92%," said Hayes.
  • Breakthrough in Quantum Error Correction: Hayes said Helios reached an efficient error correction ratio. "We get to 48 [logical qubits], but even that, I think, will be surprising to people out there. We didn't quite get to 49 in this case, so we didn't quite get to a two-to-one encoding ratio for error correction," he said.
  • Practical Scientific Applications: Hayes said Helios successfully modeled a high-temperature superconductor to demonstrate that quantum computers are moving beyond theoretical demonstrations to real scientific research.
  • Quantum Programming Environment. Helios includes Guppy. "Guppy was designed from the go-to make fault-tolerant programming really, really user friendly,” said Hayes. "It's Python based to make it easy to use, but it's a lot more performant than Python."
  • Future Development and AI Integration. Hayes said Quantinuum is exploring the intersections between quantum computing and AI. "We're using AI in the lab to create new quantum circuits, more efficient quantum circuits, and we can have AI kind of fill in the gaps," said Hayes.
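To put the encoding-ratio numbers above in context: 98 physical qubits supporting 48 logical qubits works out to just over two physical qubits per logical qubit (an exact two-to-one ratio would have required 49 logical qubits). A quick check of the arithmetic:

```python
# Encoding-ratio arithmetic from the Helios briefing figures.
physical_qubits = 98   # Helios' fully connected physical qubits
logical_qubits = 48    # logical qubits reported after error correction
ratio = physical_qubits / logical_qubits
print(f"{ratio:.2f} physical qubits per logical qubit")  # 2.04, just shy of 2-to-1
```

The lower this ratio, the less hardware overhead error correction imposes, which is why a near two-to-one figure is notable.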

Market Trends, Sales Force Automation, and AI Fluency | CRTV Episode 117

📢 ConstellationTV Episode 117 just dropped! This week dives into the hottest topics in enterprise technology...

🔹 [00:18] Hear the latest in AI agent infrastructure. From the great GPU "land grab" to groundbreaking deals by AWS, Microsoft, and OpenAI, CR analysts unpack how tech giants and disruptors are reshaping the market.

🔹 [11:16] Martin Schneider shares findings on SAP’s next-gen salesforce automation—unveiling advances in loyalty management, customer engagement, and AI-powered revenue intelligence.

🔹 [15:22] Learn how TD SYNNEX is building an "AI fluent" workforce and transforming distribution through agentic automation, innovation, and strategic change management. 2025 BT150 executive Kristie Grinnell shares more in an interview with Larry Dignan.

Watch on ConstellationTV: https://www.youtube.com/embed/184JT2j8JkY