Results

Anthropic to spend $50 billion on AI infrastructure via Fluidstack partnership

Anthropic said it will invest $50 billion in building its own AI data centers in a partnership with Fluidstack. The first data centers will be built in New York and Texas with more sites on deck.

The move comes after Anthropic announced it would use Google Cloud TPUs as well as an AWS Trainium2 supercluster; Anthropic also uses Nvidia processors. That multi-cloud, multi-chip approach differentiated Anthropic from OpenAI's spending spree on operating its own data centers. Now Anthropic has decided it has to roll its own AI infrastructure too.

According to Anthropic, the Fluidstack partnership will focus on custom-built infrastructure designed for the large language model provider's workloads and R&D.

Like most AI infrastructure announcements, Anthropic was sure to mention that the project will create 800 permanent jobs and 2,400 construction jobs and support US AI leadership. The data centers will come online throughout 2026.

Dario Amodei, CEO of Anthropic, said the company is getting closer to AI that can accelerate scientific discovery and solve complex problems. "These sites will help us build more capable AI systems that can drive those breakthroughs," he said.

For Fluidstack, the deal with Anthropic is a big win. Fluidstack counts Meta, Nvidia, Samsung, Dell, Honeywell and others as core customers.

Holger Mueller, an analyst at Constellation Research, said: "Clearly, Anthropic is charting a different course compared to OpenAI. The question is: what is the price for that flexibility? That is, how much does the portability cost Anthropic? Hopefully it's not only a cost arbitrage game."


IBM launches IBM Quantum Nighthawk processor

IBM launched its most advanced quantum processor, IBM Quantum Nighthawk, and announced IBM Quantum Loon, an experimental processor that demonstrates all of the components needed for fault-tolerant quantum computing.

Big Blue announced the roadmap additions at its annual quantum developer forum.

The news from IBM lands as Quantinuum launched its Helios system and Google highlighted its own advances. In addition, pure-play quantum computing companies have been able to build up their balance sheets as they develop systems.

For IBM, the goal is to deliver quantum advantage by the end of 2026 and fault-tolerant quantum computing by 2029. IBM has offered frequent updates about its quantum computing roadmap with two in 2025.

Here's a look at the key announcements from IBM.

IBM Quantum Nighthawk is designed to complement the company's quantum computing software stack and architecture to deliver quantum advantage. IBM Quantum Nighthawk will be delivered by the end of 2025.

Key points:

  • The processor will have 120 qubits linked by 218 tunable couplers, more than 20% more couplers than IBM Quantum Heron.
  • Nighthawk will be able to execute circuits with 30% more complexity than Heron at low error rates.
  • IBM's latest architecture lets users explore more demanding problems that require up to 5,000 two-qubit gates.
  • IBM said Nighthawk will deliver up to 7,500 gates by the end of 2026 and up to 10,000 gates in 2027.
  • By 2028, Nighthawk systems could support up to 15,000 two-qubit gates across more than 1,000 connected qubits extended through long-range couplers.
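The gate-budget figures above trace a steady scaling curve. A quick back-of-envelope check of the year-over-year growth the roadmap implies (only the numbers IBM stated, nothing else assumed):

```python
# Two-qubit gate budget on IBM's Nighthawk roadmap, per the announcement.
gate_roadmap = {2025: 5_000, 2026: 7_500, 2027: 10_000, 2028: 15_000}

years = sorted(gate_roadmap)
for prev, curr in zip(years, years[1:]):
    growth = gate_roadmap[curr] / gate_roadmap[prev] - 1
    print(f"{prev} -> {curr}: +{growth:.0%}")  # +50%, then +33%, then +50%
```

In other words, IBM is promising roughly 1.5x gate-budget growth per year through 2028.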

IBM expects quantum advantage to be reached by the end of 2026 and verified by the broader ecosystem; the company has contributed three candidate experiments for that verification.

Qiskit, IBM's quantum computing software, will get a new execution model to enable fine-grained control and a C API for HPC-accelerated error mitigation.

IBM will deliver a C++ interface to Qiskit to help developers bridge HPC and quantum computing. By 2027, IBM noted that it will extend Qiskit with computational libraries for machine learning and optimization.

The company also said that it will move toward a large-scale fault-tolerant quantum computer by 2029. The effort will be led by IBM Quantum Loon, an experimental processor. Key items for IBM Quantum Loon:

  • Loon has a new architecture to implement and scale components for high-efficiency quantum error correction.
  • IBM has proven it is possible to use classical computing hardware to accurately decode errors in real time (less than 480 nanoseconds) using qLDPC codes. That ability will be coupled with Loon to scale high-fidelity superconducting qubits.

The company said that it will scale its 300mm quantum wafer fabrication in the Albany NanoTech Complex in New York. The lab will be used to expand its quantum processor development and wafer manufacturing.


AMD sees big growth over next 3 to 5 years, AI boom continuing

AMD projected a compound annual revenue growth rate of 35% over the next three to five years and said demand for AI infrastructure and its chip portfolio is strong.

CEO Lisa Su said during AMD's investor day that the pace of AI infrastructure spending and pace of change is higher than she's ever seen before. "We see a tremendous opportunity ahead to deliver sustainable, industry-leading growth," said Su.

Su noted in a question-and-answer session that the compound annual growth rate may be front-loaded over the three- to five-year horizon. "We're giving a three to five year TAM and the outer years have a little bit less visibility than the near term years," said Su. "We would expect the near term years to grow faster than 80%."

AMD is seeing strong interest in its AI accelerators. "There is a desire for significant amount of compute. We are working with the supply chain today to make sure that we have the broad ability to support all the compute that's required," said Su.

Here's a look at long-term growth targets over the next three to five years.

  • AMD sees non-GAAP earnings topping $20 a share with non-GAAP operating margins of more than 35%.
  • AMD's data center business will grow at a 60% compound annual growth rate (CAGR), with 10% growth for its PC and gaming and embedded units.
  • The company sees its EPYC CPU server chip portfolio gaining more than 50% market share. In data center AI, AMD sees CAGR of more than 80%.
  • PC market share will top 40%.
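As a rough illustration of what those growth rates compound to over the stated horizon (the multiples below are simple arithmetic on the announced rates, not AMD guidance):

```python
# Compound a stated CAGR into a total growth multiple over a horizon.
def compound(cagr: float, years: int) -> float:
    return (1 + cagr) ** years

# Company-wide 35% CAGR: roughly 2.5x revenue in 3 years, 4.5x in 5.
print(round(compound(0.35, 3), 2))  # 2.46
print(round(compound(0.35, 5), 2))  # 4.48
# Data center AI at an 80% CAGR would be ~5.8x in just 3 years.
print(round(compound(0.80, 3), 2))  # 5.83
```

That spread between the company-wide and data center AI multiples is why Su flagged the front-loaded, near-term years.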

On the product front, AMD executives outlined the AMD Instinct roadmap including Helios systems with AMD Instinct MI450 Series GPUs followed by the MI500 in 2027.

The company also touted its next-gen Venice CPUs and AI networking offering for scale-up and scale-out workloads.

Su was asked about the risk to AI infrastructure spending, notably how much of it has to be funded by OpenAI. Su said AMD "is quite disciplined how we plan these things" and that the company is "comfortable that we know how to do it."

She added that the companies funding AI infrastructure, such as Google, Microsoft and AWS, are well funded. There are also sovereign nations spending heavily. Su said:

"All of the other large hyperscalers who are talking about raising their forecasts are extremely well funded. Their balance sheets are really strong, and the fact that they are choosing to invest more in AI should be a good indicator to the audience that they see value in it."

Regarding OpenAI, AMD's Su said:

"The reason that we are so forward leaning on this is it is great for us in terms of just the amount of learning that we get from engaging at gigawatt scale with a customer that's on the bleeding edge of foundational models. We're doing this in a very structured way. This is a very unique moment in AI and we shouldn't be short sighted. If the AI usage grows as much as we expect there's going to be plenty of financing."


QAD | Redzone acquires Kavida.ai to add procurement AI agents

QAD | Redzone said it has acquired Kavida.ai in a move that will bring AI agents to its procurement and supply chain workflows.

Terms of the deal weren't disclosed.

For QAD | Redzone, the Kavida.ai purchase will accelerate its Champion AI roadmap. Kavida.ai's know-how in procurement agent training and inbox-to-ERP automation will be integrated into the company's portfolio. QAD | Redzone consists of three interconnected core offerings:

  • QAD, an ERP system focused on midmarket manufacturers.
  • Champion AI, a set of AI tools that works across the platform to enable the manufacturing workforce.
  • Redzone, a system to bring data, AI and automation tools for frontline workers to speed up decisions.

Kavida.ai will bring procurement automation agents to Champion AI with the aim of freeing up about half of a buyer's workday by eliminating manual post-order and supplier collaboration work. The Kavida.ai PO, RFQ and Sales agents will become immediately available to QAD | Redzone customers, and Kavida.ai's founders, Anam Rahman and Sumit Sinha, will assume leadership roles.

Rahman and Sinha said in a blog post that the company was founded nearly five years ago to address a big issue in manufacturing--many enterprises run on email and spreadsheets.

According to QAD | Redzone, the plan is to add Kavida.ai's procurement agents to its platform to drive manufacturer productivity.

Sanjay Brahmawar, CEO of QAD | Redzone, said AI needs to deliver value quickly. "By integrating Kavida.ai’s technology and team, we’re helping our customers unlock value faster — automating critical workflows, improving supply-chain reliability, and giving every buyer, planner, and supplier a powerful digital co-pilot," he said.

Here’s a look at the flow of a Kavida.ai agent.



CoreWeave's great AI infrastructure race

CoreWeave said it is dealing with ongoing supply chain issues as demand far exceeds capacity and revenue expected in the fourth quarter will slip to the first quarter. Nevertheless, CoreWeave's bet is that self-building its AI infrastructure will be a winning strategy in the future.

Michael Intrator, CEO of CoreWeave, said on the company's third quarter earnings call that there are multiple delays but the biggest issue is at the powered-shell level. Powered shell refers to a facility where the power and exterior are completed, but the interior isn't finished.

"There's plenty of power right now, and we believe that there will be ample power for the next couple of years. But really the challenge is the powered shell," said Intrator.

CoreWeave's third quarter had a bevy of moving parts to consider and also reflected emerging skepticism about capital expenditures for AI infrastructure. Although AWS, Alphabet and Microsoft all said capital spending would continue to surge for AI infrastructure, Wall Street openly questioned Meta's plans. In Meta's case, it could simply be a case of metaverse traumatic stress disorder, but the focus of AI spending is turning to returns.

Consider the following for CoreWeave milestones in what is a frenetic pace of scaling:

  • In the third quarter, CoreWeave had revenue of $1.4 billion, up 134%.
  • Revenue backlog at the end of the third quarter was $55 billion.
  • CoreWeave will deliver more than 1 gigawatt of contracted capacity to customers within the next 12 to 24 months.
  • The company landed third quarter compute contracts with Meta and OpenAI.
  • A planned merger with Core Scientific is officially off and Intrator said the price was simply too high.
  • CoreWeave is diversifying its stack with the acquisition of OpenPipe, which is a platform for training AI agents, and Marimo, a developer workflow company. CoreWeave also acquired Monolith for an industrial AI play.
  • The company launched a unit to land US government customers and added Jon Jones, an AWS alum, as its first chief revenue officer.
  • CoreWeave also launched AI Object Storage, which optimizes the storage layer for AI workloads. CoreWeave's storage platform has topped $100 million in annual recurring revenue.

However, CoreWeave's buildout comes at a price. In the third quarter, CoreWeave delivered a net loss of $110.1 million, an improvement on the $360 million net loss a year ago. Net interest expense in the third quarter was $310.55 million, up from $104.4 million a year ago. Operating income margin was 4% compared to 20% a year ago.
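Those figures are worth lining up against each other. A rough sketch using only the numbers above (operating income is inferred from the stated 4% margin, so treat it as approximate):

```python
# CoreWeave Q3 figures from the report, in $M.
revenue = 1_400.0              # up 134% year over year
operating_margin = 0.04        # vs. 20% a year ago
net_interest_expense = 310.55  # vs. $104.4M a year ago

operating_income = revenue * operating_margin
coverage = operating_income / net_interest_expense
print(round(operating_income, 1))  # 56.0 -- roughly $56M
print(round(coverage, 2))          # 0.18 -- interest far outpaces operating income
```

By this crude measure, quarterly interest expense ran more than five times operating income, which is the tension at the heart of the buildout.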

CoreWeave is obviously betting that if it builds the infrastructure customers will come. "AI adoption is progressing beyond the frontier AI labs and hyperscalers. Broader global demand and our recent large wins are driving diversification of our revenue base," said Intrator, who noted customer wins including CrowdStrike, Rakuten and NASA.

Jones, who was the head of startups and venture capital at AWS, will look to add AI natives that will grow with CoreWeave.

No CoreWeave customer in the third quarter represented more than 35% of the company's revenue backlog. The customer base is still concentrated, but well below the 85% level at the start of 2025. Sixty percent of CoreWeave's revenue backlog is with investment grade customers.

The race

In many ways, CoreWeave symbolizes much of the AI infrastructure market in that there's a race between investor patience and scaling amid fears that overcapacity may loom.

Intrator said supply chain issues may be a risk. "While we are experiencing relentless demand for our platform, data center developers across the industry are also enduring unprecedented pressure across supply chains. In our case, we are affected by temporary delays related to a third-party data center developer who is behind schedule. This impacts fourth quarter expectations," he said.

The customer affected by the current delays agreed to adjust the delivery schedule and extend the expiration date.

CoreWeave said 2025 revenue will be between $5.05 billion and $5.15 billion, with adjusted operating income of $690 million to $720 million and more than 850 megawatts of active power.

Nitin Agrawal, CoreWeave's CFO, said 2025 capital expenditures will be between $12 billion and $14 billion, and the 2026 figure will more than double. Interest expense in 2025 will range from $1.21 billion to $1.25 billion.

Agrawal said:

"In Q4, we will be bringing online some of the largest scale deployment in our company's history. This will have a near-term impact on adjusted operating margin due to the timing difference between when data center costs are first incurred and when we start recognizing revenue.

We expect 2025 interest expense in the range of $1.21 billion to $1.25 billion, driven by increased debt to support our demand-led CapEx growth, partly offset by an increasingly lower cost of capital."

Intrator was asked about CoreWeave's strategy to self-build infrastructure. He said CoreWeave has diversified providers and the ability to self-build data centers makes it a larger player in the supply chain. Intrator added that CoreWeave does work with third-party data center providers, but self-building is "about derisking delivery across the broader portfolio."

"We just look at self-build as an additional piece of the puzzle. It puts us closer to the physical infrastructure. It embeds us deeper into the supply chain around the world so that we have firsthand information," said Intrator. "We just think that you need to be on both sides of this fence in order to be as effective as you can be derisking what is a complicated supply chain environment."

Add it up and CoreWeave is going to be a fascinating business school case study. Is CoreWeave's balance sheet just a pile of debt or growth capital? Can CoreWeave remain differentiated in three to four years? Will CoreWeave build out its AI software stack to play a larger revenue role?

The CoreWeave saga will be a fascinating two- to three-year race. Why? CoreWeave has no debt maturing until 2028.

Constellation Research analyst Holger Mueller said:

"CoreWeave showed outstanding growth with revenue growing 150%+ YoY. It is also showing the skeptics that it is not a money-losing business, as EPS improved year over year. Another quarter like this and CoreWeave should be in the black for Q4 on an adjusted basis. With that demonstrated, the focus needs to shift to CoreWeave keeping the growth going amid supply chain challenges as it secures capital, delivers data center capacity and runs customer workloads well. At the moment, the first concern with CoreWeave is delivering data centers. We will see if these issues are addressed in Q4."


Google Cloud, KPMG outline lessons learned from Gemini Enterprise deployments

KPMG is both a partner and a customer for Google Cloud and that dual role is honing methodologies, use cases and approaches for AI agent deployments.

On a webinar for analysts, Google Cloud and KPMG walked through the early lessons learned from deploying Gemini Enterprise.

At Google Cloud Next in April, KPMG said it would expand its AI partnership with Google Cloud. KPMG said it would use Google Cloud to scale its multi-agent platforms to transform business processes and integrate Gemini Enterprise to boost internal productivity.

Specifically, KPMG is leveraging Gemini Enterprise and Vertex AI with other services. Google Cloud is also being used to build AI capabilities and agents for KPMG Law US.

Stephen Chase, Global Head of AI & Digital Innovation at KPMG, said the firm adopted Gemini Enterprise across the workforce with 90% of employees accessing the system within two weeks of launch. "We believe this is the fastest adopted technology our firm has had and we are in a regulated industry," said Chase. "We went into it with the idea this was going to be part of our overall transformation. It was never about individual use cases. It was about sparking innovation."

KPMG and Google Cloud teamed up on the Gemini Enterprise deployment to hone best practices for regulated deployments.

Hayete Gallot, President of Customer Experience for Google Cloud's global, multi-billion-dollar commercial business, said scaling AI agents is about building repeatable processes and methodologies to scale.

"Beyond the models, it's really about how you're going to build those multi-agent systems," said Gallot. "We've done a lot of work to help our customers from the learnings we've had in building those multi agents. We've packaged that through our ADK (Agent Development Kit) so they can build their own agents."

Gallot added that Google Cloud is investing in the ecosystem and partners so customers can scale agentic AI. She said Gemini Enterprise is an example of providing pre-built agents for coding and research while giving customers the leeway to build and connect other AI agents.

"The more the ecosystem is on a common set of tools and protocols, the better it is to build those multi agent experiences," said Gallot.

KPMG's internal Gemini Enterprise deployment

Chase walked through the early lessons from the KPMG internal adoption of Gemini Enterprise. Among the key points:

Understand the data and regulatory issues and take a measured approach. Chase said KPMG had a good data foundation and understanding of the regulatory issues. "We took a measured approach to rolling it out and testing," said Chase. "We were doing the evaluations on what we were seeing versus what we thought we might see. Were we getting the right data and responses? Was the connector delivering back what we expected with the right controls? We spent a lot of time testing upfront."

Co-innovation. Chase said Google Cloud and KPMG engineers worked together on operating agents in the consulting firm's security environment. "We were helping actually shape how agents are built in Gemini Enterprise and used that to build trust in AI in our transformation program," said Chase.

Use cases. Chase said the first problem KPMG was trying to solve--and it's critical in a services firm--was enterprise search. "I need good answers and I need to get them right now," said Chase. "Solving that problem was one of the reasons people gravitated to the system."

NotebookLM as a go-to tool. Chase said NotebookLM got a lot of play internally and about 11,000 notebooks have been shared after a month and a half. Gemini Enterprise's Deep Research AI agent is also getting a lot of usage.

Data quality is everything. Chase said KPMG also worked through data quality issues to make sure responses returned were correct and kept client confidential information private.

Beware of AI sprawl. Chase said one of the things plaguing AI deployments is that enterprises are installing more AI than people can consume.

Client facing deployments

KPMG is also deploying Gemini Enterprise at its enterprise accounts.

Chase said KPMG is looking to take its best practices and make them available broadly to clients.

"Ultimately, clients will share the agents they build with each other. And some of those will be industrialized," said Chase.

Once agents are industrialized they can be distributed and "spark innovation at the edge and core and everything else we're doing," said Chase. "That's what our clients are really interested in."

The other key item in Gemini Enterprise deployments is that it's a horizontal system that "fits really nicely in a heterogeneous environment," said Chase.

"Gemini Enterprise doesn't have to be in a monolithic environment," added Chase.

Gallot said that Google Cloud has revamped its technical teams to be hands on and focus on methodology. "We're building a lot of consultative capability in our front end so our people can spark ideas with customers. We have developed a methodology to help our customers to go from idea to production," she said. "It's technology, methodology, catalog and people."

Enterprises are currently looking for knowledge in agentic AI deployments. Chase said the key issues for clients are:

  • Data security and broader cybersecurity.
  • Data management.
  • Use cases. "We have a dedicated process that we go through to pull use cases from both client work and what we're doing internally," said Chase.

For KPMG, the next step after collecting use cases for processes such as finance, procurement and various operating areas, say consumer lending at a bank, is to create reusable starter kits.

"We're all headed toward orchestrating agents and what we're working on now is the building blocks to get us there," said Chase.

These building blocks are then shared across KPMG's tax, audit and advisory service lines. Every client will have different circumstances, but KPMG's goal is to have common areas that can be adapted. Sharing those lessons will make it easier to generate returns.

"We get a lot of questions in the enterprise and if they're going to invest we need to help demystify AI agents and share lessons," said Chase.


A look at the intersection of AI and customer experience

Artificial intelligence and customer experience are a common intersection on earnings conference calls. Enterprises are looking to connect the dots between the lifetime value of a customer, driving revenue and hybrid approaches that meld technology and humans.

Here's a look at some of the CX efforts detailed in recent days.

Uber: Lifetime experience

Uber is on track to support about 14 billion rides in 2025, but the goal is to drive cross-platform usage and engage consumers over a long period. Think lifetime experience over lifetime value, even though the two are closely related.

Note the nuance of lifetime experience messaging from Uber CEO Dara Khosrowshahi. Lifetime value of a customer is a common metric that revolves around the total predicted revenue a company can get from an entire relationship. LTV is transactional.

Lifetime experience is a view that Uber can go from providing rides to multiple services over time. In theory, lifetime experience could be more valuable and lead to deeper customer relationships.

"At its core, Uber is a trips machine built to make rides and deliveries happen affordably at scale," explained Khosrowshahi. "While an exceptional trip experience will always be core to who we are, we’re now expanding our focus beyond the next trip—to consumers' entire lifetime experience with Uber. Taking this lifetime view means thinking more holistically about how people engage across our platform—sometimes making investments that may reduce short-term results but strengthen long-term loyalty, or prioritizing actions that benefit the platform overall, even if one business line bears an immediate cost."

Khosrowshahi said Uber One is one program designed to encourage cross-platform engagement. Consumers who engage across Uber's services have 35% higher retention rates and spend three times as much as those who don't.
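A simple geometric-retention model shows why those two statistics compound. The base figures below ($100 per period, 60% retention) are hypothetical placeholders; only the 35% retention lift and 3x spend multiple come from Uber:

```python
def ltv(spend_per_period: float, retention: float) -> float:
    # Textbook geometric-series lifetime value: spend per period
    # times the expected number of periods, 1 / (1 - retention).
    return spend_per_period / (1 - retention)

base = ltv(100, 0.60)          # hypothetical single-service user: $250
multi = ltv(300, 0.60 * 1.35)  # 3x spend plus a 35% retention lift
print(round(multi / base, 1))  # 6.3 -- over 6x the lifetime value
```

The retention lift does most of the work: it stretches the expected relationship length, which is exactly the "lifetime experience" framing.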

Lifetime experience also accounts for new services Uber may add in the future. Today, it has a small base of consumers using multiple services, but you could play out a scenario where rides, delivery and maybe healthcare are delivered over a lifetime. On the flip side, Uber is seeing 9.4 million gig workers work across the platform for rides and delivery. Most Uber workers are focused on one task.

Khosrowshahi said: "Over the coming years, we will change both by converting couriers to drivers and vice-versa, and by further extending our flexible earnings model beyond rides and delivery. For example, we recently announced that we will be piloting digital tasks in the Uber Driver app, powered by Uber AI Solutions. The pilot will give drivers more ways to earn during downtime by completing tasks like uploading or tagging photos to help train AI models. Our ambitions here are much larger, and you will see us lean into this opportunity in the years ahead."

Takeaway: Consider lifetime experience efforts to drive traditional lifetime value of a customer.

Match: Growth depends on experience flywheel

Match Group CEO Spencer Rascoff said the company is leveraging AI across its brands, notably Tinder, Hinge and Match, to reboot growth with experiences that lead to outcomes--and presumably more revenue. Match is planning to launch a revamped Tinder in the spring.

Rascoff refers to the turnaround as a reset, revitalize and resurgence. The reset is complete and the latter two parts are underway.

"We believe our business model thrives when user outcomes improve. Better outcomes, driven by higher quality experiences, better matches and more meaningful connections, build confidence in our product and drive new users through positive word of mouth. User success builds trust in the category and in Match Group's apps," said Rascoff. "By getting the user experience right, we will further deliver real success stories, which we use in marketing to amplify growth by driving new user acquisition and reactivations. Our marketing strategy, especially at Tinder and Hinge, is focused on fueling category consideration bringing in new and lapsed users through product-led storytelling that reflects real experiences happening across our brands."

Match estimates there are 250 million actively dating singles worldwide not currently on dating apps. Match is looking to reengage 30 million lapsed users and attract 220 million first timers. However, Match has a Gen Z problem. Enter a series of AI efforts with many of them revolving around trust and authenticity on the platform.

For instance:

  • Tinder will get Chemistry, an AI-driven interactive matching feature that learns about users via questions and, with permission, from their camera roll to understand interests and personality. Chemistry is designed to combat "swipe fatigue" and surface a few highly relevant profiles each day. The feature is live in New Zealand and Australia.
  • Hinge has AI-first features including Conversation Starters, personalized prompts for a first message. The tool has resulted in 10% more likes with comments and stronger engagement.
  • Tinder's Face Check feature verifies that users are real and match their profile photos. It will roll out in the US and is required for new users in California, Colombia, Canada, India, Australia and Southeast Asia. "We have seen a 60% reduction in user views of profiles later identified as bad actors, and a 40% decrease in reports of bad actor activity," said Rascoff.

What was notable in Match's third quarter call is how experience experiments on the interface and new features have hampered revenue as well as user growth.

Takeaway: The lesson from Rascoff appears to be to play the long game with experience.

Hinge Health: Physical therapy experiences

Hinge Health sits at the intersection of digital health with its network of physical therapists. The challenge is providing experiences that are "about the elegant unification of digital and in-person care," said Hinge Health President James Pursley.

The company is leveraging AI to provide a digital PT experience with a hybrid approach that brings in humans when needed. Daniel Perez, CEO of Hinge Health, said the company is focused on multiple AI efforts that impact experiences. Hinge Health's third quarter revenue was $154 million, up 53% from a year ago.

"Everything we do is centered around the triple aim, using technology to transform outcomes, experience and costs in health care," said Perez.

AI experience efforts include:

  • Robin, Hinge Health's AI care assistant, provides movement analysis. Robin is a 24/7 companion and when someone has a pain flare up, the AI assistant can gather data and details and alert physical therapists so care can be delivered faster. In the near future, Robin will be able to provide instant support and proactively check in with members.
  • Hinge Health is using proprietary TrueMotion Vision technology to analyze movements. TrueMotion Vision captures joint angles, symmetry and endurance across a battery of movements. That data is combined with targeted questions to assess joint health.
  • The company has leveraged AI internally to be more efficient on developing product features. Perez said the focus is on developer experiences. AI adoption is close to 100% and "we've seen a 32% improvement in developer experience scores from April through October," said Perez.

Takeaway: AI and automation improve experience, but the option of a human touch matters to bring it home.

Comcast: Integrated approach

Comcast knows it has to improve its customer experience in the long run. Technology integration and AI will play a big role.

Speaking on Comcast's third quarter earnings call, Comcast President Michael Cavanaugh said the cable provider is using AI to self-optimize network performance, along with its own Wi-Fi gateway, to offer seamless service.

"We're taking meaningful steps to simplify the customer experience across all channels. Our new AI engine now supports agents, technicians and customers through assisted chat, phone, our website and our AI-enabled Xfinity Assistant platform," said Cavanaugh. "We also launched a program that connects customers to a live agent in seconds, which is now available to half of our customer base. It's still early, but we're moving fast and executing with focus towards a simpler, smarter and more seamless customer experience."

Takeaway: Comcast sees tech support, ease of installation and customer service on the same continuum.


Can We Still Trust What’s Real? Leadership in the AI Age | DisrupTV Ep. 417

Can We Still Trust What’s Real? Leadership in the AI Age | DisrupTV Ep. 417

In this week’s episode of DisrupTV, hosts Vala Afshar and R “Ray” Wang sit down with global leaders Dr. David Bray, Sue Gordon, and Barry O’Sullivan to explore how artificial intelligence is reshaping leadership, ethics, and decision-making in a fast-moving world.

The New Era of AI-Driven Leadership

The rapid acceleration of AI is changing how leaders think, decide, and act — and DisrupTV Episode 417 brings together some of the world’s most experienced voices to discuss how to lead effectively in this environment.

David Bray, known for his work in global change leadership, Sue Gordon, former Principal Deputy Director of National Intelligence, and Barry O’Sullivan, international AI and ethics expert, share powerful insights into what it means to lead with vision, trust, and adaptability as AI becomes a central force in every sector.

From government intelligence to enterprise innovation, these experts agree on one thing: the future belongs to leaders who can embrace AI’s potential without losing sight of the human element.

Leadership, Trust, and the Power of Letting Go

Sue Gordon highlighted that true leadership requires both adaptability and trust. Leaders must empower their teams, delegate responsibility, and resist the instinct to control every outcome.

She noted that in high-stakes environments like the CIA, success often depends on a leader’s ability to trust the judgment of others while maintaining clarity of vision. This “shared responsibility model” helps organizations move faster and respond better to complex challenges — a lesson that applies as much to startups as to intelligence agencies.

Barry O’Sullivan added that leaders must also set realistic expectations around AI. The technology can dramatically improve efficiency and decision-making, but it’s not a silver bullet. Recognizing AI’s limitations and maintaining transparency about its risks is essential for sustainable success.

AI, Ethics, and the Future of Decision-Making

David Bray discussed the next evolution of AI in government and enterprise — from predictive analytics to agentic AI capable of autonomous decision-making.

He shared how AI tools are already being used to amplify leadership intent, streamline collaboration, and even offer feedback on communication effectiveness. But he also warned that leaders must remain aware of their own biases and blind spots, ensuring AI becomes a tool for clarity, not confusion.

The discussion also touched on AI ethics, with panelists emphasizing that the next wave of innovation will require leaders to balance creativity, risk, and responsibility. As Bray put it, the goal isn’t to replace human leadership but to augment it with intelligence that empowers better choices.

Key Takeaways

  • AI demands adaptive leadership. Leaders must be open to learning, iterating, and delegating.
  • Trust is non-negotiable. Empowering teams builds speed, creativity, and resilience.
  • AI is powerful, but not perfect. Transparency about risks and limits fosters credibility.
  • Leadership is evolving. The most effective leaders will blend data-driven insights with emotional intelligence.
  • Self-awareness is a superpower. Understanding one’s biases and blind spots is essential in an AI-driven world.

Final Thoughts: Innovation Starts Within

As AI continues to evolve, leadership is being redefined — not by titles or hierarchies, but by vision, empathy, and adaptability.

Episode 417 of DisrupTV challenges today’s executives to think beyond automation and efficiency. The real question is: How will leaders use AI to enhance humanity — not just productivity?

From the intelligence community to the enterprise boardroom, the message is clear: the future of leadership lies in trust, transparency, and technological literacy.

🎧 Watch or listen to DisrupTV Episode 417 for the full conversation with David Bray, Sue Gordon, and Barry O’Sullivan — and discover how the next generation of leaders is preparing for the AI era.


Virgin Voyages: Lessons learned from scaling Google Gemini Enterprise AI agents

Virgin Voyages: Lessons learned from scaling Google Gemini Enterprise AI agents

Nathan Rosenberg, Chief Brand & Marketing Officer at Virgin Voyages, said his company has increased email marketing open rates to 30%, with click-through rates of 20%, since deploying Gemini Enterprise AI agents alongside his copywriting team.

Virgin Voyages was cited as one of the flagship customers of Google Cloud's Gemini Enterprise when it was launched in October. Virgin Voyages said it deployed more than 50 specialized agents on Gemini Enterprise and has more on tap.

Speaking on a webinar for analysts, Rosenberg said "Email Ellie," the first agent deployed on Gemini Enterprise, combines the knowhow of Virgin Voyages' creative team with hyper-personalized marketing outreach. The AI agent is trained on internal brand frameworks and automates Virgin Voyages' tone, which is cheeky much like Rosenberg. In addition, Email Ellie has cut campaign copy creation time by 40%.

"At Virgin globally, we're very focused on human experiences and our people," said Rosenberg, who noted that Virgin owner Richard Branson consistently says that "if you actually take care of your people, they will take care of your customers, and your customers will basically deliver the results."

"The most interesting thing in the relationship with Google is that they're a clever group of people who are very techy, and we're a human-centered organization," explained Rosenberg. "There's this perfect blend that says this isn't about the technology. Don't get me wrong. It's really helpful for us, but we don't start the conversation about technology. We start the conversation with what is the problem we're trying to solve, and how do we really understand what the customers want, and how do we deliver that?"

Rosenberg, who quips he barely knows how to use a copier, said a meeting with Google Cloud to talk Vertex AI and Gemini made it clear there's potential for his teams. "I hate the phrase of AI native, because it's really AI supporting," said Rosenberg. "But we have changed our entire organization. The advantage is when your people start to understand how it frees them up from the day-to-day drudgery and allows them to deliver incredible experiences."

While the Virgin Voyages buildout on Gemini Enterprise is still in progress, Rosenberg offered a set of lessons learned. Here's a look:

  • Think about outcomes more than saving money. "The problem with the AI conversation is that it is always about saving money or reducing headcount," said Rosenberg. "That's not what it's about. Rather than reducing our creative headcount we increased it. We've realized the tools are allowing us to scale. I have to keep going to my CFO and say I need more people because that's where the work is really being delivered. Understand what AI can do for you and how it can humanize contact more than you realize."
  • 50 AI coworkers. Rosenberg said Virgin Voyages' AI agents are viewed as coworkers that can take away the tasks that eat up human time. He said Virgin Voyages is using Gemini Enterprise to surface terms and conditions and ship changes to free up creative teams.
  • Frameworks matter. Rosenberg said Gemini Enterprise's guardrails and frameworks enable his team to focus. If a framework effectively eliminates distractions and prioritizes work, then there's a structure creative teams can scale. "At first it was chaotic because some of us never worked with agents. We weren't sure what to do with them, but with manifested agents in a structure the team was blown away in a good way," said Rosenberg. "I don't tell the team what to build or you should solve this problem. They are working out what agent they want to partner with and naming it."
  • Cultural returns. Yes, Virgin Voyages sees hard returns, but one cultural benefit is that Rosenberg's teams have more time to focus on the efficacy of campaigns in a way they couldn't just seven months ago. His team is looking at synthetic personas, asking questions and testing content with probabilistic scoring. "When they come to present work to me, I can't win the argument anymore because it's been tested," he said.
  • Ownership. Rosenberg said departments within companies should take ownership of AI and its tools. "As a marketer, it is the most exciting time ever to be in marketing, because this revolutionary AI is owned by marketing and sales more than technology people. The tech folks are there to help make the dreams come true," said Rosenberg.

Rosenberg said Virgin Voyages plans to scale its set of AI agents. "It is working, so much so that the copywriting team ended up producing at least 15 new agents to help them on a range of different things that are based on the incredible experience," he said. "What I love for our business is that AI isn't about cost cutting. It's about driving revenue and growth through mass personalization at scale."


Google Cloud's Ironwood ready for general availability

Google Cloud's Ironwood ready for general availability

Google Cloud said its seventh generation Tensor Processing Unit (TPU), known as Ironwood, will be generally available soon as the company also outlined new Arm-based Axion instances.

The announcement highlights how hyperscalers, primarily Google Cloud and Amazon Web Services, are deploying custom chips for AI workloads to diversify from Nvidia and improve price-performance ratios. Ironwood was announced at Google Cloud Next earlier this year.

AWS fired up its massive Project Rainier complex for Anthropic and then landed OpenAI, which is immediately procuring GPUs from AWS. AWS will announce Trainium3, which will feature a big performance boost, at re:Invent 2025 in December.

With that backdrop, Google Cloud, which already holds a custom processor lead, struck with Ironwood. In a blog post, Google Cloud noted that its latest TPUs are designed for what it calls "the age of inference." The adoption of AI agents will require optimization and strong price-performance.

Google Cloud, which counts OpenAI and Anthropic as customers, announced the following:

  • Ironwood general availability with 10x peak performance over TPU v5p. The processor has 4x performance per chip for training and inference relative to TPU v6e, or Trillium.
  • Anthropic will be a user of Ironwood instances.
  • Axion instances. Google Cloud announced N4A, a cost-effective virtual machine, is now in preview. N4A offers 2x better price-performance compared to current generation x86 virtual machines. Axion is based on Arm's Neoverse CPUs.
  • C4A metal, which is Google Cloud's first Arm bare metal instance, will be in preview soon.
  • Google Cloud is using Ironwood TPUs as a key layer of its AI Hypercomputer, which will scale up to 9,216 chips in a superpod.

The upshot is that the AI inference market is going to be much more competitive than the training market, which is dominated by Nvidia. Custom silicon, AMD, Intel and Qualcomm will all be in the mix.
