Results

Costco approaching AI in ‘very Costco way’

Costco is weaving artificial intelligence throughout its operations but doing it in what CEO Ron Vachris calls a "very Costco way."

"AI is also being interwoven into our business where we believe it can strengthen our model," said Vachris. "We're approaching it in a very Costco way, practical, member-focused and grounded in tangible business value."

Indeed, Costco isn't one for grand enterprise AI proclamations. Its IT architecture is built on Microsoft Azure, with Google Cloud's Vertex AI platform for AI and a heavy dose of proprietary applications. Costco Travel leverages Microsoft Dynamics 365 on Azure. Costco's app is functional but doesn't dazzle either. That's the beauty of Costco, though. The big-box retailer has a cult-like following and a private-label brand, Kirkland Signature, not to mention the hot dog and soda combo for $1.50.

Yet that following and experience are why there's so much upside in leveraging enterprise technology and AI.

For Costco, e-commerce and digital efforts can provide a lot of revenue upside. For instance, Costco's e-commerce operations accounted for 7% of net sales in 2025. Including Costco Travel, digital sales were 10% of 2025 revenue. For comparison, Walmart's Sam's Club had e-commerce sales that were more than 27% of revenue for the nine months ended Oct. 31.

Costco said in its annual filing with the Securities and Exchange Commission that it needs to close the digital gap. "We must keep pace with changing member expectations and new developments by our competitors. Our members are increasingly using mobile phones, tablets, computers, and other devices to shop and otherwise interact with us. We are making investments in our websites and mobile applications. If we are unable to make, improve, or develop relevant member-facing technology in a timely manner, our ability to compete and our results of operations could be adversely affected," the company said.

In 2017, Costco said it would build its hybrid cloud operations on Microsoft Azure. Costco is also using Google Cloud's Vertex AI and data cloud services and recent job postings for developers emphasize know-how on the hyperscaler's offerings.

Vachris, speaking on Costco's first quarter earnings call, said:

"This isn't about technology for technology's sake, it's about using technology to strengthen the fundamentals that makes Costco who we are, increasing member loyalty, driving top-line sales and improving efficiency in our operations so that we can bring goods to market at the lowest possible price."

Vachris outlined a few key use cases for Costco. He said the implementation of membership scanning at entry, the Costco Digital Wallet and pre-scanning of small- to medium-sized baskets is leading to a better member experience and improved productivity.

"The warehouses that are first to adopt this pre-scan technology have shown checkout speed improvements of up to 20%. And across our U.S. warehouses overall, we achieved record levels of checkout productivity in the final weeks of the quarter," said Vachris.

Costco is also focusing on personalization in its apps and product recommendations. Vachris said Costco is seeing a sales lift due to more relevant product recommendations.

On the back end, Costco is focused on leveraging AI in its inventory systems.

"An early use case has involved integrating AI into our pharmacy inventory system. This system now compares prescription drug pricing across vendors and autonomously and predictively reorders inventory, improving our in-stocks to more than 98%," said Vachris. "This change has played an important role in helping us achieve mid-teen growth in pharmacy scripts filled and has improved margins while lowering prices to our members."

Now Costco is looking to improve its gas business with AI tools. "We're now in the process of deploying AI tools in our gas business, which we expect will improve inventory management and drive incremental sales by ensuring we are always delivering the best value to our members," said Vachris.

Rolling out AI-driven tools at Costco is the result of fundamental changes that have been underway for multiple years. Vachris said:

"Technology and bringing the company along has been a focus for several years. A couple of years ago, we really focused on our fundamental base systems and our core systems behind the scenes that will allow us to build for the future. And so, we're now coming to a fruition where we're starting to see the benefits of that hard work of all the backroom systems that we had to build that are now coming to light and coming to the front phase for our members. We feel that technology is going to be part of the -- big part of our future.

Our mantra is to bring goods to market at the lowest possible price. And we think AI has a great asset to that, and it really can help us become a much better merchant out there."

Indeed, at Constellation Connected Enterprise 2025, Indy Cho, VP Analytics and Data Products, said the retailer is looking to create a flywheel between data, inventory, demand and pricing. "Understanding the demand at a localized level is an incredibly challenging task. Every time you shop that's a demand signal. We get that back to buyers, and they go through a tremendous amount of analysis to figure out how much product needs to get to the right location," said Cho. "The bar for a higher level of accuracy is absolutely necessary but that doesn't mean we don't stop experimenting."

Productivity gains and expansion

Gary Millerchip, Costco CFO, said the technology investments have led to gains that have held the line on expenses. "These productivity improvements fully offset wage investments and the impact of extended operating hours and would have created positive leverage in the quarter had we not experienced higher health care costs. Central was lower or better by 3 basis points," said Millerchip.

In its first fiscal quarter, Costco delivered revenue of $66 billion, up 8.2% from a year ago. Net income in the quarter was $2 billion, or $4.50 a share. Same-store sales in the first quarter were up 6.4%, comparable tickets were up 3.2% and digitally enabled sales grew 20.5%.

For its fiscal year ended Aug. 31, Costco's gross margins were 11.12% of sales, up from 10.92% in fiscal 2024. Sales, general and administrative expenses as a percentage of sales were 9.25%, up slightly from 9.14% in 2024. For fiscal 2025, Costco reported net income of $8.1 billion, or $18.21 a share, on revenue of $275.23 billion.

In addition, Costco's efficiency push has enabled it to expand into new revenue streams that complement its core retail business. The retailer ended the first quarter with 81.4 million paid memberships that form the base of its data flywheel.

Costco has a travel business, a retail media unit and financial services. Millerchip noted that the travel business and media unit were tailwinds in the first quarter. These businesses also create a data flywheel that can be leveraged.

"The first priority with personalization is to deliver a better member experience and deliver more targeted relevant messaging so we drive more items in the basket, more visits to the warehouse, more visits online. And as you do those things, it just creates an even more compelling value proposition for our media partners. While we're building and executing on that capability."

Takeaways

  • Costco is a fine example of pragmatic AI and IT.
  • Costco's anti-hype approach to AI is refreshing given the hype-fest today.
  • The real differentiation for Costco is in inventory, pricing and warehouse efficiency.
  • The stack for Costco isn't flashy or bleeding edge but focused on value.
  • Costco's experience and business logic are what's proprietary. Technology is just a means to deliver.
  • If Costco can simply close the technology gap with rivals, it has plenty of digital commerce upside to complement its real-life customer experiences.

Nvidia launches Nemotron 3 open models to enable multi-agent systems

Nvidia launched its Nemotron 3 family of open models as it aims to provide an efficient set of large language models that can be used by enterprises to customize and deploy in multi-agent systems.

The company said it is releasing open models, training data and libraries. Nvidia, which doesn't have to worry about monetizing models since it cashes in on GPU sales, is focused on providing tools to build agentic AI systems, which will use multiple LLMs focused on various tasks.

Nvidia is also filling a major US open-model gap. Meta's Llama hasn't been updated as the company retools its AI unit, and Meta may be shifting its focus to proprietary models.

Nemotron 3 models will come in three sizes: Nano, Super and Ultra. Nemotron 3 Nano provides 4x higher throughput than Nemotron 2 and delivers the most tokens per second for multi-agent systems at scale.

The Nemotron 3 Super and Ultra models use a hybrid latent mixture-of-experts (MoE) architecture.

Key points about the Nemotron 3 models include:

  • Nemotron 3 is aimed at multi-agent use cases with a focus on issues such as context drift and high inference costs.
  • Nvidia argues that the open approach gives enterprises transparency, and that transparency will give developers the trust to automate workflows.
  • Customization is critical and the open approach enables more specialization.
  • Nemotron 3 Nano is a small, 30-billion-parameter model that activates up to 3 billion parameters at a time. Nemotron 3 Nano is designed for efficiency and tasks including software debugging and content summarization.
  • Nemotron 3 Super is a high-accuracy reasoning model with approximately 100 billion parameters and up to 10 billion active per token. Nemotron 3 Super is designed for multi-agent applications.
  • Nemotron 3 Ultra is a large reasoning engine with about 500 billion parameters and up to 50 billion active per token. Nemotron 3 Ultra is designed for complex AI applications.
  • Super and Ultra use Nvidia's 4-bit NVFP4 training format on the NVIDIA Blackwell architecture.
  • Nvidia released three trillion tokens of new Nemotron pretraining, post-training and reinforcement learning datasets as well as safety datasets.
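The total-versus-active parameter split in these mixture-of-experts models (e.g., roughly 30B total with up to 3B active for Nano) comes from routing each token to only a few experts. A minimal top-k routing sketch illustrates the idea; the expert counts, sizes and shared-parameter fraction below are illustrative assumptions, not Nvidia's actual Nemotron 3 configuration:

```python
# Illustrative sketch of why an MoE model activates only a fraction of its
# parameters per token. Sizes and expert counts are hypothetical.
import math

def top_k_route(logits, k):
    """Pick the k experts with the highest router scores for one token."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    chosen = ranked[:k]
    # Softmax over the chosen experts' logits gives mixing weights.
    exps = [math.exp(logits[i]) for i in chosen]
    total = sum(exps)
    return {i: e / total for i, e in zip(chosen, exps)}

def active_params(total_params, num_experts, k, shared_fraction=0.1):
    """Rough active-parameter count: shared layers plus k of num_experts experts."""
    shared = total_params * shared_fraction
    per_expert = (total_params - shared) / num_experts
    return shared + k * per_expert

# One token's router scores over 8 hypothetical experts; route to the top 2.
weights = top_k_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3], k=2)
print(sorted(weights))  # [1, 4] -- experts 1 and 4 win the routing

# A 30B-total model routing each token to 2 of 8 experts activates far fewer:
print(round(active_params(30e9, num_experts=8, k=2) / 1e9, 2))  # 9.75 (billion)
```

Fewer active parameters per token is what drives the throughput and inference-cost claims for multi-agent deployments, where many agents generate tokens concurrently.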

Nvidia outlined multiple early adopters, including Accenture, CrowdStrike, Oracle, Palantir and ServiceNow.

The game plan for Nvidia is to use Nemotron 3 to give developers options to mix and match open models with proprietary offerings to optimize costs.

Nemotron 3 Nano is available now on Hugging Face and inference service providers including Baseten, DeepInfra, Fireworks, FriendliAI, OpenRouter and Together AI. Nemotron is also available on platforms from Couchbase, DataRobot, H2O.ai, JFrog, Lambda and UiPath. And Nemotron 3 Nano is available on AWS via Amazon Bedrock with availability on Google Cloud, CoreWeave, Crusoe, Microsoft Foundry, Nebius, Nscale and Yotta on deck.

According to Nvidia, Nemotron 3 Super and Ultra will be available in the first half of 2026.


Freshworks acquires FireHydrant, eyes AI-native IT operations management

Freshworks said it will acquire FireHydrant in a move that will build out its IT service and operations management efforts.

Terms of the deal weren't disclosed.

FireHydrant specializes in AI-driven IT operations management software. The plan for Freshworks is to combine FireHydrant with its ITSM platform. FireHydrant was backed by Menlo Ventures and Salesforce.

Here's a look at FireHydrant's platform documentation. FireHydrant, founded in 2018, offers a complete alerting and incident management system. Using AI, FireHydrant automates manual workflows, continually looks for signals about upcoming problems, standardizes processes, alerts and pages responders and integrates with multiple monitoring systems.

Dennis Woodside, CEO and President of Freshworks, said FireHydrant will accelerate the company's vision of unifying IT and employee experiences. With FireHydrant, Freshworks can better compete with ServiceNow and PagerDuty.

The companies said they will be able to provide a unified AI-native experience that includes:

  • Unified visibility for finding IT problems and fixing them.
  • Fast responses using FireHydrant's AI to summarize incident context and playbooks to deal with them.
  • Proactive IT and asset management.

Freshworks said the FireHydrant purchase will close in the first quarter of 2026.


Rimini Street’s second act will include heavy dose of agentic AI, UX

Rimini Street's first act took 20 years, but the second one will move much faster as the company aims to layer agentic AI over legacy enterprise resource planning systems. The strategy: Enable enterprises to accelerate their automation and AI plans while relegating reliable yet legacy systems to plumbing.

At its Dec. 3 Analyst and Investor Day, Rimini Street held what could be called a long overdue roadshow. Rimini Street was founded in 2005 with the mission of providing maintenance and support services to ERP customers of Oracle and SAP. The win for customers: Rimini Street could offer maintenance at a lower cost and enable enterprises to put off ERP upgrades.

As you can imagine, Rimini Street's value prop didn't go over well with ERP vendors. The Oracle-Rimini Street legal battle started in 2010 and ended July 7 in a settlement. During that 15-year legal fight, Rimini Street went public via a special purpose acquisition company (SPAC) merger in 2017. Yes, Rimini Street SPACed well before it became trendy.

Rimini Street's second act, which will run from 2026 to 2030, includes an AI spin to its traditional ERP services. The company plans to maintain ERP systems, give customers the ability to put off costly upgrades and relegate them to systems of record plumbing. The new UI for these legacy systems will be agentic AI. The company launched Rimini Street Agentic UX, which has been deployed across multiple customers, in a move that aims to abstract away ERP systems by focusing on AI workflows and processes.

In many ways, Rimini Street Agentic UX is the product of a year-old partnership with ServiceNow. Rimini Street and ServiceNow have a broad partnership to use the Now Platform to enable AI agents to run on legacy infrastructure. ServiceNow and Rimini Street formed a partnership in late 2024 designed to move processes forward with AI and now the two companies have 26 joint pilots underway.

Rimini Street CEO Seth Ravin laid out the company's strategy. Rimini Street generates more than $400 million in recurring revenue, serves clients with two-minute response times or less and has a diverse customer base that's more than 50% international.

Ravin's case for Rimini Street revolves around evolving from providing maintenance and support for various enterprise systems to enabling AI. Rimini Street today supports legacy systems such as SAP, Oracle, Dayforce and VMware as well as SaaS applications including Salesforce, Workday, ServiceNow and multiple open-source databases. Going forward, Rimini Street is looking to build on that support and make enterprise AI deliver returns.

"Think of us as AI for the real world. We're the guys who are helping to drive down costs. We're helping to automate processes, streamline businesses and drive #1 problem of every single company we work with, profits," said Ravin.

Ravin argued that boards of directors globally are mandating AI and transformation but also cutting budgets. "How do I make ends meet?" asked Ravin. "CIOs wander out of these meetings punch drunk."

The needle CIOs need to thread is innovation vs. cost cutting. Integration of systems is also a big issue. "We have all these great systems now. The problem is they really don't work together. You've probably heard they're all integrated and there's all these integration tools. It's just not the case. In most organizations, these systems are still very separate," said Ravin, who said every vendor wants customers on the latest release and "CIOs literally cannot do it all."

Ravin estimated that about 9% of the average budget is spent on innovation. "This is a formula for disaster in the long term," he said.

ERP at its technical limits

Rimini Street's Ravin said "we believe that ERP software is reaching its technical limitation."

The company will support ERP for years and decades, but a transition to an AI paradigm is coming. "We believe agentic AI is going to be the downfall of the software we see in the world today and it's happening fast," said Ravin.

Rimini Street Agentic UX is designed to be a simplified window into ERP systems.

And Rimini Street certainly has the installed base to prove its Agentic UX approach will work. Rimini Street manages the ERP systems across automaker Hyundai. The company also serves companies such as Catalyst Brands, which has rolled up companies such as Aeropostale, JCPenney, Eddie Bauer, Lucky, Nautica and Brooks Brothers, and KnitWell, a private company that owns eight apparel brands including Ann Taylor, LOFT, Talbots and Chicos. Catalyst and KnitWell each have a handful of ERP systems to roll up. Agentic UX could give these companies an exit ramp to ERP consolidation and upgrades.

Todd Treonze, VP Integration and Corporate Systems at Catalyst Brands, said his company launched a year ago with multiple brands on their own ERP systems. Catalyst Brands consolidated support and maintenance and now plans to build on top of them as systems of records. "I think that there's a really big future opportunity for us here, to reinvest some of the savings we're seeing at the support level and put into the innovation side of the business," said Treonze.

It's a similar story for KnitWell.

"An ERP platform is almost the perfect platform to put agentic AI on top of it. It is well structured. The data underneath is also well structured. And it's all around business rules. There is no easier use case to automate with Agentic AI," said Jaap van Riel, CTO of KnitWell "I really hope I never have to upgrade an ERP in my life again. It's not good for my sleep, and it's not good for kind of the P&L either."

Now, this phase-out of ERP software as the work interface will take time. Ravin said the reality is that transitions in the enterprise must be orderly and there are processes to consider.

For enterprises to find money for innovation, they'll need to run those ERP processes well. However, don't confuse ERP processes for ERP software.

"We believe we could take 40% of the labor cost out of running the processes that run a business or government agency. That is monumental in terms of driving bottom line profits, streamlining operations and leapfrogging over the competitors with technology. This is what AI can do in the real world, in ERP and transactions," said Ravin.

Ravin's pitch is something we're hearing anecdotally from customers and vendors. The reality is that AI is changing the enterprise technology cycle to one that revolves around more efficient processes and use cases. The software matters, but optimized processes matter more.

The game will revolve around abstracting ERP software away so you can focus on the process.

The long AI game (sped up)

CIOs will need to realize it's a long game. AI agents will require governance, protocols and orchestration. Processes will be retooled.

"There are a lot of questions. We still have a lot of things to do. This is not all baked yet," said Ravin. "You have to recognize we're in the middle part of this inning in getting the technology figured out and then deployed in the real world. That's why we keep focusing on process, not the software."

To Ravin, enterprises will need to focus on process over software because "we don't have years to change."

Enterprises will need to think through the 10 processes that run businesses. Rimini Street's plan is to help enterprises extract the processes out of the ERP software.

"We are going to extract these processes out of the ERP software and move them up into the Agentic AI ERP like it's surgery. And eventually, eventually, there won't be a need for the underlying software anymore. Because we will have moved piece by piece over time, from one paradigm of technology to another," said Ravin. "Their tools, our know-how, our knowledge, our ability to go to market with credibility because we know these processes. We'll just put the new technology right over the top. We won't take the risk of ripping and replacing your massive global system. We then will move pieces one at a time."

Rimini Street's portfolio includes a methodology called Smart Path, which revolves around a gradual move to a process-driven system and out of the ERP software game.

Vijay Kumar, Chief Innovation Officer at Rimini Street, said Smart Path revolves around providing a foundation and roadmap for the move to agentic AI ERP. Core tenets include:

  • Use legacy ERP systems for what they're good at: Data.
  • Layer a framework on top of it with Agentic UX.
  • Leverage an architecture that is headless. "We're going to keep the SAP systems the way they are. We're going to preserve the data, the customizations and a lot of the work that customers have done," said Kumar. "What we're doing is really modernizing the front end of it, adding AI agents, which is absolutely critical, improving the UX, and finally being able to automate."
  • Be prepared to evolve architecture since it's early in the AI game.
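The headless pattern Kumar describes can be sketched as a thin adapter that fronts the legacy ERP while AI agents run the workflow on top, leaving the ERP as the system of record. The class names and approval logic below are illustrative assumptions, not Rimini Street's actual Agentic UX implementation:

```python
# Hedged sketch of the headless agentic-over-ERP pattern: agents talk to a
# facade, the ERP data stays untouched underneath. Names are hypothetical.

class LegacyERPAdapter:
    """Headless facade: exposes ERP records without exposing the ERP UI."""
    def __init__(self):
        # Stand-in for records living in the underlying ERP system.
        self._records = {"PO-1001": {"status": "pending", "amount": 4800}}

    def get(self, record_id):
        return dict(self._records[record_id])  # copy out; ERP owns the data

    def update(self, record_id, **fields):
        self._records[record_id].update(fields)  # ERP remains source of truth

class ApprovalAgent:
    """An agent that runs an approval process on top of the adapter."""
    def __init__(self, adapter, auto_approve_limit=5000):
        self.adapter = adapter
        self.limit = auto_approve_limit

    def process(self, record_id):
        rec = self.adapter.get(record_id)
        if rec["status"] == "pending" and rec["amount"] <= self.limit:
            self.adapter.update(record_id, status="approved")
            return "approved"
        return "escalated to human reviewer"

erp = LegacyERPAdapter()
agent = ApprovalAgent(erp)
print(agent.process("PO-1001"))      # approved
print(erp.get("PO-1001")["status"])  # approved
```

The design point is that the agent layer can be swapped or evolved (per the last tenet above) without ripping out the ERP beneath it, because all state changes still flow through the system of record.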

Rimini Street reckons it can get enterprises to a pilot in 30 days. Kumar said the ideal customer is one that has complex workflows, manual workflows and things that are hard to automate. The end state may feature agentic AI ERP apps to handle processes and use cases. "Once we start building credibility app by app, use case by use case, we're going to layer agentic AI across the enterprise," said Kumar.


From Drift to Discipline: How AI-Powered Leaders Hone Strategy & Reinvent Enterprise Software | DisrupTV Ep. 421

How AI-Powered ERP and Human-Centric Leadership Are Reshaping Enterprise Software: Insights from DisrupTV

Today’s DisrupTV episode brought together two powerhouse voices shaping the future of enterprise innovation: Simon Paris, CEO of Unit4, and Geoff Tuff, transformation strategist and co-author of Hone. Their conversation dug into one of the most urgent questions companies face today: How do leaders adapt to exponential change while staying human-centric in an AI-driven world?

From AI-powered ERP systems to continuous improvement as an alternative to large-scale transformation, the discussion offered actionable insights for CEOs, CIOs, and forward-thinking leaders navigating the next era of work.

The Guests: Leaders Shaping the Future of Work

Simon Paris joined the show from London, bringing his expertise as CEO of Unit4, a global enterprise software company known for its AI-enhanced ERP, HCM, and financial planning solutions built for people-centric industries.

Geoff Tuff—author, strategist, and co-creator of the book trilogy culminating in Hone—offered a fresh lens on how leaders can adapt to accelerating change. His work focuses on helping executives drive meaningful progress without relying on outdated models of transformation.

Unit4’s Vision: AI That Gives People Time Back

Paris reflected on his transition from Finastra to Unit4, highlighting a shared purpose across both organizations: making work more meaningful by giving people time back.

Unit4’s mission is centered on enabling educators, civil servants, and service professionals to focus on what matters—not administrative tasks. With rapid expansion across North America and Asia, the company is doubling down on AI-powered capabilities that make enterprise systems proactive rather than reactive.

A standout concept from the episode was self-driving ERP—software that predicts needs, automates decisions, and engages employees through natural, conversational interfaces.

Ava: The Conversational AI Assistant From Unit4

Paris introduced Ava, Unit4’s advanced virtual assistant designed to streamline everyday workflows. Ava doesn't just respond to commands—it orchestrates decisions, automates routine tasks, and learns from context to support employees at every level.

This conversational AI approach aims to:

  • Reduce administrative burden
  • Improve decision-making
  • Increase employee engagement
  • Make enterprise systems intuitive across generations

Paris emphasized that successful AI adoption requires continuous experimentation and learning, not one-time deployments.

Leadership, Meaningful Work & Customer Obsession

Paris highlighted Unit4’s leadership culture, grounded in servant leadership, customer obsession, and meaningful work. The company actively recruits talent motivated by purpose-driven service.

A crucial insight:

  • Leaders must create safe spaces for experimentation and learning from failure.
    This environment is essential for organizations adopting AI in a human-centric way.

Geoff Tuff: Why “Hone,” Not Transformation, Is the Future

Tuff introduced the concept of hone—a continuous improvement model designed for a world where change is exponential.

Unlike traditional transformation, hone emphasizes:

  • Small, continuous adjustments
  • Built-in adaptability
  • Frequent hypothesis testing
  • Systems that evolve as quickly as market conditions

His message was clear: Continuous improvement outperforms one-time transformations in a world defined by constant acceleration.

Management Systems Drive Human Behavior

Both guests emphasized that management systems—not strategy decks—shape real behavior inside an organization.

Tuff urged CEOs to think of themselves as chief system designers, responsible for:

  • How decisions flow
  • Which behaviors are incentivized
  • How teams adapt
  • What data guides execution

Paris added that understanding human behavior within these systems is just as important as the technology that powers them.

Practical Steps CEOs Can Implement Today

Tuff outlined a set of tactical steps leaders can use now:

  • Identify the behaviors required to win
  • Examine whether existing systems reinforce or hinder those behaviors
  • Make minimally viable adjustments and test their impact
  • Stay close to teams and customer-facing decisions
  • Treat continuous experimentation as a core leadership habit

These moves help organizations build adaptability without disruption.

Key Takeaways

  • AI-powered ERP is shifting from reactive systems to self-driving, predictive platforms.
  • Human-centric design remains essential, especially for industries where people—not processes—are the core.
  • Conversational AI (like Unit4’s Ava) is reshaping how employees interact with enterprise software.
  • Continuous improvement (“hone”) is more effective than traditional transformation in an era of exponential change.
  • Management systems drive behavior, and CEOs must actively design, refine, and realign them.
  • Leaders who create safe spaces for experimentation will accelerate AI adoption and organizational learning.

Final Thoughts

The future of enterprise software isn’t just about automation or AI—it’s about empowering people, improving decision-making, and redesigning systems to adapt continuously. As Simon Paris and Geoff Tuff made clear, organizations that pair human-centric leadership with AI-driven innovation will be the ones best positioned to thrive.

With AI accelerating every aspect of work, leaders must learn to hone—realign, adjust, and evolve—rather than rely on large-scale transformation efforts that can’t keep pace. The evolution of ERP, HCM, and management systems is already underway, and those who lean in will define the next decade of enterprise innovation.

Stay tuned: next week R "Ray" Wang and Vala Afshar will unveil the Top 25 Books of 2025, a can't-miss list for leaders looking to stay ahead.


TCS Acquires Coastal Cloud: Filling a Critical Gap for Salesforce’s Agentic Future

TCS has announced the acquisition of Coastal Cloud, a leading US-based Salesforce Summit Partner, in a $700 million all-cash transaction. The deal moves TCS into the top tier of Salesforce advisory and consulting firms globally and strengthens its ability to deliver AI-first, agent-driven transformation programs. Coastal Cloud brings Salesforce-native advisory depth, strong mid-market relationships, and close alignment with Salesforce product leadership through its role on the Salesforce Partner Advisory Board.

What Salesforce buyers are increasingly looking for

In conversations with enterprise buyers, the focus has shifted beyond implementation capacity. Salesforce customers are looking for partners that can connect platform decisions to business outcomes, design operating models around AI and agents, and scale these programs across regions and business units. As Salesforce advances Agentforce 360, buyers consistently point to the need for help with data readiness, governance, integration, and continuous optimization. This has widened the gap between boutique Salesforce specialists with deep platform expertise and large GSIs that bring scale but have often lacked senior Salesforce advisory leadership.

How this acquisition fills a gap for TCS customers

This is where the Coastal Cloud acquisition matters for TCS. In buyer discussions, TCS has been viewed as strong in enterprise scale, industry context, and global delivery, but Salesforce programs often started deeper in execution rather than advisory. Coastal Cloud adds that missing front-end capability. For TCS customers, Salesforce engagements can now begin with Salesforce-native business and industry advisory and then scale globally with consistent delivery, AI engineering, and governance. This becomes increasingly important as Salesforce programs shift from CRM optimization to agent-driven, cross-functional transformation.

Why this matters for Coastal Cloud customers

From a buyer perspective, Coastal Cloud customers have historically valued deep Salesforce expertise and close partnership. However, in conversations about scaling, global rollout, and integration with enterprise platforms, limitations often emerged. With TCS, these customers gain access to global delivery, vertical accelerators, and enterprise-grade AI capabilities, while retaining Salesforce depth and continuity. This is particularly relevant as Agentforce programs extend across sales, service, marketing, and revenue operations.

[Source: Salesforce]

Agentforce 360, GSIs, and the competitive landscape

Agentforce 360 signals a shift toward agents operating across business workflows, not just automating tasks. In buyer conversations, it is clear that delivering these programs requires process redesign, data unification, security, governance, and operational ownership at scale. This favors GSIs. Accenture and Deloitte have long paired Salesforce depth with strong business consulting. Cognizant and Infosys have invested heavily in Salesforce delivery and platform skills but are often perceived as more execution-led. Coastal Cloud gives TCS a clearer path to compete across this spectrum by strengthening Salesforce-native advisory leadership alongside its global delivery engine. The differentiator, as buyers note, will be who can operationalize agents reliably across the enterprise, not who can deploy them fastest.

What buyers should ask now

  • Does my Salesforce partner combine Salesforce-native advisory depth with global delivery scale for agent-driven programs?
  • How will Agentforce agents be governed, monitored, and evolved across regions and business units?
  • Can Salesforce agents be integrated with enterprise data, security, and non-Salesforce systems?
  • What industry-specific use cases and accelerators exist beyond generic Agentforce demonstrations?
     

Closing perspective

This acquisition reflects a clear market signal emerging in buyer conversations. Enterprises want fewer handoffs, stronger advisory up front, and partners that can carry agentic programs from design through sustained execution. With Coastal Cloud, TCS is closing a meaningful capability gap and positioning itself more directly for the next phase of Salesforce-led, agent-driven enterprise transformation.


Broadcom CEO comments highlight build vs. buy AI debate

The companies that are looking to leverage artificial intelligence for competitive advantage are increasingly choosing to go custom. It's build over buy at massive scale.

Broadcom's fourth quarter earnings results were an eye-opener for the industry after CEO Hock Tan laid out a few interesting tidbits. Broadcom is benefiting from XPUs, its custom AI accelerators. Google's TPUs, which have emerged as a threat to Nvidia, account for a big chunk of Broadcom's revenue.

Tan also revealed that Anthropic is buying Google Ironwood TPUs, the latest generation. Some choice quotes:

  • "Our custom accelerated business more than doubled year-over-year, as we see our customers increase adoption of XPUs, as we call those custom accelerators in training their LLM and monetizing their platforms through inferencing APIs and applications."
  • "These XPUs, I may add, are not only being used to train and inference internal workloads by our customers, the same XPUs in some situations have been extended externally to other LLM peers, best exemplified at Google, where the TPUs use in creating Gemini, have also been used for AI cloud computing by Apple, Coherent and SSI as an example."
  • "Last quarter, Q3 '25, we received a $10 billion order to sell the latest TPU Ironwood racks to Anthropic. And this was our fourth customer that we mentioned. And in this quarter Q4, we received an additional $11 billion order from the same customer for delivery in late 2026."
  • "That does not mean our other two customers are using TPUs. In fact, they prefer to control their own destiny by continuing to drive their multiyear journey to create their own custom AI accelerators or XPU racks, as we call them. And I'm pleased today to report that during this quarter, we acquired a fifth XPU customer through a $1 billion order placed for delivery in late 2026."

The big takeaway is that custom is the thing right now. For AI workloads at scale, this build over buy conclusion isn't that surprising. Google's TPUs are gaining favor. AWS launched its Trainium 3 processor and outlined Trainium 4. These hyperscalers are going custom to optimize for costs and monetize as soon as they stand up data centers.

Tan said customers are choosing to go custom for multiple reasons, but price-performance is the big one. Rivian also noted that agility is a big factor. Rivian's custom AI processor enables it to get started on software well before the chip lands.
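Price-performance is ultimately an arithmetic argument. As a rough illustration, the trade-off can be framed as capex per unit of sustained compute; all figures below are hypothetical placeholders, not vendor specs:

```python
# Toy price-performance comparison between merchant GPUs and custom
# accelerators (XPUs). Every number here is a hypothetical placeholder;
# the point is the shape of the trade-off, not the figures themselves.

def cost_per_effective_pflop(unit_price_usd: float,
                             peak_pflops: float,
                             utilization: float) -> float:
    """Dollars of capex per PFLOP of sustained (utilized) compute."""
    return unit_price_usd / (peak_pflops * utilization)

# Hypothetical inputs: custom silicon trades lower peak specs for a
# lower unit price and higher utilization on the one workload it targets.
merchant_gpu = cost_per_effective_pflop(30_000, 2.0, 0.40)  # general purpose
custom_xpu = cost_per_effective_pflop(12_000, 1.5, 0.60)    # purpose-built

print(f"merchant GPU: ${merchant_gpu:,.0f} per sustained PFLOP")
print(f"custom XPU:   ${custom_xpu:,.0f} per sustained PFLOP")
```

Under these assumed inputs the custom part wins on capex per sustained PFLOP, which is the argument hyperscalers make; change the utilization or pricing assumptions and the merchant GPU can win instead.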

The move toward custom components for AI systems is notable, but the market is immature. When markets are young, you tend to build your own stuff. Ask Amazon and Google. The big question is whether this custom-all-the-time approach lasts. Tan provided a bit of history when asked about the future of the XPU.

He said:

"Don't follow what you hear out there as gospel. It's a trajectory. It's a multiyear journey. And many of the players, and there are not too many players, doing LLMs want to do their own custom AI accelerator for very good reasons. What you can put in hardware, if you use a general purpose GPU, you can only do in software and kernels. You can achieve performance so much better in the custom, purpose-designed, hardware-driven XPU."

Will that mean custom approaches will be dominant over time? Not necessarily. Tan said:

"Will that mean that over time, they all want to go do it themselves? Not necessarily. And in fact, technology in silicon keeps updating, keeps evolving. And if you are an LLM player, where do you put your resources in order to compete in this space, especially when you have to compete at the end of the day against merchant GPUs that are not slowing down in the rate of evolution. I see this concept of custom tooling as an overblown hypothesis, which, frankly, I don't think will happen."

These comments are notable if you expand them to the broader enterprise. My take:

  • Buy over build makes a lot of sense right now for enterprises, though not necessarily across the whole stack. If you can use AI to code and transform, it's possible that you don't need to pay your SaaS tax. As for hardware, you'll consume custom compute from cloud providers.
  • Agentic AI interfaces could relegate a lot of your applications to plumbing. See: The enterprise LLM questions you should be asking | Agentic AI: Is it really just about UX disruption for now?
  • OpenAI and Anthropic see this trend and are increasingly tapping into enterprise processes. See: AI agents, automation, process mining starting to converge
  • Vendors will tell you repeatedly that building your own systems is a fool's errand, but if the focus is on process, the strategy makes sense. However, Tan noted repeatedly that the custom route is a multiyear journey. The same multiyear approach matters for software too.
  • In the end, enterprises want to control their own destinies and be agile. Locking into any one vendor means you have no leverage. That applies to your data layer too, including vendors like Databricks and Snowflake. See: AI strategies and projects: The hope, the fear and everything in between
  • Enterprises are likely to think about custom apps because they're tired of SaaS costs rising as fast as health care costs. Perhaps the suite always wins, but that phase in the AI app market may not arrive for years.


Veeam and Securiti: Data Trust Redefines Security Strategy

Veeam completed the acquisition of Securiti today, a move that reflects how customer expectations are changing as AI becomes embedded across enterprise workflows.

For a long time, enterprises approached data protection and security through an operational lens. Backups focused on recovery. Security tools focused on infrastructure and access. Governance lived in a separate world, often driven by compliance teams. Those boundaries are now breaking down, and data itself is moving to the center of security decision-making.

AI is the catalyst.

AI changed how data behaves, and that changed what security teams need

AI has turned previously dormant data into active fuel. Unstructured documents, logs, recordings, and historical files are now being indexed, summarized, embedded, and reused across copilots and agent-driven workflows. Data is no longer static or slow moving. It is accessed, transformed, and recombined at machine speed.

That shift exposes a problem many organizations have lived with for years but could afford to ignore. Most enterprises do not have a consistent, up-to-date understanding of what data they have, where it lives, who can access it, and what risk it carries.

In AI-driven environments, that gap moves beyond governance and becomes a delivery issue. Security, privacy, and risk teams increasingly slow or pause AI initiatives because they cannot establish trust in the data supply chain quickly enough.

[Source: Veeam]

Why data awareness is moving closer to the core platform

This is where capabilities such as data discovery, classification, and risk context start to matter more. Often described as data security posture management (DSPM), these capabilities help organizations continuously understand sensitive data across structured and unstructured environments and apply policy-driven controls.
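As a rough illustration of the discovery-and-classification step at the heart of DSPM, the sketch below scans records for sensitive patterns and assigns a coarse risk label. The patterns, category names and risk tiers are illustrative assumptions, not any vendor's taxonomy:

```python
# Minimal sketch of DSPM-style discovery and classification: scan text
# records for sensitive patterns and attach a coarse risk label.
# Patterns and tiers here are illustrative, not a production taxonomy.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(record: str) -> dict:
    """Return the sensitive categories found in a record and a risk label."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(record)]
    if {"us_ssn", "credit_card"} & set(hits):
        risk = "high"
    elif hits:
        risk = "medium"
    else:
        risk = "low"
    return {"categories": hits, "risk": risk}

print(classify("Contact jane.doe@example.com, SSN 123-45-6789"))
```

A real DSPM platform runs this kind of scan continuously across structured and unstructured stores and feeds the results into policy enforcement, rather than as a one-off script.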

What is changing is the role these capabilities play. Data awareness is becoming foundational to how security, governance, and AI programs operate, rather than something added later.

Securiti’s role in this shift reflects what buyers are looking for: persistent visibility into data, contextual understanding of sensitivity and risk, and the ability to apply consistent policies as data moves and is reused. As AI usage expands, that visibility becomes essential.

From “can we recover” to “can we recover and trust what we restored”

Another shift I see in buyer conversations is a change in how recovery success is defined.

Restoring systems quickly is no longer sufficient. Teams want confidence that restored data is clean, compliant, and safe to reuse. In AI-driven environments, restored data is often reintroduced into analytics, search, or downstream AI workflows, which amplifies any underlying data issues.

Deeper data understanding increasingly influences operational outcomes. Knowing what data is sensitive, what data was impacted, and what data should be prioritized or restricted now carries as much weight as the mechanics of recovery.

What this means for enterprise buyers

The broader takeaway from this acquisition goes beyond one vendor’s roadmap and points to how enterprise buying criteria are evolving.

Buyers are increasingly looking for platforms that:

  • Provide continuous visibility into sensitive data across environments
  • Apply consistent policies to data, regardless of where it resides or how it is accessed
  • Support AI use cases without introducing unmanaged data risk
  • Connect data understanding to real operational actions, including recovery and reuse

This does not imply that every organization needs a single, monolithic platform. It does suggest that fragmented approaches, where data insight, security controls, and operational processes remain siloed, are becoming harder to sustain.

Where this leaves security leaders

Veeam’s acquisition of Securiti reflects a broader market reality. AI has shifted the center of gravity in security from systems to data. As data becomes more fluid, more valuable, and more exposed, enterprises need stronger, more integrated ways to understand and control it.

Data discovery and classification may not be the most visible parts of an AI strategy, but they are quickly becoming some of the most consequential. Security, governance, and recovery all now converge on a single prerequisite.

Do you actually trust your data?


Rivian’s AI strategy: Four takeaways

Rivian's vertically integrated approach to autonomous driving and AI is enabled by a data flywheel the company uses to train and optimize its models.

The automaker held its first Autonomy & AI Day and perhaps the biggest lesson is that Rivian is an example of a company using its first-party data to develop new opportunities.

Rivian CEO RJ Scaringe highlighted the company's strategy, which revolves around owning its AI stack. That stack includes purpose-built silicon and a platform used to ingest data and train models. Rivian is looking to compete in the AI and autonomy game against the likes of Tesla as well as Alphabet unit Waymo.

"Directly controlling our network architecture and our software platforms in our vehicles has, of course, created an opportunity for us to deliver amazingly rich software. But perhaps even more importantly, this is the foundation of enabling AI across our vehicles and our business," said Scaringe.

Key news items from Rivian's investor meeting:

  • Rivian unveiled its Rivian Autonomy Processor (RAP1), a custom 5nm processor that integrates processing and memory on a single multi-chip module.
  • RAP1 features RivLink, which is a low latency interconnect technology that networks chips for more processing power.
  • The company outlined its third-gen Autonomy computer, or Autonomy Compute Module 3 (ACM3). ACM3 can process 5 billion pixels per second.
  • Rivian has an in-house developed AI compiler and platform. The platform, the Rivian Autonomy Platform, features an end-to-end data loop and its Large Driving Model (LDM), which is an LLM for driving. The LDM will distill strategies from Rivian's datasets.

Going forward, Rivian plans to integrate LiDAR into its upcoming R2 models at the end of 2026. LiDAR augments Rivian's multi-sensor strategy. Rivian also said it will add Universal Hands-Free driving features to its second-gen R1 vehicles. The system will be available on 3.5 million miles of roads in the US and Canada.

Rivian's AI strategy beyond autonomy includes Rivian Unified Intelligence, a foundation of multimodal models, multiple LLMs and data. The platform is designed to enable Rivian to roll out features, improve service and offer predictive maintenance. Rivian is also launching a next-gen voice interface in early 2026 that uses its edge models, third-party integrations and reasoning LLMs.

Beyond the news barrage from Rivian, there are multiple takeaways from the company's strategy meeting. Here are a few:

Rate of change only increasing. Rivian has created an architecture that can adapt to the pace of change. Enterprises will need to work under the assumption that the rate of change over the next five years will be much faster than the last five years.

"If we look forward 3 or 4 years into the future, the rate of change is an order of magnitude greater than what we've experienced in the last 3 or 4 years," said Scaringe.

Also: Uber outlines its autonomous vehicle plan | GM to integrate Google Gemini, delivers unified software-defined vehicle architecture

First party data is everything. "Our approach to building self-driving is really designed around this data flywheel. Our deployed fleet has a carefully designed data policy that allows us to identify important and interesting events that we can use to train our large model offline, before distilling the model back down into the vehicle," said Scaringe.
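The offline-train-then-distill loop Scaringe describes can be sketched with classic knowledge distillation: a large teacher model's temperature-softened outputs become training targets for a compact in-vehicle student. This is a generic, dependency-free illustration, not Rivian's actual pipeline:

```python
# Sketch of knowledge distillation: a large "teacher" model trained
# offline produces soft targets that a small in-vehicle "student" model
# learns to match. Pure-Python toy; the logits below are made up.
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of student predictions against softened teacher targets."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

# Toy driving decision: logits over [brake, coast, accelerate].
teacher = [4.0, 1.0, 0.5]  # large model trained offline on fleet data
student = [3.5, 1.2, 0.4]  # compact model being trained for the vehicle
print(f"distillation loss: {distillation_loss(teacher, student):.4f}")
```

The loss is minimized when the student's softened distribution matches the teacher's, which is what lets the compact model inherit behavior learned from fleet-scale data.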

AI will touch every process. Rivian is leveraging its AI backbone for its vehicles and autonomous efforts. But Rivian's AI backbone also runs through the enterprise. Scaringe said its AI strategy will impact its sales and service model, supply chain and manufacturing infrastructure.

You may need to build your own. Vidya Rajagopalan, Senior Vice President of Electrical Hardware at Rivian, explained why the company had to develop its own processors. She said:

"It's important to address why we chose to build in-house silicon. The reason for doing it is velocity, performance and cost.

With our in-house silicon development, we're able to start our software development almost a year ahead of what we can do with supplier silicon. We actually had software running on our in-house hardware prototyping platform well ahead of getting first silicon. Our hardware and software teams are actually co-located and they're able to develop at a rapid pace that is just simply not possible with supplier silicon."

Rajagopalan said the ability to customize is also critical for designing for current use cases and the future. In addition, Rivian can optimize to save money.

Think multiple models. Wassym Bensaid, Chief Software Officer at Rivian, said the company has developed its own model for driving, but has a "suite of specialized agents."

"Every Rivian system from manufacturing, diagnostics, EVR planning, navigation becomes an intelligent node through MCP. And the beauty here is we can integrate third-party agents. And this is completely redefining how apps in the future will integrate in our cars," said Bensaid. "We orchestrate multiple foundation models in real time, choosing the right model for each task. And we support memory and context, allowing us to offer advanced levels of personalized experience."

Bensaid said the use of multiple models and Rivian's architecture is designed to move workloads from the cloud to the edge. Rivian Unified Intelligence is the connective tissue.
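The per-task routing Bensaid describes can be sketched as a simple policy that prefers on-vehicle edge models and falls back based on connectivity. The task names, model names and rules below are hypothetical, not Rivian's actual orchestration logic:

```python
# Sketch of "right model for each task" routing: latency-sensitive tasks
# stay on the vehicle's edge models; heavier reasoning goes to the cloud
# when connectivity allows. All names and rules here are hypothetical.

EDGE_TASKS = {"voice_command", "navigation", "diagnostics"}

def route(task: str, online: bool) -> str:
    """Pick a model target for a task; prefer edge, degrade gracefully."""
    if task in EDGE_TASKS:
        return "edge-small-model"        # low latency, always available
    if online:
        return "cloud-reasoning-llm"     # heavier reasoning workloads
    return "edge-fallback-model"         # offline fallback for everything else

print(route("voice_command", online=True))   # stays on the edge
print(route("trip_planning", online=True))   # goes to the cloud
print(route("trip_planning", online=False))  # degrades to edge fallback
```

Real orchestration would also weigh cost, context length and privacy, but the core design choice is the same: the router, not any single model, is the stable interface.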


Custom AI processors mean Broadcom printed money in Q4

Broadcom reported better-than-expected fourth quarter results as it continued to see a revenue surge due to custom AI chips.

The company reported fourth quarter net income of $8.52 billion, or $1.74 a share, on revenue of $18.01 billion, up 28% from a year ago. Non-GAAP earnings for the fourth quarter were $1.95 a share.

Wall Street was expecting non-GAAP earnings in the fourth quarter of $1.86 a share on revenue of $17.49 billion.

As for the outlook, Broadcom projected first quarter revenue of $19.1 billion, up 28% from a year ago.

Broadcom, despite acquiring VMware to beef up its software business, is still a hardware story. CEO Hock Tan said revenue growth was "driven primarily by AI semiconductor revenue increasing 74% year-over-year." Broadcom makes chips for Google and inked a deal for custom processors for OpenAI. 

Tan added that Broadcom expects momentum to continue in the first quarter, driven by demand for custom AI accelerators and Ethernet AI switches.

In the fourth quarter, Broadcom's semiconductor business was 61% of sales and infrastructure software was 39%. Chip revenue was up 35% in the quarter and software was up 19%.

As a result, Broadcom is just printing money. Cash flow from operations in the fourth quarter was $7.7 billion, up 37% from a year ago. Free cash flow was up 36%. Broadcom's cash and cash equivalents checked in at $16.18 billion, up from $10.72 billion in the previous quarter.

For fiscal 2025, Broadcom reported net income of $23.13 billion, or $4.77 a share, on revenue of $63.89 billion, up 24% from fiscal 2024. 
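A quick back-of-envelope check on the reported figures: the share count implied by Q4 net income and per-share earnings should roughly match the one implied by the full-year numbers, and it does:

```python
# Sanity check on the reported figures: implied share count from
# net income / EPS, for the quarter versus the full year.
# Figures are the ones reported above; only the division is ours.
q4_shares = 8.52e9 / 1.74    # Q4: $8.52B net income, $1.74 a share
fy_shares = 23.13e9 / 4.77   # FY25: $23.13B net income, $4.77 a share

print(f"implied shares, Q4:   {q4_shares / 1e9:.2f}B")
print(f"implied shares, FY25: {fy_shares / 1e9:.2f}B")
```

Both work out to roughly 4.9 billion shares, so the quarterly and annual per-share figures are internally consistent.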

Tan said on the earnings call:

  • "Our custom accelerated business more than doubled year-over-year, as we see our customers increase adoption of XPUs, as we call those custom accelerators in training their LLM and monetizing their platforms through inferencing APIs and applications."
  • "These XPUs, I may add, are not only being used to train and inference internal workloads by our customers, the same XPUs in some situations have been extended externally to other LLM peers, best exemplified at Google, where the TPUs use in creating Gemini, have also been used for AI cloud computing by Apple, Coherent and SSI as an example."
  • "Last quarter, Q3 '25, we received a $10 billion order to sell the latest TPU Ironwood racks to Anthropic. And this was our fourth customer that we mentioned. And in this quarter Q4, we received an additional $11 billion order from the same customer for delivery in late 2026."
  • "That does not mean our other two customers are using TPUs. In fact, they prefer to control their own destiny by continuing to drive their multiyear journey to create their own custom AI accelerators or XPU racks, as we call them. And I'm pleased today to report that during this quarter, we acquired a fifth XPU customer through a $1 billion order placed for delivery in late 2026."
