Results

IBM outlines quantum computing roadmap through 2029, fault-tolerant systems

IBM updated its quantum computing roadmap, which culminates in IBM Quantum Starling, a large-scale fault-tolerant quantum system due in 2029.

Big Blue said IBM Quantum Starling will be delivered by 2029 and installed at the IBM Quantum Data Center in Poughkeepsie, New York. That system is expected to perform 20,000 times more operations than today's quantum computers.

For IBM, Quantum Starling will be the headliner of a fleet of quantum computing systems. IBM CEO Arvind Krishna said the company is leaning into its R&D to scale out quantum computing for multiple use cases including drug development, materials discovery, chemistry, and optimization. IBM also recently outlined flexible pricing models for quantum computing to expand usage and upgraded its Quantum Data Center to its latest Heron quantum processor.

The news lands as quantum computing players outline plans to scale organically or via acquisition. IonQ just announced its plans through 2030 and quantum computing vendors have been laying out plans throughout 2025.

IBM said Starling will be able to run 100 million quantum operations using 200 logical qubits. A logical qubit is a unit of an error-corrected quantum computer tasked with storing one qubit’s worth of quantum information. Quantum computers need to be error corrected to run large workloads without fault.

Starling will also be a foundation system for IBM Quantum Blue Jay, which will be able to run 1 billion quantum operations over 2,000 logical qubits.

To get to fault-tolerant scale, IBM is building an architecture that can prepare and measure logical qubits, apply universal instructions and decode measurements from logical qubits in real time. This architecture, which was outlined in two research papers, also has to be modular and energy efficient.

Here's how IBM is going to get to Starling and beyond:

  • 2025: IBM Quantum Loon will launch to test architecture components for quantum low-density parity check (qLDPC) codes, which reduce the number of physical qubits needed for error correction and cut overhead by about 90% (see the back-of-the-envelope sketch after this list).
  • 2026: IBM Quantum Kookaburra will feature a modular processor to store and process encoded information and combine quantum memory and logic operations.
  • 2027: IBM Quantum Cockatoo will link two Kookaburra modules, connecting quantum chips like nodes in a larger system.
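
For a back-of-the-envelope sense of what that roughly 90% overhead reduction means, the sketch below compares physical-qubit counts for Starling's 200 logical qubits under an assumed surface-code-style overhead versus a qLDPC-style overhead. The per-logical-qubit ratio is an illustrative assumption, not an IBM figure.

```python
# Illustrative only: the per-logical-qubit overhead is assumed, not an IBM-published number.
LOGICAL_QUBITS = 200                  # Starling's stated logical qubit count

surface_code_overhead = 1_000         # assumed physical qubits per logical qubit (illustrative)
qldpc_overhead = int(surface_code_overhead * (1 - 0.90))   # ~90% reduction per the roadmap

surface_total = LOGICAL_QUBITS * surface_code_overhead
qldpc_total = LOGICAL_QUBITS * qldpc_overhead

print(f"Surface-code-style estimate: {surface_total:,} physical qubits")
print(f"qLDPC-style estimate:        {qldpc_total:,} physical qubits")
print(f"Reduction: {1 - qldpc_total / surface_total:.0%}")
```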

Holger Mueller, an analyst at Constellation Research, said:

"Sometime in the last 6 months quantum vendors realized that they will not be able to produce enough qubits for real world use cases and are focusing on error correction. What is unique with IBM is that it's modular approach has led to the realization that there are challenges to overcome when putting multiple quantum computers together, hence a roadmap change and a focus on qLDPC based couplers, with 'Loon', coming this year. Next year will then be the showcase and proof point that all of this works with IBM Quantum Kookaburra. Kudos go to IBM for laying out its roadmap further, all the way to its Starling system, allowing CxOs to align their quantum uptake plans."

IonQ acquires Oxford Ionics for $1.07 billion, gets quantum-on-a-chip technology

IonQ said it will acquire the UK's Oxford Ionics in a deal valued at $1.075 billion, paid mostly in stock with $10 million in cash. The deal is designed to accelerate IonQ's quantum computing roadmap and establish a global hub for research and development.

The purchase is IonQ's largest to date. Niccolo de Masi, IonQ CEO, said in a statement that the Oxford Ionics purchase will "set a new standard within quantum computing and deliver superior value for our customers through market-leading enterprise applications."

According to IonQ, Oxford Ionics will bring complementary technology to the company. IonQ focuses on trapped ion systems, and Oxford Ionics holds world records in fidelity, which measures the accuracy of quantum operations. The game plan for the combined company is to provide an integrated quantum computing stack that features IonQ quantum computing, applications and networking with Oxford Ionics' ion-trap technology, which is manufactured on standard semiconductors.

Oxford Ionics brings ion-trap-on-a-chip technology to IonQ and will enable the combined company to "accelerate IonQ’s commercial quantum computer miniaturization and global delivery." Oxford Ionics' founders, Dr. Chris Ballance and Dr. Tom Harty, are expected to remain with IonQ after the acquisition is completed.

In an SEC filing, IonQ said it will issue up to about 35 million new shares to pay for the deal. The company said:

"The number of shares of Common Stock to be issued will not be less than 21,143,538 or more than 35,241,561. The final number of shares of Common Stock to be issued as Transaction Consideration will be calculated using the volume-weighted average price for shares of Common Stock for the 20 trading days immediately preceding, but not including, the third business day prior to the date of the Closing, but will not be more than $50.37 per share or less than $30.22 per share."

Ballance said Oxford Ionics' quantum chip can be manufactured in standard semiconductor fabs. "We look forward to integrating this innovative technology to help accelerate IonQ’s quantum computing roadmap for customers in Europe and worldwide," said Ballance.

IonQ has been on an acquisition spree of late with an emphasis on quantum networking. The acquisition of Oxford Ionics is designed to bring scale to compute and use cases in materials science, drug discovery, logistics, financial modeling and defense.

Here's a look at IonQ's acquisitions.

IonQ has also been expanding its global footprint and Oxford Ionics will be a beachhead in the UK.

Oxford Ionics outlined its roadmap last month with a plan that features enterprise-grade quantum computing by 2027. IonQ's roadmap features a similar timeline, and the company will bring an established customer base and sales team to better commercialize Oxford Ionics' technology.

In a statement, IonQ said the combined company plans to build systems with 256 physical qubits with 99.99% accuracy by 2026 and scale to 10,000 physical qubits with logical accuracy of 99.99999% by 2027. Ultimately, IonQ wants to hit 2 million physical qubits in quantum computing by 2030.
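
Those accuracy figures matter because errors compound across every operation in a circuit. Here is a rough sketch, assuming independent per-operation errors and an illustrative circuit depth, of why the jump from four nines of physical fidelity to seven nines of logical accuracy is the difference between a deep circuit failing and succeeding.

```python
# Rough sketch: probability a circuit completes without error, assuming independent
# per-operation errors. The circuit depth is an illustrative assumption.

physical_fidelity = 0.9999      # 99.99% per-operation accuracy (2026 target)
logical_fidelity  = 0.9999999   # 99.99999% logical accuracy (2027 target)
operations = 100_000            # assumed circuit depth for illustration

p_physical = physical_fidelity ** operations
p_logical  = logical_fidelity ** operations

print(f"Success over {operations:,} ops at 99.99%:    {p_physical:.2%}")
print(f"Success over {operations:,} ops at 99.99999%: {p_logical:.2%}")
# At four nines, a ~100k-operation circuit almost certainly fails; at seven nines it
# mostly succeeds, which is why error-corrected logical qubits matter for deep circuits.
```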

IonQ and Oxford Ionics held a technology overview call featuring Ballance and Dr. Mihir Bhaskar, CEO of Lightsynq, which IonQ recently acquired. De Masi touted a recent use case with AstraZeneca, IonQ, Nvidia and AWS and added that Oxford Ionics will accelerate commercial usage.

"IonQ, Lightsynq and Oxford Ionics will create the winning quantum computer in each year and every era of quantum computing," said de Masi.

Dean Kassmann, SVP of engineering and technology at IonQ, said the company's latest acquisitions will "represent a significant acceleration of our planned development work to realize our vision to build the world's best quantum computers to solve the world's most impactful and complex problems."

Kassmann also outlined IonQ's roadmap with Oxford Ionics in the fold.

Ballance said Oxford Ionics' approach of leveraging existing technologies alongside quantum computing will be scalable:

"We have a clear path to apply this to systems with 10s of 1000s of qubits in a single chip, and we've been to be working on our 256 qubit quantum processor units. Our technology allows us to scale devices to millions of qubits by building bigger and bigger chips. But what's more, these chips can be networked by photonic interconnects to allow for distributed compute."

Constellation Research analyst Holger Mueller said:

"The road to commercial quantum uses cases goes this way: (a) more qubits, (b) better error correction or (c) a combo of both. For a decade the industry was squarely rooted in more qubits. More recently it's about error correction, which means that vendors think they have sufficient qubits. IonQ is the perfect example of saying it's a combo of both to enable more sophisticated quantum use cases."

Apple's WWDC 2025: Apple Intelligence leaves a void as execs go redesign happy

Apple executives acknowledged at the company's Worldwide Developer Conference (WWDC) that Apple needs more time to make Apple Intelligence work well. In the meantime, Apple executives outlined Liquid Glass, a redesign that'll flow through Apple devices.

The company also announced new naming conventions for iOS, watchOS, tvOS, macOS, visionOS and iPadOS.

If anything, Apple's developer keynote highlighted how Apple Intelligence, outlined in 2024 with great fanfare, has fallen short of expectations. Craig Federighi, SVP of Software Engineering at Apple, noted how Apple Intelligence did ship features including email and notification summarization, notes, smart replies and ways to clean up video and photos.

Federighi said:

"We delivered this while taking an extraordinary step forward for privacy and AI with private cloud compute, which extends the privacy of your iPhone into the cloud so no one else can access your data, not even Apple. We also introduced enhancements that make Siri more natural and more helpful, and as we've shared, we're continuing our work to deliver the features that make Siri even more personal. This work needed more time to reach our high quality bar, and we look forward to sharing more about it in the coming year."

He said that Apple Intelligence will get more languages, but the approach is incremental for now. "We're making the generative models that power Apple Intelligence more capable and more efficient, and we're continuing to tap into Apple Intelligence in more places across our ecosystem. Throughout today's presentation, you'll see new Apple Intelligence features that elevate your experiences across iPhone, Apple Watch, Apple Vision Pro, Mac and iPad. Plus, this year, we're doing something new, and we think it's going to be pretty big. We're opening up access for any app to tap directly into the on device," said Federighi.

As for LLMs, Apple Intelligence will get a new foundation model framework that gives developers direct access to the on-device models, with privacy and offline use built in.

"We think this will ignite a whole new wave of intelligent experiences in the apps you use every day. For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," said Federighi. "And because it uses on device models this happens without cloud API costs. We couldn't be more excited about how developers can build on Apple intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

With that Apple Intelligence tease--which may not be reality until 2026--Apple moved on to other key items, including a redesign for iOS that already has Windows Vista trending due to the resemblance.

A few thoughts:

  • Apple Intelligence appears to be banking on local LLMs, but given the rate of innovation, that could be a mistake. Developers will need to figure out how much they can differentiate with Apple’s on-device model.
  • Key details about Apple Intelligence's approach for developers are lacking.
  • Apple is clearly betting on privacy as a pitch for AI, but it's unclear whether focusing on its edge devices will keep pace with OpenAI, Google, Anthropic and Microsoft to name a few.
  • Apple does appear to be integrating OpenAI's ChatGPT across its applications. For instance, Apple said ChatGPT image generation is now available in Image Playground.
  • It's quite possible that Apple will have to buy its way out of this AI pickle, but historically the company hasn't made big acquisitions. The valuations are stretched for foundational model players.
  • The risk here is that Apple devices are merely a vessel for other companies' AI no matter how pretty the operating systems become.

Other news items:

  • Apple's WWDC keynote was devoted to the redesign and the changes across devices looked strong. However, much of what Apple is proposing is in the latest Android today. See: Apple release on redesign.
  • Developers are getting new APIs for location, enhancements to notifications in Apple Watch, App Intents, and other goodies.
  • Visual Intelligence will pull up context across iPhone apps to give you more information, ratings and other data.

 

Snowflake Summit 2025: The AI-Native Data Foundation Gets Real

Last year, Snowflake made a bold pitch for an "AI Data Cloud." This year, they got down to business.

??"We want to empower every enterprise on the planet to achieve its full potential through data and AI. And AI, and we think this moment makes it possible more than any such moment we have seen in a decade or even two decades." - Sridhar Ramaswamy, Summit 2025

CEO Sridhar Ramaswamy's message was clear: the age of AI-native data isn't coming—it's here. And with it, enterprise leaders no longer need to stitch together data lakes, BI tools, and AI pipelines themselves. Snowflake's platform is evolving to make data AI-ready from the moment it's created—and to empower business and technical users alike to act on it, securely and at scale.

While less flashy than 2024's announcements, this year's updates show Snowflake delivering the operational muscle to begin making Sridhar's vision a reality: faster performance, real integration, and tangible outcomes that will make or break AI investments.

1. AI-Native Analytics, Operationalized

Delivering decision intelligence and automation at scale, turning data into action through governed, AI-powered workflows accessible to analysts and agents alike.

What's new:

  • Cortex AISQL: AI callable in SQL—functions like AI_FILTER, AI_AGG, and AI_EXTRACT add the ability to handle unstructured text, images, and audio atop structured queries (an illustrative sketch follows this list).
  • Snowflake Intelligence: Natural language agents to access structured/unstructured data.
  • Semantic Views: Define business logic/metrics in a reusable, governed layer—so SQL users, dashboards, and AI agents all use consistent logic.
  • AI Observability: Trace AI behavior, audit responses, validate, and cite model outputs.
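
To make the AISQL idea concrete, here is a hedged sketch of calling one of these functions from Python through the Snowflake connector. The table, column, credentials, and the exact AI_FILTER/PROMPT call shape are assumptions for illustration; check Snowflake's Cortex AISQL documentation for the precise signatures.

```python
# Hedged sketch: running a Cortex AISQL-style query from Python.
# Table/column names, credentials, and the exact AI_FILTER call shape are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder credentials
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
)

# Filter unstructured review text with an AI predicate on top of a normal SQL query.
sql = """
SELECT review_id, review_text
FROM product_reviews
WHERE AI_FILTER(PROMPT('Does this review describe a shipping problem? {0}', review_text))
"""

cur = conn.cursor()
try:
    cur.execute(sql)
    for review_id, review_text in cur.fetchall():
        print(review_id, review_text[:80])
finally:
    cur.close()
    conn.close()
```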

Why it matters:

  • BI-to-AI leap: Analysts get access to advanced models without leaving SQL. It flattens the skill gap and speeds delivery.
  • Shifts BI from dashboards to conversations with your data, giving business functions an AI assistant to explore your data in Snowflake.
  • Trust first: From semantic models to cited answers, Snowflake treats transparency and observability as part of the core Snowflake architecture.
  • Raises the question already being asked around BI/analytic investments: how can we empower analysts and decision-makers to safely leverage AI without introducing new silos or trust gaps?

2. Accelerated Data Modernization & Engineering

One of the top initiatives shared by customers and partners at the Summit was modernization — migrating and integrating diverse data sources, including structured, unstructured, and real-time data, into a single, governed platform. Snowflake listened.

What's new:

  • OpenFlow (GA): Enterprise-grade, NiFi-based integration engine—handles batch, real-time, and unstructured ingestion.
  • SnowConvert AI: Uses AI to automate code conversion, validation, and migration workflows from legacy warehouses.
  • Workspaces & dbt-native Support: A single pane for SQL, Python, Git-integrated pipelines.

Why it matters:

  • Integration plane arrives: Snowflake isn't just a warehouse—it now competes in the ETL/iPaaS layer with observability, unstructured support, and BYOC options.
  • Legacy migration simplified: Snowflake removes excuses for staying on legacy platforms, especially when performance or interactivity on the data requires data centralization.
  • Future-proofing pipelines: Snowflake is positioning itself as the one-stop shop for modernizing legacy data infrastructure and the AI workflows that come next.
  • With the number of data projects ongoing, data and AI leaders must answer the question: How do we modernize legacy systems without creating brittle pipelines or compromising migration quality?

3. Adaptive Compute: Simpler, Smarter, Cheaper

As the category matures, enterprises naturally shift to prioritize interoperability, simplification, and managed cost against unpredictable AI and analytics workloads. These considerations consistently ranked among the top 5 requests from my conversations with customers and partners during the Summit.

What's new:

  • Adaptive Compute Warehouses: Policy-based runtime management. Snowflake dynamically assigns compute—no sizing or tuning needed.
  • Gen2 Warehouses: 2.1x–4.4x faster performance with new hardware/software blends.
  • Unified Batch & Streaming Support: Streaming ingestion at up to 10 gigabytes per second with 0.5- to 10-second latency, without building separate pipelines or introducing external tools.
  • Multiple FinOps tools: Review workload performance, monitor spending anomalies, set tag-based spending limits for resources, and minimize FinOps overhead.

Why it matters:

  • Less infrastructure babysitting: Auto-scaling and intelligent routing reduce platform overhead and FinOps headaches, ensuring performance without waste.
  • Foundation for unpredictable AI: Agentic and ML workloads are bursty and need near-real-time data. Adaptive Compute removes the configuration burden.
  • The question posed by data and AI leaders was how to make platforms more responsive to the growing set of AI and analytics workloads without increasing operational complexity. For more mature leaders, the question was how AI would drive operational efficiency to free resources for the next initiatives.

4. Interoperability & Governance: A Unified Control Plane for the AI Era

With AI agents and self-service analytics growing rapidly, a unified orchestration layer is essential to enforce trust, consistency, and visibility across users, tools, and data products.

What's new:

  • Iceberg & Polaris Interop: Full read/write support for open table formats and external catalogs (an illustrative read example follows this list).
  • Horizon Copilot & Universal Search: Natural language access to lineage, metadata, and permissions across all data unified in Snowflake.
  • AI Governance Frameworks: Native identity, MFA, and audit controls extended to agents and hybrid environments.
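
As an illustration of what open-table interoperability looks like from the consumer side, the sketch below reads an Iceberg table registered in an external REST catalog using PyIceberg. The catalog endpoint, credentials, and table identifier are placeholders, and the Polaris-style configuration is an assumption.

```python
# Hedged sketch: reading an Iceberg table from an external (Polaris-style) REST catalog
# with PyIceberg. Catalog URI, credentials, and table name are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "analytics",
    **{
        "uri": "https://polaris.example.com/api/catalog",  # placeholder catalog endpoint
        "credential": "client_id:client_secret",           # placeholder credential
        "warehouse": "analytics_wh",
    },
)

table = catalog.load_table("sales.orders")     # placeholder namespace.table
df = table.scan(limit=100).to_pandas()         # read a sample into pandas
print(df.head())
```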

Why it matters:

  • Centralize to an open lakehouse to then orchestrate: As data use cases multiply—from self-service analytics to autonomous agents—data and analytic leaders need a single orchestration layer to manage access, visibility, and consistency.
  • Governance as infrastructure: Horizon makes governance user-facing and operational—so rules aren't just enforced, they're discoverable and explainable across all your mapped data assets.
  • Interoperability is execution: Supporting Iceberg, Polaris, and open catalogs ensures flexibility without fragmentation—an essential trait as AI workloads touch more business domains.

Final Take: The Battle for the Data and AI Foundation Is Now in Its Next Phase

Satya Nadella said it best: "We are in the mid-innings of a major platform shift." (NOTE: you can read my summary of MSFT Build 2025 here)

If that's the case, Snowflake just stepped up to the plate and knocked a double into deep Data and AI platform territory.

Snowflake's announcements aren't just analytics with AI features—they mark the emergence of a unified execution layer where data, decisions, and intelligent agents converge to support a decision-centric architecture. The battlefield has shifted. It's not about who stores data better—it's about who activates it faster, governs it smarter, and enables AI to drive real outcomes.

With Cortex agents, Adaptive Compute, and OpenFlow integration, Snowflake is betting that the AI-native data foundation will be the next cloud OS. And in this next inning, the winners will be the ones who can orchestrate, not just visualize, enterprise intelligence.

For CDAOs, the implications are practical: Modernization no longer requires trade-offs between trust, scale, and agility. The race is no longer about who can analyze—it's about who can execute. And Snowflake's AI-native data foundation has just moved the line on execution, making it faster, safer, and more complete.

 

If You Want More on Snowflake Summit 2025

Related articles for more information:

  • Watch Holger Mueller and me break down Snowflake Summit 2025: the customer/partner buzz from the floor, the OpenFlow acquisition, and Enterprise implications. Watch the recap here: https://bit.ly/443yPvp
  • Snowflake makes its Postgres move, acquires Crunchy Data: bit.ly/4kvrzOG

Qualcomm acquires Alphawave for $2.4 billion

Qualcomm said it will acquire Alphawave IP Group, a UK company, for $2.4 billion in a move that will give it assets to speed up its data center expansion.

The purchase will give Qualcomm IP in high-speed connectivity and compute for its Qualcomm Oryon CPU and Hexagon NPU processors. In May, Qualcomm outlined its data center CPU efforts and said it will connect with Nvidia's NVLink Fusion initiative.

Qualcomm said it can bring low-power inferencing to data centers as well as custom processors for AI workloads.

Cristiano Amon, CEO of Qualcomm, said Alphawave will bring complementary technology for its CPUs and NPUs. "The combined teams share the goal of building advanced technology solutions and enabling next-level connected computing performance across a wide array of high growth areas, including data center infrastructure," said Amon.

The deal is expected to close in the first quarter of 2026.

Will Technology Convergence Crush or Celebrate the Contact Center?

A contact center at a crossroads is nothing new. It seems that every time there is a technological evolution, the contact center faces disruption. That is especially true in this age of artificial intelligence that is transforming experiences for customers and employees alike. AI has redefined efficiency, encouraging a streamlining of technology estates and data stores, and forever shifting where and how engagement, work and collaboration happen.

The clarity of the high-fidelity signal gathered directly by the contact center from the customer can now be harnessed, analyzed and shared across the enterprise, in large part, thanks to AI. The responses, reactions and skills demonstrated by service representatives that add to the durability of customer relationships can be captured and added to an organization’s knowledge repository. For the contact center, AI has delivered operational efficiencies long dreamed about. As quickly as AI has reinforced the strategic importance of the contact center, it has also invited questions from technology leaders looking to streamline communications technology stacks and avoid overly complicated and customized investments.

This intersection requires a decision and has elevated key questions around need, infrastructure and intention. Why are the “as a service” offerings around communications—contact center as a service (CCaaS), unified communications as a service (UCaaS), marketing automation, sales engagement, customer service management, et al.—so segmented? Why don’t organizations think of customer communications as a form of customer collaboration? Why does it take five platforms, three swivel chairs and endless patience all in the name of customer relationships? Why is this entire vision so complicated, costly and inefficient?

Pressure is mounting to not just justify the costs of doing business, but to rein in the total cost of technology ownership. As AI experimentation gives way to scaled AI implementation, there is an expectation for AI infusion into every aspect of business. Thinking of AI in the organization, let alone the contact center, as an add-on or accessory misunderstands the true power of AI. The expectation is for AI to be woven into the very fabric of work. This demands new strategies around data, workflows and infrastructure…and this demand turns into a pressure that is both top down (as C-suite leaders and boards ask about AI progress) and bottom up (as employees wonder why AI tools are not as readily available for work as they are in their personal lives).

The unintended outcome of this surge of efficiency is a reexamination of communications stacks and structures, pushing markets once segmented by dialers, inbound or outbound actions and where calls happened to consolidate into more elegant, cloud native and fully connected systems.

The ask has become to focus on how the people who engage most directly and immediately with customers work as opposed to where they work. Instead of discussing in-office versus remote work, the contact center has recentered on people over places. This new strategy for efficiency looks at how work is done and how that work can be enhanced and decisions accelerated thanks to automation and AI. While service reps are being empowered with seamless automated support to their work, customers are being encouraged to engage at their will, in the channels of their choosing. The goal of the contact center should be to streamline the work of the service rep to intentionally take the work out of being a customer.

Advancing the Contact Center With AI

While technology convergence is inevitable, the contact center won’t be destroyed. Convergence has, however, made choosing a path forward an imperative. So where can contact center leaders start down the right path?

Rethink from the outside in. Despite its absolute and critical role in customer engagement and experience delivery, the contact center is often developed as an inside out strategy, focusing on meeting operational goals that traditionally put the business at the center and work outwards to mold the customer’s experience around those goals. This is where a legacy mindset of shorter call times, call deflection and other organization-first goals and business outcomes have won. But now, thanks to AI, these same operational efficiency goals can be achieved while putting the customer at the center of decision making and strategy.

No customer wants to spend MORE time on the phone with a service representative. Speed and efficiency in managing simple concerns and requests are just as important to the customer as they are to the service team. With generative AI, this speed of decisions and engagement can happen in real time, in the customer’s context and fully attuned to the customer’s journey. The partnership between service reps and their Copilots is immersive and conversational, with a capacity to deeply understand the customer, the business and the individual service rep to boost productivity while simultaneously boosting engagement and customer service.

Modern examples of this include Microsoft’s Copilot and the company’s AI-native portfolio for Service. The Copilot capabilities embed across Dynamics 365 Customer Service and Dynamics 365 Contact Center, integrating with CRM and enterprise data resources to assist service reps and streamline workflows. Rather than disrupting work because of a customer, these workflows carry the customer’s needs, expectations, history and voice into the organization and turn them into powerful assistance and agentic processes.

Turn customer obsession into a passion for value. There is often a mantra that everyone in business should be “obsessed” with the customer. But as in life, obsession can go horribly wrong as it presents as fixation or delusion. Instead, the contact center has a unique opportunity to leverage its keen understanding of the customer and context to establish workflows and automations that focus on how the organization can both proactively and reactively deliver value.

Autonomous service works best when the business outcome and customer value exchange are aligned. Call deflections hold value to the business, shifting customers into more cost-effective self-service digital experiences. But they are only a valued experience if the customer achieves their goals in a manner they expect. Architecting value-first autonomous workflows starts from the customer and tracks their engagements back to the contact center, understanding where, how, when and why the customer is engaging. To just be obsessed with a customer could mean knowing everything about that person, but it does not guarantee the ability or capacity to act. Value delivery is rich with empathy but also compels the service rep and the business to help change the status quo of the customer.

Become the center point for market and customer knowledge. Knowing more about the customer and their definition of value shouldn’t be locked away in a contact center solution. Thanks to AI’s capacity to ingest, curate and contextualize customer conversations to better understand the meaning, intention and reality of a customer’s connection, contact centers have become exceptionally comfortable and confident in their ability to synthesize customer voice into a real-time intelligence asset.

Automating time-consuming tasks like call summaries and follow-up emails is the performance-driving starting point. This is the opportunity to establish an enterprise-wide strategy where AI can surface unified intelligence across the entire organization: service reps should have access to shipping and supply chain information that impacts the customer, while supply chain and shipping teams should have access to information about potential points of customer friction and expectations. The sharing of contact center-driven intelligence should be a bi-directional exchange across the modern enterprise.

Shifting the Conversation from Convergence to AI Acceleration

The future of the contact center deserves to be easier, with less heavy lifting and less wear and tear on the people brought to power experience strategies. AI has the capacity to help lift that load and deliver the efficiency and productivity the contact center has always expected and craved. But that is just the first stage of AI maturity and value! There is much more to be achieved, especially as the technology paths and platforms continue to converge. As the walls come down between internal and external communications systems, this concept of an experience of collaboration can be achieved. Shared intelligence, shared understanding and shared experience delivery doesn’t need to be trapped in an individual system or dependent on a single interface or presentation layer. Insight, intelligence and the manifestation of the customer hosted in the form of data can and should easily intersect with the knowledge an organization curates about the business and about the products being consumed.

Thanks to AI the current technology convergence can be an opportunity for simplification and not an assured fate of collapse or pressure-induced failures. Collaboration and communication can intertwine and accelerate the value all parties realize. The real beauty of AI is that thanks to its capacity to ingest, analyze and normalize complex data sources and types, it can extrapolate far beyond human capacity. So let the convergence begin! May it not kickstart an era of rip-and-replace or the stagnation and fear that some of the recent cloud migrations and infrastructure modernizations of the past revealed. Instead, let this vision of communications, collaboration, AI and the customer help make this adventure called the work of service be just a bit easier, more valuable and seamlessly connected.

RPA and those older technologies aren’t dead yet

Robotic process automation (RPA) isn't dead and arguably should get more attention in your enterprise automation portfolio. Simply put, RPA still has a role in an overarching AI strategy and in many cases can be good enough. Keep that in mind for old-school AI, machine learning, process automation and even hybrid cloud.

We’ll stick to RPA for now, but the general theme remains: Automation and returns are what matter. And you don’t have to rely solely on compute-heavy reasoning large language models to get there.

Rest assured that most vendors aren't going to preach the benefits of RPA, but orchestration is about more than AI agents. Process, automation and workflow matter. Use the right tool for the job. If a rule-based approach like RPA works, use it to deliver returns. Ditto for workflow engines, traditional AI and any other non-agentic technology. For things you already know about, there's no reason to force an LLM to reason repeatedly.
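
A minimal sketch of that split, assuming a simple intent-labeled ticket: handle the cases you already understand with deterministic rules and only fall back to an LLM or agent for the long tail. The function names and the fallback call are hypothetical placeholders, not any vendor's API.

```python
# Minimal sketch: deterministic rules first, LLM reasoning only as a fallback.
# `call_llm` is a hypothetical placeholder for whatever model/agent API you use.

RULES = {
    "password_reset": lambda ticket: f"Sent reset link to {ticket['email']}",
    "invoice_copy":   lambda ticket: f"Emailed invoice {ticket['invoice_id']}",
}

def call_llm(ticket: dict) -> str:
    # Placeholder: route the ambiguous case to an LLM/agent for reasoning.
    return f"[LLM fallback] needs human-style reasoning: {ticket['subject']}"

def handle(ticket: dict) -> str:
    rule = RULES.get(ticket["intent"])
    if rule:                      # known, repetitive case: no reasoning required
        return rule(ticket)
    return call_llm(ticket)       # unknown case: pay the reasoning cost only here

print(handle({"intent": "password_reset", "email": "a@example.com"}))
print(handle({"intent": "refund_dispute", "subject": "Charged twice for order 4411"}))
```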

This theme has been bubbling up in recent months amid the agentic AI marketing barrage. CxOs in Constellation Research's BT150 have been noting that RPA is part of the generative AI mix and some acknowledged that the technology may be good enough in many use cases. In fact, older technologies—traditional AI and machine learning—are good enough to deliver significant value.

Marianne Lake, CEO of JPMorgan Chase's Consumer & Community Banking unit, set the scene during the bank’s annual investor day: "Despite the step change in productivity we expect from new AI capabilities over the next five years, we have been delivering significant value with more traditional models. Not every opportunity requires Gen AI to deliver it."

Vendors that play to value instead of the buzzwords seem to be faring well.

During his PegaWorld keynote, Pegasystems CEO Alan Trefler said orchestrating automation, workflows and AI is about finding the right tool for the task at hand. Yes, Pegasystems has RPA, but didn't mention it during PegaWorld or on recent earnings calls. The company has been riding its GenAI Blueprints for revenue growth and launched the Pega Agentic Process Fabric as well as a blueprint that’ll help you ditch legacy infrastructure.

Overall, Pegasystems' broad platform includes decisioning, workflow automation and low-code development.

Trefler said agentic AI is being talked about extensively due to "magical marketing." He said there will be thousands of AI agents running around in enterprises, leading to sprawl without orchestration.

"The right AI for the right purpose is absolutely critical and candidly forgotten by the pundits that just want to dump things in and hope everything goes right. Large language models aren't everything. Languages models are great for some stuff, but for other things you want to use other forms of AI," said Trefler.

Trefler, like Philipp Herzig, Chief AI Officer and Chief Technology Officer at SAP, argues that prompt engineering is dead. Why? Semantic approaches leave too much uncertainty for processes that need to be followed repeatedly. And you don’t need agentic AI to do everything because it’ll make your costs balloon just on energy consumption.

"If you go down the philosophy of using the GPU to do the creation of the workflow and a workflow engine to execute the workflow, the workflow engine takes a 200th of the electricity because it's not reasoning. It's all this reasoning. You don't have to reason on things you already know about," said Trefler.

The upshot is that old-school AI, machine learning and rule-based RPA can be used in a comprehensive automation strategy. In other words, AI agents and genAI simply think (reason) too much.

Francis Castro, head of digital and technology customer operations at Unilever, said at PegaWorld: "I'm a technologist 25 years in the company, driving technology. Sometimes you fall in love choosing the right tool and the right technology. Sometimes we fall in love with the technology, or we fall in love with the problem, but we forget about what we want to achieve."

The takeaway from Pegasystems is that workflows, automation, agentic AI, process and RPA all go together. It's one continuum. ServiceNow sings from the same hymn book, but sure does talk a lot about AI agents today. In the end, RPA is good enough to automate specific, repetitive and rule-based tasks.

"While the market has been talking about Agentic AI and overloading buyers with a laundry list of AI agents, bots and orchestrators, Pega has been focused on the underlying processes and workflows that have long been their bread and butter," said Liz Miller, analyst at Constellation Research.

Systems integrators are busy building AI agents and have shown they're better at it than vendors. But the big picture for these integrators is automation of processes for customers and internally for business process management.

Verint is another vendor playing the value game. The company's first quarter results were better than expected as customers used its AI bots to automate specific processes without the need to change infrastructure or platforms.

Verint CEO Dan Bodner said: "First, more and more brands are fatigued by the AI noise and are looking for vendors that can deliver proven, tangible and strong AI business outcomes now. And second, brands are looking for vendors with hybrid cloud that can deploy AI solutions with no disruption and with a show me first approach."

The CX automation company's secret sauce is delivering value over talk about orchestration, LLMs and agentic AI protocols.

Here's a look at a Verint slide on returns for use cases:

In fact, Bodner mentioned LLMs just twice at the very end of Verint's earnings call. Verint is agnostic about the LLMs it uses for its bots, which were mentioned 24 times. The game for Verint is to automate "micro workflow" in various processes to deliver returns. Verint bots use Verint's Da Vinci AI and are designed to automate tasks and workflows. Verint bots are focused on specific tasks instead of being general purpose.

“Verint’s bet that simple is better paid off,” said Miller. “While everyone was ramping up the hype machine around AI Verint dug into the idea that all of these innovations were great, but outcomes were better. So their simplified preset bot approach where you start with the intended outcome and then connect the automation dots with automation, skills and workflows works.”

Like Pegasystems, Verint doesn’t talk about RPA anymore. In 2019, Verint talked a lot about RPA and has plenty of docs about the technology lying around.

For UiPath CEO Daniel Dines, the company's RPA heritage is an advantage and he hasn’t banned the acronym yet. "Our extensive installed base of robots and AI capabilities already operating autonomously across more than 10,000 customers gives us unparalleled insight into real enterprise processes and workflows where agents are a natural extension," said Dines on UiPath's fiscal first quarter earnings call. "We uniquely bridge deterministic automation or RPA and probabilistic automation or agentic, allowing customers to extend automation into more complex adaptive workflows."

UiPath has been building out its automation platform and launched UiPath Maestro, which aims to leverage AI agents, RPA and humans to orchestrate processes. UiPath’s first quarter results were better than expected due to traction for its agentic automation platform.

"There is a tremendous benefit of combining AI agents with robots. And when you go and decide on an AI genetic automation platform, it's a natural way to think maybe we should bring the robots into the same platform," said Dines. "Again, the benefits from the security and governance perspective and having agents and robots and managing humans also in the same platform are tremendous."

Naturally, Dines is going to say RPA has a role in automation given UiPath has a significant legacy business tied to the technology. However, I don't think he's off. AI agents aren't needed for everything when a robot will do fine. And none of these agents are going to work without process intelligence.

CxOs need to deliver value and chasing agentic AI when there are other tools that provide returns faster isn't a great blueprint. Is RPA going to see a renaissance? Probably not. But RPA is definitely worth keeping in the automation toolbox. A lot of those older, less buzzworthy technologies should stick around too.

Celonis, SAP reach data access cease fire amid litigation

SAP has agreed to not interfere with Celonis' data extractor for customer data until the litigation between the two companies has been resolved. In turn, Celonis has withdrawn its motion for a preliminary injunction.

As previously reported, the Celonis lawsuit against SAP is worth watching given agentic AI largely depends on enterprise technology vendors providing data access to other applications--and AI agents.  

SAP said: "SAP rejects Celonis’s claims and continues to seek dismissal of the case. In the meantime, SAP has agreed to maintain the status quo with Celonis and avoid confusion for the benefit of SAP’s customers. SAP will continue to evaluate its IP rights and take action as appropriate."

In March, Celonis sued SAP in federal district court in San Francisco alleging that SAP used anticompetitive practices to thwart Celonis' process mining applications. SAP owns Signavio, a process mining tool that's often combined with the ERP giant's platform.

Celonis alleged that SAP is using its ERP dominance to block Celonis in the process mining market.

Under the agreement, SAP will enable Celonis customers to access their own SAP data without additional fees or licenses.

P&G outlines Supply Chain 3.0, next digital transformation moves

Procter & Gamble's two-year restructuring includes a heavy dose of digital transformation, artificial intelligence and supply chain automation and optimization.

The headlines from Procter & Gamble's appearance at the Deutsche Bank Global Consumer Conference revolved around the company's move to cut 7,000 jobs. However, P&G Chief Financial Officer Andre Schulten and Chief Operating Officer Shailesh Jejurikar outlined the bigger picture.

P&G is navigating an uncertain economy, shifting tastes, inflation and tariffs. P&G's approach is to use productivity to fund growth initiatives over the years. Its next phase is focused on developing its systems and digital capabilities to support automation, data strategy, insights and analytics, said Schulten.

"We believe we now have the opportunity to step forward to enable the tremendous growth opportunities we have with an even more focused and efficient portfolio, supply network, and organization," said Schulten. "In fiscal 2026, we will begin a 2-year noncore restructuring program. This program includes 3 interdependent elements: portfolio choices, supply chain restructuring and organizational design changes."

P&G plans to shed brands in various categories, which will be outlined in more detail in July when the company reports its fiscal year-end results.

Here's a look at the high level transformation moves from P&G:

  • The company will right-size its supply chain, optimize production locations to drive efficiencies, speed up innovation and cut costs.
  • P&G will make roles broader and shrink teams as the company leverages automation and digitization.
  • P&G will cut up to 7,000 non-manufacturing roles, or 15% of its non-manufacturing workforce. Those job cuts are incremental to what P&G has outlined for its Supply Chain 3.0 savings.

"This restructuring program is an important step toward ensuring our ability to deliver our long-term algorithm over the coming 2 to 3 years. It does not, however, remove the near-term challenges that we currently face," said Schulten. "All the more reason to double down now on the integrated growth strategy that has enabled strong results over the past 6-plus years, executing our strategy and accelerating this opportunity, especially under pressure is our path forward."

According to Schulten, future success for P&G requires innovation for product, packaging, brand communication and marketing and retail execution. Schulten said all of those variables have to work together to deliver value. No one thing can carry the team. He explained:

"Superior performing products and superior packages provide noticeably better benefits to consumers. They become aware of and learn about these products through superior brand communication. This comes to life in stores and online with superior retail execution and deliver superior consumer value at a price that is considered worth it across each price tier in which we choose to compete."

Here's a look at the P&G plan:

Fund investment with productivity gains. Schulten said P&G can mitigate cost and currency headwinds by becoming more efficient across cost of goods sold to marketing to in-store and online execution.

Data and insights across the value chain. Jejurikar said the company was looking for "better consumer insights on what's required for the specific job to be done; integrated technical capabilities applied across formulated chemistry, assembled products and devices to deliver the superior solution; integrated communication across package, shelf, online and other channels; and at a value that balances price and performance for the consumer and the retailer."

Use AI to make the marketing and advertising spend more efficient. Jejurikar said the company is using programmatic and algorithm-based media buying to find consumers most likely to be receptive to messaging. "Our proprietary Consumer 360 data platform enables brand to use target audience algorithms to serve ads at the right frequency each week, all year round, more effective reach and more cost efficient," he said. P&G's media reach in the US has grown to 80% from 64% over five years. Reach in Europe is now 75%.

He added:

"We are driving advertising effectiveness, starting with superior consumer insights and leveraging AI as a tool to deliver superior content creation. We're improving efficiency using AI tools for ad testing improving quality, cost and speed. Ads can now be tested and optimized in just a few days versus weeks at 1/10 of the cost versus prior methods."

In-store optimization via data and computer vision. Jejurikar said P&G was combining point-of-sale data with millions of retail shelf images to optimize shelf design. P&G has proprietary tools that enable the company to analyze assortment online and offline.

Supply Chain 3.0. Jejurikar said P&G is looking to extend that supply chain data from suppliers to customers to retail shelves. "We are investing in advanced supply planning technologies to better anticipate consumer demand and adjust production and inventory levels, accordingly, helping minimize stock-outs overproduction and waste," said Jejurikar.

The supply chain transformation will have a heavy dose of automation in manufacturing sites. P&G is capturing images and visual data on manufacturing lines to improve quality.

According to Jejurikar, P&G's warehousing center of excellence will be the hub for the company's 50 distribution centers. This hub will coordinate warehouse activity from the moment a truck enters the gate until it leaves. This coordination is improving productivity 50% on indirect administrative work at each site.

Jejurikar also outlined the KPIs for Supply Chain 3.0:

  • 98% on-shelf and online availability.
  • Up to $1.5 billion before tax in gross productivity savings each year.
  • 90% of free cash flow productivity.

Those targets are in addition to savings of $1.5 billion in cost of goods sold and $500 million in marketing previously outlined.

Broadcom delivers strong Q2 on AI networking demand

Broadcom reported strong second quarter results amid "robust demand for AI networking."

The company, which is thriving as hyperscale cloud providers buy processors for AI workloads, delivered second quarter earnings of $4.96 billion, or $1.03 a share, on revenue of $15 billion, up 20% from a year ago. Non-GAAP earnings were $1.58 a share, two cents ahead of Wall Street estimates.

Broadcom CEO Hock Tan said AI revenue in the second quarter was up 46% from a year ago to $4.4 billion. "We expect growth in AI semiconductor revenue to accelerate to $5.1 billion in Q3, delivering ten consecutive quarters of growth, as our hyperscale partners continue to invest," said Tan.

CFO Kirsten Spears said the company's free cash flow was a record $6.4 billion, up 44% from a year ago.

Semiconductor revenue in the second quarter was $8.41 billion, up 17% from a year ago. Software infrastructure revenue, led by VMware, was $6.6 billion, up 25% from a year ago.

As for the outlook, Broadcom projected third quarter revenue of $15.8 billion with adjusted EBITDA of at least 66% of projected revenue.

Constellation Research analyst Holger Mueller said:

"Broadcom Is firing on all cylinders breaking the $15 bilion milestone in revenue by a hair. The demand for AI keeps growing Broadcom's semiconductor and infrastructure software businesses. It's good to see an extra $400 million invested into R&D, even if it cost Broadcom some revenue. CEO Hock Tan and team know Broadcom needs to invest into product to keep the growth going."

On the conference call with analysts, Tan said the following:

  • "Custom AI accelerators grew double digits year-on-year, while AI networking grew over 170% year-on-year. AI networking, which is based on Ethernet was robust and represented 40% of our AI revenue. As a standard-based open protocol, Ethernet enables one single fabric for both scale out and scale up and remains the preferred choice by our hyperscale customers. Our networking portfolio of Tomahawk switches, Jericho routers and NICs is what's driving our success within AI clusters in hyperscalers."
  • "We continue to make excellent progress on the multiyear journey of enabling our 3 customers and 4 prospects to deploy custom AI accelerators. As we had articulated over 6 months ago, we eventually expect at least 3 customers to each deploy 1 million AI accelerated clusters in 2027, largely for training their frontier models. And we forecast and continue to do so a significant percentage of these deployments to be custom XPUs. These partners are still unwavering in their plan to invest despite the certain economic environment."
  • "There's no differentiation between training and inference in using merchant accelerators versus custom accelerators. I think the whole premise behind going towards custom accelerators continues, which is it's not a matter of cost alone. It is that as custom accelerators get used and get developed on a road map with any particular hyperscaler, there's a learning curve, a learning curve on how they could optimize the way the algorithms on their large language models gets written and tied to silicon."
  • "Why inference is very hot lately--we're only selling to a few customers hyperscalers with platforms and LLMs--. is these hyperscalers and those with LLMs need to justify all the spending they're doing. Doing training makes your frontier models smarter. There's no question. Make your frontier models by creating very clever algorithms that consumes a lot of compute for training smarter. You want to monetize inference. And that's what's driving it. Monetize. To justify a return on investment on training you create a lot of AI use cases, and consumption through inference. And that's what we are now starting to see among our small group of customers."
  • "Customers are increasingly turning to VCF (VMware Cloud Foundation) to create a modernized private cloud on-prem, which will enable them to repatriate workloads from public clouds while being able to run modern container-based applications and AI applications. Of our 10,000 largest customers, over 87% have now adopted VCF."