Results

Nvidia outlines EU AI expansion, ecosystem, sovereign models

Nvidia outlined plans to scale AI factories and research hubs in Europe, expanded partnerships with Schneider Electric and Siemens, and broadened model choices for sovereign AI and NIM microservices.

Those high-level headlines from Nvidia GTC Paris were part of a broader stream of updates for the AI market in Europe, where Nvidia has more than 1.5 million developers. Nvidia also said European enterprises including Novo Nordisk, Siemens, Shell, BT Group, SAP, Nestlé, L'Oréal and BNP Paribas are adopting agentic AI. The company also touted adoption of its Nvidia Drive autonomous vehicle platform at Volvo, Mercedes-Benz and Jaguar as well as quantum computing efforts in the region.

Nvidia CEO Jensen Huang said during the GTC Paris keynote:

"Europe has now awakened to the importance of these AI factories, and the importance of the AI infrastructure.  I'm so delighted to see so much activity here. This is just the beginning."

Dion Harris, Senior Director of HPC and AI Factory Solutions at Nvidia, said:

"We're deeply integrated with upskilling and education, working with all of the top higher education and research institutions and the global systems integrators, Europe is poised to be a powerhouse in this new industrial revolution. The only thing is missing is infrastructure. Today, every nation needs to build AI infrastructure, and every company needs to build an AI factory."

Here's a breakdown of what Nvidia announced:

  • Schneider Electric and Nvidia expanded a partnership designed to accelerate the deployment of AI factories. The two companies will collaborate on reference designs, simulation, design and layout, and infrastructure and architecture for AI factories. They will also look to scale production of cooling systems in Europe and 800-volt direct current architectures.
  • A roster of European supercomputing centers and cloud service providers building Nvidia-based AI infrastructure.
  • European Nvidia AI Technology Centers in Finland, Sweden, Germany, UK, France, Italy and Spain. Nvidia is working with Italy to advance its sovereign AI efforts.
  • DGX Cloud Lepton integration with Hugging Face Training Cluster as a Service. Nvidia also said it is working with EMEA model builders and offering sovereign AI models via Perplexity Pro. According to Nvidia, each country in EMEA needs strong models that reflect its unique language and culture and operate in region.
  • Nvidia is working with Mistral AI to build a cloud platform powered by 18,000 Grace Blackwell systems.
  • As part of that expanded model selection, Nvidia said NIM microservices will have access to more than 100,000 open and custom models, including public and private LLMs hosted on Hugging Face.
  • NeMo AI agent additions including AI Safety Blueprint, Data Flywheel and Agentic AI Toolkit. Nvidia added that NIM and NeMo will be integrated into SAP Business AI.
  • Siemens and Nvidia will expand their partnership to accelerate AI capabilities in manufacturing with a focus on product design and engineering, production optimization, operational planning, digital twins and industrial edge computing. Nvidia's CUDA-X, RTX and Omniverse libraries will be integrated into Siemens' product portfolio.

  • Nvidia also announced how its GB200 NVL72 system is powering quantum computing workloads and simulations with European enterprises and research hubs.


OpenAI: New models, and chasing Altman’s superintelligence dream

OpenAI released o3-pro, which it calls its most capable model yet, for ChatGPT Pro and Team users as it cut prices for o3 by 80%. The moves come as OpenAI CEO Sam Altman ponders a 2030s in which the limits of energy and intelligence fall away and AI superintelligence becomes almost free.

That's a mouthful, but Altman and OpenAI are arguing that we're at an event horizon, an inflection point and probably a lot of revenue growth. For enterprises, the takeaway is that OpenAI pricing may be declining in exchange for volume. Also see: OpenAI's enterprise business surging, says Altman.

Let's recap the headlines:

  • OpenAI dropped o3 pricing by 80%. For developers, o3 may not be the latest and greatest, but it'll be good enough for many use cases.
  • OpenAI is scaling as it breaks away from its Microsoft partnership. Reuters reported that OpenAI is going to use Google Cloud for compute. That move would give OpenAI a multi-cloud approach that should meet its needs better than an exclusive with Microsoft Azure.
  • OpenAI launched o3-pro. The company said: "In expert evaluations, reviewers consistently prefer o3-pro over o3 in every tested category and especially in key domains like science, education, programming, business, and writing help. Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy."

That barrage of headlines, however, is overshadowed by Altman's blog, which laid out his latest thoughts on AI superintelligence, energy consumption and how many resources a ChatGPT query consumes today.

The post is worth a read. Here are a few takeaways.

Energy will be plentiful. Altman: "In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else."

But that ChatGPT usage isn't killing the environment today. Altman: "As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.)"
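
Those comparisons are easy to verify with a quick back-of-the-envelope check. In the sketch below, only the 0.34 watt-hours and 0.000085 gallons come from Altman's post; the oven and lightbulb wattages are assumed values for illustration.

```python
# Sanity check of Altman's per-query figures. Only the 0.34 Wh and 0.000085 gallons
# come from the post; the appliance wattages below are assumptions for illustration.
query_energy_wh = 0.34
query_energy_joules = query_energy_wh * 3600      # 1 Wh = 3,600 J -> ~1,224 J

oven_watts = 1_000        # assumed oven draw
bulb_watts = 10           # assumed high-efficiency bulb draw
print(f"Oven-seconds per query: {query_energy_joules / oven_watts:.1f}")       # ~1.2 s
print(f"Bulb-minutes per query: {query_energy_joules / bulb_watts / 60:.1f}")  # ~2.0 min

water_gallons = 0.000085
teaspoons_per_gallon = 768                        # 1 US gallon = 768 teaspoons
water_teaspoons = water_gallons * teaspoons_per_gallon
print(f"Water per query: ~1/{1 / water_teaspoons:.0f} of a teaspoon")           # ~1/15
```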

The self-reinforcing loops have already started and what was novel months ago is now routine. Altman: "The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off."

Humans will adapt: Altman: "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big."

Altman does note the challenges. He said society will have to solve "the alignment problem" where we can guarantee "AI systems learn and act towards what we collectively want." Altman said society will also have to make sure superintelligence is cheap and not concentrated with any one person. The world needs to start a conversation about what the boundaries are and get aligned.

My take

  1. Altman's take that cost and scale will be solved is believable, but we can debate the timeline for sure. Can energy grids be revamped in 5 years?
  2. The concept that society is going to have a reasonable discussion about superintelligence and get alignment on what we collectively want from AI is naive if not batshit crazy. Governments on a global basis barely function now and there's a shortage of consensus.
  3. Societal impacts are glossed over throughout the post. Altman's take that humans will adapt may apply to a sliver of the population.
  4. This quote made me chuckle: "In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes."
  5. This quote struck me as blasé: "We will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example). Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other. People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don’t care very much about machines."
  6. Either way, Altman's right that potentially wonderful and wrenching change is coming. Both can be true.

CR CX Convos: Live from PegaWorld 2025 with Matt Nolan

Don't miss the latest CR CX Convo covering the future of marketing decisioning...

Tuning in LIVE from #PegaWorld2025, Constellation analyst Liz Miller and Pegasystems' Matthew Nolan discuss the evolution of #marketing beyond traditional campaigns. 

Key takeaways:

📌 Marketing is shifting from sales-driven to customer-outcome focused
📌 AI and decisioning are transforming how brands engage customers
📌 The goal: Create relevant, personalized experiences that truly matter

Marketers aren't just sending campaigns anymore - they're becoming strategic growth architects who leverage data and AI to drive meaningful connections.

Watch the full conversation!


Cisco tunes network portfolio, gear for AI agents, introduces AgenticOps

Cisco outlined a series of AI infrastructure, security and software products designed to support AI agents, hyperscale data centers and enterprises across various workloads.

The upshot from Cisco Live in San Diego is that the networking giant is reordering its stack for AI workloads.

Cisco outlined AgenticOps, the company's AI-driven approach to running operations that combines telemetry, automation and domain knowledge. Cisco AgenticOps is powered by Deep Network Model, a network-focused LLM, and Cisco AI Assistant, which identifies issues and root causes and automates workflows.

The company also launched AI Canvas, an interface for customer dashboards that enables network, security and development operations teams to collaborate and optimize.

Also see: Cisco delivers strong Q3 amid AI infrastructure, security traction

Jeetu Patel, President and Chief Product Officer, Cisco, said: "As billions of AI agents begin working on our behalf, the demand for high-bandwidth, low latency and power efficient networking for data centers will soar."

Here's a look at what Cisco announced at Cisco Live:

  • Unified management of Cisco platforms including ACI, NX-OS and other systems with dashboards, policies and controls. Cisco launched the Unified Nexus Dashboard, which consolidates these services into a single console.
  • Cisco Intelligent Packet Flow, which steers traffic using real-time telemetry and congestion data across AI networks. The service has visibility across networks, GPUs and distributed AI jobs.
  • Cisco and Nvidia are unifying architectures and outlined their first technical integration of Cisco G200-based switches and Nvidia NICs. The companies also demonstrated Nvidia Spectrum-X Ethernet networking based on Cisco Silicon One.
  • The company expanded AI PODs to support Nvidia's release cadence. Nvidia RTX Pro 6000 Blackwell Server Edition GPU is available to order with Cisco's UCS C845A M8 servers. Cisco and Nvidia will work together on validated systems for the Cisco Secure AI Factory with Nvidia.
  • Cisco AI Defense and Cisco Hypershield are now included in the Nvidia Enterprise AI Factory validated design.
  • Cisco AI Defense can secure AI agents with open models and optimized with Nvidia NIM and NeMo microservices.
  • The company has embedded its security offerings into its networking gear, building zero trust and observability into the network, adding a new generation of firewalls (6100 Series, 200 Series) and tightening integration with its Splunk unit.
  • Cisco is bringing together Meraki and Catalyst into one unified management platform for next-gen wireless, switching, routing and industrial networks across all platforms.
  • ThousandEyes and Splunk are now integrated for network to application visibility.
  • Cisco launched new Cisco C9350 and C9610 Smart Switches for campus networks and 8100, 8200, 8300, 8400 and 8500 Secure Routers, which integrate with Cisco's security portfolio.
  • The company launched Cisco Wireless 9179F Series Access Points for campus networks.
  • Cisco rolled out a series of rugged switches for industrial AI use cases.

 


IBM outlines quantum computing roadmap through 2029, fault-tolerant systems

IBM updated its quantum computing roadmap, which culminates in IBM Quantum Starling, a large-scale fault-tolerant quantum system due in 2029.

Big Blue said IBM Quantum Starling will be delivered by 2029 and installed at the IBM Quantum Data Center in Poughkeepsie, New York. That system is expected to perform 20,000 times more operations than today's quantum computers.

For IBM, Quantum Starling will be the headliner of a fleet of quantum computing systems. IBM CEO Arvind Krishna said the company is leaning into its R&D to scale out quantum computing for multiple use cases including drug development, materials discovery, chemistry, and optimization. IBM also recently outlined flexible pricing models for quantum computing to expand usage and upgraded its Quantum Data Center to its latest Heron quantum processor.

The news lands as quantum computing players outline plans to scale organically or via acquisition. IonQ just announced its plans through 2030 and quantum computing vendors have been laying out plans throughout 2025.

IBM said Starling will be able to run 100 million quantum operations using 200 logical qubits. A logical qubit is a unit of an error-corrected quantum computer tasked with storing one qubit’s worth of quantum information. Quantum computers need to be error corrected to run large workloads without fault.

Starling will also be a foundation system for IBM Quantum Blue Jay, which will be able to run 1 billion quantum operations over 2,000 logical qubits.
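
Using only the figures above, a quick back-of-the-envelope check shows what those targets imply; the "operations per logical qubit" ratio below is just arithmetic on the cited numbers, not an IBM specification.

```python
# Back-of-the-envelope check using only the figures cited above.
starling_ops, starling_logical_qubits = 100_000_000, 200
blue_jay_ops, blue_jay_logical_qubits = 1_000_000_000, 2_000
starling_vs_today = 20_000   # Starling is expected to run 20,000x more operations than today

# Implied operation count for today's systems.
print(f"Implied operations today: {starling_ops / starling_vs_today:,.0f}")              # ~5,000

# Both targets work out to the same depth per logical qubit.
print(f"Ops per logical qubit, Starling: {starling_ops // starling_logical_qubits:,}")   # 500,000
print(f"Ops per logical qubit, Blue Jay: {blue_jay_ops // blue_jay_logical_qubits:,}")   # 500,000
```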

To get to fault-tolerant scale, IBM is building an architecture that can prepare and measure logical qubits, apply universal instructions and decode measurements from logical qubits in real time. This architecture, which was outlined in two research papers, also has to be modular and energy efficient.

Here's how IBM is going to get to Starling and beyond:

  • 2025: IBM Quantum Loon will launch to test architecture components for quantum low-density parity check (qLDPC) codes, which reduce the number of physical qubits needed for error correction and cut overhead by about 90% (see the illustrative calculation after this list).
  • 2026: IBM Quantum Kookaburra will feature a modular processor to store and process encoded information and combine quantum memory and logic operations.
  • 2027: IBM Quantum Cockatoo will feature two Kookaburra modules that will link quantum chips together like nodes in a larger system.
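
To make the roughly 90% overhead reduction concrete, here is a small illustrative calculation; the baseline of 1,000 physical qubits per logical qubit is a hypothetical assumption for the sketch, not an IBM figure.

```python
# Illustrative only: the 1,000-physical-qubits-per-logical baseline is a hypothetical
# assumption; the ~90% reduction and 200-logical-qubit Starling target come from the text.
baseline_physical_per_logical = 1_000
qldpc_physical_per_logical = baseline_physical_per_logical * (1 - 0.90)   # ~90% less overhead

starling_logical_qubits = 200
print(f"Physical qubits per logical qubit with qLDPC: {qldpc_physical_per_logical:.0f}")
print(f"Implied physical qubits for Starling: "
      f"{qldpc_physical_per_logical * starling_logical_qubits:,.0f} "
      f"(vs. {baseline_physical_per_logical * starling_logical_qubits:,} at the assumed baseline)")
```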

Holger Mueller, an analyst at Constellation Research, said:

"Sometime in the last 6 months quantum vendors realized that they will not be able to produce enough qubits for real world use cases and are focusing on error correction. What is unique with IBM is that it's modular approach has led to the realization that there are challenges to overcome when putting multiple quantum computers together, hence a roadmap change and a focus on qLDPC based couplers, with 'Loon', coming this year. Next year will then be the showcase and proof point that all of this works with IBM Quantum Kookaburra. Kudos go to IBM for laying out its roadmap further, all the way to its Starling system, allowing CxOs to align their quantum uptake plans."


 


IonQ acquires Oxford Ionics for $1.07 billion, gets quantum-on-a-chip technology

IonQ said it will acquire UK-based Oxford Ionics in a deal valued at $1.075 billion, paid mostly in stock along with $10 million in cash. The deal is designed to accelerate IonQ's quantum computing roadmap and establish a global hub for research and development.

The purchase is IonQ's largest to date. Niccolo de Masi, IonQ CEO, said in a statement that the Oxford Ionics purchase will "set a new standard within quantum computing and deliver superior value for our customers through market-leading enterprise applications."

According to IonQ, Oxford Ionics will bring complementary technology to the company. IonQ focuses on trapped-ion systems, and Oxford Ionics holds world records in fidelity, which measures the accuracy of quantum operations. The game plan for the combined company is to provide an integrated quantum computing stack that pairs IonQ's quantum computing, applications and networking with Oxford Ionics' ion-trap technology, which is manufactured on standard semiconductors.

Oxford Ionics' ion-trap-on-a-chip technology is expected to "accelerate IonQ’s commercial quantum computer miniaturization and global delivery." Oxford Ionics founders, Dr. Chris Ballance and Dr. Tom Harty, are expected to remain with IonQ after the acquisition is completed.

In an SEC filing, IonQ said it will issue up to about 35 million new shares to pay for the deal. The company said:

"The number of shares of Common Stock to be issued will not be less than 21,143,538 or more than 35,241,561. The final number of shares of Common Stock to be issued as Transaction Consideration will be calculated using the volume-weighted average price for shares of Common Stock for the 20 trading days immediately preceding, but not including, the third business day prior to the date of the Closing, but will not be more than $50.37 per share or less than $30.22 per share."

Ballance said Oxford Ionics' quantum chip can be manufactured in standard semiconductor fabs. "We look forward to integrating this innovative technology to help accelerate IonQ’s quantum computing roadmap for customers in Europe and worldwide," said Ballance.

IonQ has been on an acquisition spree of late with an emphasis on quantum networking. The acquisition of Oxford Ionics is designed to bring scale to compute and use cases in materials science, drug discovery, logistics, financial modeling and defense.

Here's a look at IonQ's acquisitions.

IonQ has also been expanding its global footprint and Oxford Ionics will be a beachhead in the UK.

Oxford Ionics outlined its roadmap last month with a plan that features enterprise-grade quantum computing by 2027. IonQ's roadmap features a similar timeline, and the company will bring an established customer base and sales team to better commercialize Oxford Ionics' technology.

In a statement, IonQ said the combined company plans to build systems with 256 physical qubits with 99.99% accuracy by 2026 and scale to 10,000 physical qubits with logical accuracy of 99.99999% by 2027. Ultimately, IonQ wants to hit 2 million physical qubits in quantum computing by 2030.

IonQ and Oxford Ionics held a technology overview call featuring Ballance and Dr. Mihir Bhaskar, CEO of recently acquired Lightsynq. De Masi touted a recent use case involving AstraZeneca, IonQ, Nvidia and AWS and added that Oxford Ionics will accelerate commercial usage.

"IonQ, Lightsynq and Oxford Ionics will create the winning quantum computer in each year and every era of quantum computing," said de Masi.

Dean Kassmann, SVP of engineering and technology at IonQ, said the company's latest acquisitions will "represent a significant acceleration of our planned development work to realize our vision to build the world's best quantum computers to solve the world's most impactful and complex problems."

Kassmann also outlined IonQ's roadmap with Oxford Ionics in the fold.

Ballance said Oxford Ionics' approach of leveraging existing semiconductor technologies alongside quantum computing will be scalable. Ballance said:

"We have a clear path to apply this to systems with 10s of 1000s of qubits in a single chip, and we've been to be working on our 256 qubit quantum processor units. Our technology allows us to scale devices to millions of qubits by building bigger and bigger chips. But what's more, these chips can be networked by photonic interconnects to allow for distributed compute."

Constellation Research analyst Holger Mueller said:

"The road to commercial quantum uses cases goes this way: (a) more qubits, (b) better error correction or (c) a combo of both. For a decade the industry was squarely rooted in more qubits. More recently it's about error correction, which means that vendors think they have sufficient qubits. IonQ is the perfect example of saying it's a combo of both to enable more sophisticated quantum use cases."


Apple's WWDC 2025: Apple Intelligence leaves a void as execs go redesign happy

Apple executives acknowledged at the company's Worldwide Developer Conference (WWDC) that Apple needs more time to make Apple Intelligence work well. In the meantime, Apple executives outlined Liquid Glass, a redesign that'll flow through Apple devices.

The company also announced new naming conventions for iOS, watchOS, tvOS, macOS, visionOS and iPadOS.

If anything, Apple's developer keynote highlighted how Apple Intelligence, outlined in 2024 with great fanfare, has fallen short of expectations. Craig Federighi, SVP of Software Engineering at Apple, noted how Apple Intelligence did ship features including email and notification summarization, notes, smart replies and ways to clean up video and photos.

Federighi said:

"We delivered this while taking an extraordinary step forward for privacy and AI with private cloud compute, which extends the privacy of your iPhone into the cloud so no one else can access your data, not even Apple. We also introduced enhancements that make Siri more natural and more helpful, and as we've shared, we're continuing our work to deliver the features that make Siri even more personal. This work needed more time to reach our high quality bar, and we look forward to sharing more about it in the coming year."

He said that Apple Intelligence will get more languages, but the approach is incremental for now. "We're making the generative models that power Apple Intelligence more capable and more efficient, and we're continuing to tap into Apple Intelligence in more places across our ecosystem. Throughout today's presentation, you'll see new Apple Intelligence features that elevate your experiences across iPhone, Apple Watch, Apple Vision Pro, Mac and iPad. Plus, this year, we're doing something new, and we think it's going to be pretty big. We're opening up access for any app to tap directly into the on-device model," said Federighi.

As for LLMs, Apple Intelligence will have a new foundation model framework that gives developers direct access with privacy and offline access built in.

"We think this will ignite a whole new wave of intelligent experiences in the apps you use every day. For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," said Federighi. "And because it uses on device models this happens without cloud API costs. We couldn't be more excited about how developers can build on Apple intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

With that Apple Intelligence tease--one that may not become reality until 2026--Apple moved on to other key items, including an iOS redesign that already has Windows Vista trending due to the resemblance.

A few thoughts:

  • Apple Intelligence appears to be banking on local LLMs, but given the rate of innovation, that could be a mistake. Developers will need to figure out how much they can differentiate with Apple’s on-device model.
  • Key details about Apple Intelligence's approach for developers are lacking.
  • Apple is clearly betting on privacy as a pitch for AI, but it's unclear whether focusing on its edge devices will keep pace with OpenAI, Google, Anthropic and Microsoft to name a few.
  • Apple does appear to be integrating OpenAI's ChatGPT across its applications. For instance, Apple said ChatGPT image generation is now available in Image Playground.
  • It's quite possible that Apple will have to buy its way out of this AI pickle, but historically the company hasn't made big acquisitions. The valuations are stretched for foundational model players.
  • The risk here is that Apple devices are merely a vessel for other companies' AI no matter how pretty the operating systems become.

Other news items:

  • Apple's WWDC keynote was devoted to the redesign and the changes across devices looked strong. However, much of what Apple is proposing is in the latest Android today. See: Apple release on redesign.
  • Developers are getting new APIs for location, enhancements to notifications in Apple Watch, App Intents, and other goodies.
  • Visual Intelligence will pull up context across iPhone apps to give you more information, ratings and other data.

 


Snowflake Summit 2025: The AI-Native Data Foundation Gets Real

Last year, Snowflake made a bold pitch for an "AI Data Cloud." This year, they got down to business.

​​"We want to empower every enterprise on the planet to achieve its full potential through data and AI. And AI, and we think this moment makes it possible more than any such moment we have seen in a decade or even two decades." - Sridhar Ramaswamy, Summit 2025

CEO Sridhar Ramaswamy's message was clear: the age of AI-native data isn't coming—it's here. And with it, enterprise leaders no longer need to stitch together data lakes, BI tools, and AI pipelines themselves. Snowflake's platform is evolving to make data AI-ready from the moment it's created—and to empower business and technical users alike to act on it, securely and at scale.

While less flashy than 2024's announcements, this year's updates show Snowflake delivering the operational muscle to begin making Sridhar's vision a reality: faster performance, real integration, and tangible outcomes that will make or break AI investments.

1. AI-Native Analytics, Operationalized

Delivering decision intelligence and automation at scale, turning data into action through governed, AI-powered workflows accessible to analysts and agents alike.

What's new:

  • Cortex AISQL: AI callable in SQL—functions like AI_FILTER, AI_AGG, and AI_EXTRACT add the ability to handle unstructured text, images, and audio atop structured queries (a hedged usage sketch follows this list).
  • Snowflake Intelligence: Natural language agents to access structured/unstructured data.
  • Semantic Views: Define business logic/metrics in a reusable, governed layer—so SQL users, dashboards, and AI agents all use consistent logic.
  • AI Observability: Trace AI behavior, audit responses, validate, and cite model outputs.
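
To make the Cortex AISQL idea concrete, here is a minimal, hypothetical sketch of calling such functions from Python via the snowflake-connector-python package. The table and column names, the connection placeholders, and the exact AI_FILTER/AI_AGG argument shapes are assumptions for illustration only; check Snowflake's documentation for the actual signatures.

```python
# Hypothetical sketch: invoking Snowflake Cortex AISQL functions from Python.
# The table/column names and the AI_FILTER / AI_AGG argument shapes are assumptions;
# consult Snowflake's documentation for the real signatures.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account",       # placeholder credentials
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
)

query = """
SELECT
    product_id,
    -- Summarize free-text reviews per product (assumed usage of AI_AGG).
    AI_AGG(review_text, 'Summarize the main complaints in one sentence') AS complaint_summary
FROM customer_reviews
-- Keep only rows the model judges to be about shipping problems (assumed usage of AI_FILTER).
WHERE AI_FILTER('Is this review about a shipping problem? ' || review_text)
GROUP BY product_id
"""

with conn.cursor() as cur:
    cur.execute(query)
    for product_id, summary in cur.fetchall():
        print(product_id, summary)

conn.close()
```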

Why it matters:

  • BI-to-AI leap: Analysts get access to advanced models without leaving SQL. It flattens the skill gap and speeds delivery.
  • Shifts BI from dashboards to conversations with your data, giving business functions an AI assistant to explore your data in Snowflake.
  • Trust first: From semantic models to cited answers, Snowflake treats transparency and observability as part of the core Snowflake architecture.
  • Raises the question already being asked around BI/analytic investments: how can we empower analysts and decision-makers to safely leverage AI without introducing new silos or trust gaps?

2. Accelerated Data Modernization & Engineering

One of the top initiatives shared by customers and partners at the Summit was modernization — migrating and integrating diverse data sources, including structured, unstructured, and real-time data, into a single, governed platform. Snowflake listened.

What's new:

  • OpenFlow (GA): Enterprise-grade, NiFi-based integration engine—handles batch, real-time, and unstructured ingestion.
  • SnowConvert AI: Uses AI to automate code conversion, validation, and migration workflows from legacy warehouses.
  • Workspaces & dbt-native Support: A single pane for SQL, Python, Git-integrated pipelines.

Why it matters:

  • Integration plane arrives: Snowflake isn't just a warehouse—it now competes in the ETL/iPaaS layer with observability, unstructured support, and BYOC options.
  • Legacy migration simplified: Snowflake removes excuses for staying on legacy platforms, especially when performance or interactivity on the data requires data centralization.
  • Future-proofing pipelines: Snowflake is positioning itself as the one-stop shop for modernizing legacy data infrastructure and the AI workflows that come next.
  • With the number of data projects ongoing, data and AI leaders must answer the question: How do we modernize legacy systems without creating brittle pipelines or compromising migration quality?

3. Adaptive Compute: Simpler, Smarter, Cheaper

As the category matures, enterprises naturally shift to prioritize interoperability, simplification, and managed cost against unpredictable AI and analytics workloads. These considerations consistently ranked among the top 5 requests from my conversations with customers and partners during the Summit.

What's new:

  • Adaptive Compute Warehouses: Policy-based runtime management. Snowflake dynamically assigns compute—no sizing or tuning needed.
  • Gen2 Warehouses: 2.1x–4.4x faster performance with new hardware/software blends.
  • Unified Batch & Streaming Support: streaming ingestion with up to 10 gigabytes per second with 0.5 to 10 second latency without building separate pipelines or introducing external tools.
  • Multiple FinOps tools: Capabilities for reviewing workload performance, monitoring spending anomalies and setting tag-based spending limits for resources, all aimed at minimizing FinOps overhead.

Why it matters:

  • Less infrastructure babysitting: Auto-scaling and intelligent routing reduce platform overhead and FinOps headaches, ensuring performance without waste.
  • Foundation for unpredictable AI: Agentic and ML workloads are bursty and need near-real-time data. Adaptive Compute removes the configuration burden.
  • The question posed by data and AI leaders was how to make platforms more responsive to the growing set of AI and analytics workloads without increasing operational complexity. For more mature leaders, the question was how AI would drive operational efficiency to free resources for the next initiatives.

4. Interoperability & Governance: A Unified Control Plane for the AI Era

With AI agents and self-service analytics growing rapidly, a unified orchestration layer is essential to enforce trust, consistency, and visibility across users, tools, and data products.

What's new:

  • Iceberg & Polaris Interop: Full read/write support for open table formats and external catalogs.
  • Horizon Copilot & Universal Search: Natural language access to lineage, metadata, and permissions for all data unified in Snowflake.
  • AI Governance Frameworks: Native identity, MFA, and audit controls extended to agents and hybrid environments.

Why it matters:

  • Centralize to an open lakehouse to then orchestrate: As data use cases multiply—from self-service analytics to autonomous agents—data and analytic leaders need a single orchestration layer to manage access, visibility, and consistency.
  • Governance as infrastructure: Horizon makes governance user-facing and operational—so rules aren't just enforced, they're discoverable and explainable across all your mapped data assets.
  • Interoperability is execution: Supporting Iceberg, Polaris, and open catalogs ensures flexibility without fragmentation—an essential trait as AI workloads touch more business domains.

Final Take: The Battle for the Data and AI Foundation Is Now in Its Next Phase

Satya Nadella said it best: "We are in the mid-innings of a major platform shift." (NOTE: you can read my summary of MSFT Build 2025 here)

If that's the case, Snowflake just stepped up to the plate and knocked a double into deep Data and AI platform territory.

Snowflake's announcements aren't just analytics with AI features—they mark the emergence of a unified execution layer where data, decisions, and intelligent agents converge to support a decision-centric architecture. The battlefield has shifted. It's not about who stores data better—it's about who activates it faster, governs it smarter, and enables AI to drive real outcomes.

With Cortex agents, Adaptive Compute, and OpenFlow integration, Snowflake is betting that the AI-native data foundation will be the next cloud OS. And in this next inning, the winners will be the ones who can orchestrate, not just visualize, enterprise intelligence.

For CDAOs, the implications are practical: Modernization no longer requires trade-offs between trust, scale, and agility. The race is no longer about who can analyze—it's about who can execute. And Snowflake's AI-native data foundation has just moved the line on execution, making it faster, safer, and more complete.

 

If You Want More on Snowflake Summit 2025

Related articles for more information:

  • Watch Holger Mueller and me break down Snowflake Summit 2025: the customer/partner buzz from the floor, the OpenFlow acquisition, and Enterprise implications. Watch the recap here: https://bit.ly/443yPvp
  • Snowflake makes its Postgres move, acquires Crunchy Data: bit.ly/4kvrzOG

Qualcomm acquires Alphawave for $2.4 billion

Qualcomm said it will acquire Alphawave IP Group, a UK company, for $2.4 billion in a move that will give it assets to speed up its data center expansion.

The purchase will give Qualcomm IP in high-speed connectivity and compute for its Qualcomm Oryon CPU and Hexagon NPU processors. In May, Qualcomm outlined its data center CPU efforts and said it will connect with Nvidia's NVLink Fusion initiative.

Qualcomm said it can bring low-power inferencing to data centers as well as custom processors for AI workloads.

Cristiano Amon, CEO of Qualcomm, said Alphawave will bring complementary technology for its CPUs and NPUs. "The combined teams share the goal of building advanced technology solutions and enabling next-level connected computing performance across a wide array of high growth areas, including data center infrastructure," said Amon.

The deal is expected to close in the first quarter of 2026.


Will Technology Convergence Crush or Celebrate the Contact Center?

A contact center at a crossroads is nothing new. It seems that every time there is a technological evolution, the contact center faces disruption. That is especially true in this age of artificial intelligence that is transforming experiences for customers and employees alike. AI has redefined efficiency, encouraging a streamlining of technology estates and data stores and forever shifting where and how engagement, work and collaboration happen.

The clarity of the high-fidelity signal gathered directly by the contact center from the customer can now be harnessed, analyzed and shared across the enterprise, in large part, thanks to AI. The responses, reactions and skills demonstrated by service representatives that add to the durability of customer relationships can be captured and added to an organization’s knowledge repository. For the contact center, AI has delivered operational efficiencies long dreamed about. As quickly as AI has reinforced the strategic importance of the contact center, it has also invited questions from technology leaders looking to streamline communications technology stacks and avoid overly complicated and customized investments.

This intersection requires a decision and has elevated key questions around need, infrastructure and intention. Why are the “as a service” offerings around communications—contact center as a service (CCaaS), unified communications as a service (UCaaS), marketing automation, sales engagement, customer service management, et al.—so segmented? Why don’t organizations think of customer communications as a form of customer collaboration? Why does it take five platforms, three swivel chairs and endless patience all in the name of customer relationships? Why is this entire vision so complicated, costly and inefficient?

Pressure is mounting to not just justify the costs of doing business, but to rein in the total cost of technology ownership. As AI experimentation gives way to scaled AI implementation, there is an expectation for AI infusion into every aspect of business. Thinking of AI in the organization, let alone the contact center, as an add-on or accessory misunderstands the true power of AI. The expectation is for AI to be woven into the very fabric of work. This demands new strategies around data, workflows and infrastructure…and this demand turns into a pressure that is both top down (as C-suite leaders and boards ask about AI progress) and bottom up (as employees wonder why AI tools are not as readily available for work as they are in their personal lives).

The unintended outcome of this surge of efficiency is a reexamination of communications stacks and structures, pushing markets once segmented by dialers, inbound or outbound actions and where calls happened to consolidate into more elegant, cloud native and fully connected systems.

The ask has become to focus on how the people who engage most directly and immediately with customers work as opposed to where they work. Instead of discussing in-office versus remote work, the contact center has recentered on people over places. This new strategy for efficiency looks at how work is done and how that work can be enhanced and decisions accelerated thanks to automation and AI. While service reps are being empowered with seamless automated support to their work, customers are being encouraged to engage at their will, in the channels of their choosing. The goal of the contact center should be to streamline the work of the service rep to intentionally take the work out of being a customer.

Advancing the Contact Center With AI

While technology convergence is inevitable, the contact center won’t be destroyed. Convergence has, however, made choosing a path forward an imperative. So where can contact center leaders start down the right path?

Rethink from the outside in. Despite its absolute and critical role in customer engagement and experience delivery, the contact center is often developed as an inside-out strategy, focused on meeting operational goals that traditionally put the business at the center and work outwards to mold the customer’s experience around those goals. This is where a legacy mindset of shorter call times, call deflection and other organization-first goals and business outcomes has won. But now, thanks to AI, these same operational efficiency goals can be achieved while putting the customer at the center of decision making and strategy.

No customer wants to spend MORE time on the phone with a service representative. Speed and efficiency in managing simple concerns and requests is just as important to the customer as it is to the service team. With generative AI, this speed of decisions and engagements can happen in real time, in the customer’s context and fully attuned to the customer’s journey. The partnership between service reps and their Copilots is immersive and conversational, with a capacity to deeply understand the customer, the business and the individual service rep to boost productivity while simultaneously boosting engagement and customer service.

Modern examples of this include Microsoft’s Copilot and its AI-native portfolio for service. The Copilot capabilities embed across Dynamics 365 Customer Service and Dynamics 365 Contact Center, integrating with CRM and enterprise data resources to assist service reps and streamline workflows. Rather than disrupting work because of a customer, these workflows carry the customer’s needs, expectations, history and voice into the organization and turn that into powerful assistance and agentic processes.

Turn customer obsession into a passion for value. There is often a mantra that everyone in business should be “obsessed” with the customer. But as in life, obsession can go horribly wrong as it presents as fixation or delusion. Instead, the contact center has a unique opportunity to leverage its keen understanding of the customer and context to establish workflows and automations that focus on how the organization can both proactively and reactively deliver value.

Autonomous service works best when the business outcome and customer value exchange are aligned. Call deflections hold value to the business, shifting customers into more cost-effective self-service digital experiences. But they are only a valued experience if the customer achieves their goals in a manner they expect. Architecting value-first autonomous workflows starts from the customer and tracks their engagements back to the contact center, understanding where, how, when and why the customer is engaging. Being obsessed with a customer could mean knowing everything about that person, but it does not guarantee the ability or capacity to act. Value delivery is rich with empathy but also compels the service rep and the business to help change the status quo of the customer.

Become the center point for market and customer knowledge. Knowing more about the customer and their definition of value shouldn’t be locked away in a contact center solution. Thanks to AI’s capacity to ingest, curate and contextualize customer conversations to better understand the meaning, intention and reality of a customer’s connection, contact centers have become exceptionally comfortable and confident in their ability to synthesize customer voice into a real-time intelligence asset.

Automating time-consuming tasks like call summaries and follow-up emails is the performance-driving starting point. The bigger opportunity is to establish an enterprise-wide strategy where AI can surface unified intelligence across the entire organization: the service rep should have access to shipping and supply chain information that impacts the customer, while supply chain and shipping teams should have access to information about potential points of customer friction and expectations. The sharing of contact center-driven intelligence should be a bi-directional exchange across the modern enterprise.

Shifting the Conversation from Convergence to AI Acceleration

The future of the contact center deserves to be easier, with less heavy lifting and less wear and tear on the people brought to power experience strategies. AI has the capacity to help lift that load and deliver the efficiency and productivity the contact center has always expected and craved. But that is just the first stage of AI maturity and value! There is much more to be achieved, especially as the technology paths and platforms continue to converge. As the walls come down between internal and external communications systems, this concept of an experience of collaboration can be achieved. Shared intelligence, shared understanding and shared experience delivery doesn’t need to be trapped in an individual system or dependent on a single interface or presentation layer. Insight, intelligence and the manifestation of the customer hosted in the form of data can and should easily intersect with the knowledge an organization curates about the business and about the products being consumed.

Thanks to AI the current technology convergence can be an opportunity for simplification and not an assured fate of collapse or pressure-induced failures. Collaboration and communication can intertwine and accelerate the value all parties realize. The real beauty of AI is that thanks to its capacity to ingest, analyze and normalize complex data sources and types, it can extrapolate far beyond human capacity. So let the convergence begin! May it not kickstart an era of rip-and-replace or the stagnation and fear that some of the recent cloud migrations and infrastructure modernizations of the past revealed. Instead, let this vision of communications, collaboration, AI and the customer help make this adventure called the work of service be just a bit easier, more valuable and seamlessly connected.
