Data Lakes, AI Agents, and Enterprise Transformation | CRTV Episode 107

In the latest episode of ConstellationTV, co-host analysts Holger Mueller and Liz Miller kick off by covering #enterprise tech news. Their analysis includes the #agenticAI frameworks race heating up, vendors competing on #data integration and developer velocity, and #Oracle's capex investments signaling tech transformation.

Next, Liz sits down with Pegasystems product marketing leader Tara DeZao for a CR #CX Convo at PegaWorld 2025. A few key takeaways from their convo include:

- Marketers learning to partner with #AI, not fear it
- AI as a collaborative tool for content creation
- Focusing on customer journey optimization
- Breaking down organizational silos through intelligent workflows

Finally, Holger interviews Miran Badzak and Edward Calvesbert from IBM about the launch of Watsonx.data, a hybrid lakehouse supporting structured and unstructured data. They share how IBM's Db2 introduces vector embedding and similarity search capabilities. Other topics include:

- AI-powered database management tools
- #Quantum computing roadmap taking shape
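The vector similarity search now in Db2 boils down to finding the stored embedding closest to a query embedding. A minimal sketch of that idea (toy 4-dimensional vectors standing in for real embeddings, which have hundreds of dimensions; this illustrates the concept, not IBM's implementation):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product normalized by the vectors' magnitudes.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query, embeddings):
    # Return the index of the stored embedding most similar to the query.
    scores = [cosine_similarity(query, e) for e in embeddings]
    return int(np.argmax(scores)), max(scores)

# Toy "document" embeddings.
docs = [np.array([1.0, 0.0, 0.0, 0.0]),
        np.array([0.9, 0.1, 0.0, 0.0]),
        np.array([0.0, 0.0, 1.0, 0.0])]
idx, score = nearest(np.array([1.0, 0.05, 0.0, 0.0]), docs)
```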

Watch the full episode to learn about the future of enterprise #technology! 
__

00:00 - Introduction
00:45 - Enterprise Tech News
17:17 - CX Convo with Tara DeZao, Pegasystems
28:42 - Interview with Miran Badzak and Edward Calvesbert

On ConstellationTV <iframe width="560" height="315" src="https://www.youtube.com/embed/MbigkpWoPzI?si=AyZUx_yF8m8hqQD2" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

OpenAI vs. Microsoft: Why a breakup could be good

OpenAI and Microsoft appear headed toward rivalry as they increasingly tilt away from the frenemy-and-partner model that has paid off so well for both companies.

A year ago, we riffed on the growing debate about whether OpenAI and Microsoft were symbiotic or becoming frenemies. Based on recent news, it appears the two companies may be blowing right past frenemies to become rivals.

Here's a recap of what has transpired in recent days:

  • The Financial Times reported that Microsoft was prepared to walk away from OpenAI talks if they can’t agree on critical issues. Microsoft has access to OpenAI’s technology until 2030.
  • The Wall Street Journal reported that OpenAI and Microsoft tensions are boiling over. OpenAI wants to lessen Microsoft's distribution power over its AI portfolio and get buy-in on a plan to convert from a non-profit and go public.
  • OpenAI CEO Sam Altman is chasing superintelligence--much to Mark Zuckerberg and Meta's chagrin--and needs more compute. Reuters reported that OpenAI was even in talks with Google Cloud for capacity. OpenAI already leverages Oracle Cloud Infrastructure and Microsoft Azure, which used to exclusively provide infrastructure to the LLM giant.
  • OpenAI's enterprise business is surging as companies buy direct for LLMs and AI agents, said Altman at Snowflake Summit 2025. OpenAI recently landed a deal with Mattel and has launched OpenAI for Government. Those two efforts were just the latest in a long line of enterprise deals with Lowe's, Booking.com and Wayfair.
  • The body language in a video interview between Microsoft CEO Satya Nadella and Altman at Build was uncomfortable. For its part, Microsoft has been developing its own models to lessen its dependence on OpenAI. It wouldn't be the least bit surprising to see Copilot get a model transplant.

Individually, these headlines don't necessarily mean that OpenAI and Microsoft are veering toward a messy divorce. And even if the breakup is messy, both companies have raked in dough and will pocket billions of dollars. The two have arguably formed the best technology partnership ever.

A few thoughts:

  • In the long run, it's critical for Microsoft to diversify the models available on its platform. Microsoft Azure AI Foundry has more than 1,900 models, but is still associated with OpenAI.
  • From the OpenAI perspective, a glidepath away from Microsoft makes sense. The companies will compete for enterprises, agentic AI dominance and industry services. Smaller battlegrounds such as search will also pit OpenAI against Microsoft.
  • Enterprise buyers will benefit from a breakup too. I use both OpenAI ChatGPT and Microsoft Copilot (as well as Grok, Google Gemini and Anthropic Claude), and the Microsoft-OpenAI partnership reminds me of Samsung and Android. In each pairing, the former puts layers on top of the original and gums up the experience.

Holger Mueller, an analyst at Constellation Research, said:

"Nothing lasts forever, and that applies as well to the special relationship between OpenAI and Microsoft. Apparently, Microsoft doesn't want to spend the capital to run OpenAI exclusively. Sam Altman has been looking for alternative sources to pay for the capacity needed for OpenAI to run its ever more hungry models. And it looks like Oracle is going to get a chunk of that business. In the meantime, Microsoft is betting on its new in-house chip architecture. Time will tell if this was a premature breakup or not."


Databricks Data & AI Summit Key Takeaways

Constellation Analysts Holger Mueller and Michael Ni share insights, predictions, and more from Databricks' recent Data & AI Summit.

On ConstellationTV <iframe width="560" height="315" src="https://www.youtube.com/embed/YFv0JwUnyoo?si=ropMhMHZI450Q5Uu" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

AWS re:Inforce 2025: Takeaways from the Amazon, AWS CISOs

Amazon Web Services is using its Nova models for tailored use cases including cybersecurity. Other takeaways from a chat with Amazon and AWS chief information security officers included the combination of physical and cybersecurity and how humans and AI code differently.

AWS recently launched Amazon Nova Premier, its most capable LLM. AWS launched Nova models last year and has been courting developers.

Eric Brandwine, VP and Distinguished Engineer at Amazon, said:

"We are very proud of the work that we've done with Nova, and we are absolutely using it internally. One of the things that we can do because we have this AI organization, is fine tune the model for different use cases, and so we've been able to come up with Nova variants that are tuned to specific security workloads, and that has shown significant dividends."

At AWS re:Inforce 2025 during an analyst Q&A, Brandwine was speaking on a panel with AWS CISO Amy Herzog and Amazon CISO CJ Moses. At Amazon, security chiefs rotate among units. For instance, Moses and Herzog swapped roles.

Herzog said Nova is an example of Amazon building tools and building blocks. Models are no different. "Choice is so deeply ingrained that it might not be top of mind to talk about one versus the other," said Herzog. "You have a job and there are a bunch of different models that you could choose from for that job. You pick the best one."

Other topics:

Physical security. Moses said physical security falls under him as CISO. "We did it for the reasons of making sure that we have the best visibility across all of those areas," said Moses. "A piece of information about a workplace incident will become the information that we need to stack on to other things to determine where we have a scrambled employee that potentially could become an insider."

Areas of non-obvious data connections that may prove out include cybersecurity and freight intelligence from an incident in a building. "We actually use the data, because the worst thing you can do is have intelligence and not actually act on it," said Moses. "And the whole idea for us is to make sure we're not siloed with that data, and secondarily, that we're able to act on it."

AWS is AWS' largest security customer. Security is required just to run a cloud. "The amount that you invest in security to secure an online retailer is very different from what you invest to secure a cloud. And so we've got all of these smart, clever people. They're operating with different constraints. They have different creative ideas, and we get to go reap them all and apply them across the company," said Moses.

Security's different lens. Herzog said security is a prerequisite and you can't get carried away with new technologies that may hurt your cybersecurity posture.

Herzog said:

"If developer productivity goes up by this amount and we need to keep pace with it, what does that without lowering the security bar? What does that look like? What ideas do you have? Recognize the changes that are happening, but then really keep the outcomes that we want to achieve--protecting our customers at speed and at scale."

Solving problems never ends. Brandwine said that internally AWS talks about the security ratchet. "It always gets tighter," he said. "It's a travesty to spend time solving a problem we've already solved before, or relearning an old lesson. So we have this deep investment in automation, automated reasoning, in using existing techniques and new techniques. We reason about our services. We say this will always be true, and then we make the machine make that always true, so we can spend our time on the new things. When you solve a problem, you're not free. You just go work on the next problem."

Why AWS and Amazon don't talk about security more in public. "If you're bringing things up to customers that they can't act on or do anything about directly themselves, you're essentially fear mongering," said Moses. "We don't believe on unnecessarily worrying our customers, especially when those things are things that are within our control. The industry itself does a good enough job on (fearmongering) that we don't need to add to the flames. We'd rather be the ones that are putting the flames out."

AI security is just security. Herzog said, "you can't just separate genAI from the rest of the conversation." "The playbook is the same as always. What are you trying to accomplish?," said Herzog. "There are definitely technical challenges that we are starting to get ahead where we might be in a few years. But I think that's a different conversation."

Brandwine said:

"There are absolutely interesting novel attacks against LLMs, and some of these have been applied to commercially deployed services. But the vast majority of LLM problems that have been reported are just traditional security problems with LLM products. You've got to get the fundamentals right. You've got to pay attention to traditional deterministic security."

Secure code and AI vs. human. Brandwine said Amazon has multiple checks on AI-generated code. One thing to watch is AI and humans write code differently. "We're getting significant success internally, but what we're finding is that the way that the human would write the code is not necessarily the way that the model would write the code. And if you want the model to evolve the code, you might want to structure it a bit differently," said Brandwine.


AWS re:Inforce 2025: How customers are using AWS security building blocks

For Amazon Web Services customers, the continuum between security services and the rest of the cloud vendor's portfolio covering storage, data, AI and compute tends to blend together.

The primary takeaway from re:Inforce 2025 is that AWS isn't trying to make money in security. Yes, AWS sells security products like GuardDuty, Inspector and Security Hub separately, but a lot of AWS security additions are built into existing services.

What's clear is that AWS has a lot of visibility into its platform data and can handle emerging threats with AI. Whether security innovation turns into a separate product or handy feature in an existing service, say Amazon Bedrock, falls into the to-be-determined category.

These customer vignettes from AWS re:Inforce highlight the cloud provider's security strategy that is more about providing building blocks where security is built-in, but not necessarily the main show.

Comcast: Three North Stars

Noopur Davis, Comcast's Global CISO and Chief Product Privacy Officer, said the company has been an AWS customer for six years. Comcast has adopted AWS frameworks and used the cloud provider's building blocks to enable its developers to build in cybersecurity throughout the software development lifecycle.

"We take a long-term view of security. Our first North Star is privacy, followed by data security and operate on zero trust," said Davis, who oversees a team of 2,000 cybersecurity pros.

Davis added that Comcast began integrating generative AI in April 2023 and had to develop frameworks for models, the data feeding models and guardrails.

Comcast has developed an AI workbench that will build in security across the software development lifecycle. According to Davis, security and data practices are comingled and part of a broader transformation effort.

Davis outlined the following security projects:

  • The company is using AI and its data for threat modeling, pen testing and code remediation.
  • Comcast is integrating security into its development workbenches.
  • AI bots are used for governance, regulatory and compliance processes.
  • Comcast has developed an AI-enabled risk engine across its platform.

For Comcast, using AWS for data pipelines and modeling has improved the company's ability to use AI for cybersecurity.

Comcast is a partner as well as a customer. At re:Invent 2024, the cable giant said it was shifting its 5G wireless core services to AWS. In 2023, Comcast took its internally developed cybersecurity data fabric commercial with AWS and Snowflake via a product called DataBee.

DataBee, which uses AWS compute and storage, was the first product from Comcast Technology Solutions' new cybersecurity division formed in 2022.

CarGurus: Data protection, data lineage, AI governance

Kelly Haydu, VP Information Security, Technology & Enterprise Applications at CarGurus, said the auto marketplace is powered by data. "Data is the fabric of a company," said Haydu, who added that securing that data requires understanding data flows incoming and outgoing.

CarGurus was an early adopter of generative AI and began incorporating the technology two-and-a-half years ago. As a result, CarGurus had to develop AI risk management and governance strategies early.

In 2022, CarGurus moved to AWS as the preferred cloud provider. To secure data, CarGurus relies on Cyera, a startup focused on data protection. CarGurus has also created a bevy of security frameworks including some that evaluate emerging technologies as markets change.

Haydu said:

"We have a great handle on our data today, but as AI starts to come on more and more I need to know when the piece of technology is introduced. When the machines are talking to the machines and the agents are talking to the agents, we needed to have an automated way to understand the data sources from inception to at rest in our databases, and that's called data lineage. Understanding the life cycle of that data will continue to be important as more data comes into our ecosystem in the upcoming years."
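The data lineage Haydu describes can be pictured as each derived dataset recording its upstream sources, so the path from ingestion to at-rest storage can be traced automatically. A toy sketch (dataset names are invented for illustration, not CarGurus' actual systems):

```python
# Registry mapping each dataset to the datasets it was built from.
LINEAGE = {}

def derive(name, sources=()):
    # Register a dataset along with its upstream sources.
    LINEAGE[name] = list(sources)
    return name

derive("raw_listings")                              # ingested from partner feeds
derive("priced_listings", ["raw_listings"])         # enriched by a pricing model
derive("warehouse.listings", ["priced_listings"])   # at rest in the database

def ancestors(name):
    # Walk upstream through every source of a dataset, recursively.
    out = []
    for src in LINEAGE.get(name, []):
        out.append(src)
        out.extend(ancestors(src))
    return out

lineage = ancestors("warehouse.listings")
```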

Going forward, CarGurus will continue to evolve as the company expects to ingest more data and leverage AI and automation as it scales. Haydu said data security has four evolving categories:

  • AI and automation. "What we're going to see in the future is more dynamic, real-time analysis of the data coming in from platforms," said Haydu. "It's going to be about contextual analysis with machines versus having a scan that has to happen."
  • Integration across data fabrics.
  • Risk management. Haydu said risk management is going to focus on individuals and their access, and on environments with sensitive data.
  • Zero trust architecture, which will be adopted into platforms.

Santander: Addressing risk, security frameworks, regulators

Jamie Nash, Head of Technology and Operations Risk at Santander, implemented a digital banking platform called Open Bank in 18 months with AWS and Deloitte.

She said it's critical to have a security framework that can be shared with regulators. Nash added that data sovereignty remains a big concern for financial institutions.

Nash said Santander used AWS Risk and Compliance Navigator Framework as the primary blueprint. Santander had an existing risk framework that was adapted for cloud computing with the help of AWS and Deloitte. The move took traditional banking security controls to cloud equivalents.

Santander addressed data sovereignty by doing the following:

  • Keeping all customer data within the US.
  • Architecting Open Bank so it could maintain compliance.
  • Keeping existing customer reference data on-prem instead of moving to the cloud.
  • Focusing Open Bank on new customers to avoid data migration issues.
  • Communicating early with regulators about Open Bank, which Nash said helped speed up approvals.

"The biggest concern was the regulatory component and it was the most daunting aspect, because we are obviously very highly regulated," said Nash. "We don't see a lot of banks moving their full tech stack in the cloud in one fell swoop. We had to explain and educate our regulators."

Data to Decisions Digital Safety, Privacy & Cybersecurity Innovation & Product-led Growth Future of Work Tech Optimization Next-Generation Customer Experience AI GenerativeAI ML Machine Learning LLMs Agentic AI Analytics Automation Disruptive Technology cybersecurity Chief Information Officer Chief Information Security Officer Chief Privacy Officer Chief AI Officer Chief Experience Officer

Salesforce raises prices across multiple products including Slack

Salesforce has raised prices across multiple products as it continues to hone pricing for AI and Agentforce.

In a post, Salesforce outlined the moving pricing parts. The latest changes come after the recent launch of Flex Credits, a consumption-based model, along with flexible Agentforce pricing.

According to Salesforce, new Agentforce add-ons and Agentforce 1 Editions are generally available for Sales Cloud, Service Cloud, Field Service and Industries Cloud. Those plans replace Einstein add-ons and Einstein 1 Editions.

In addition, Salesforce said Enterprise and Unlimited Edition prices are going up Aug. 1 and Slack plans are seeing price increases.

For Slack, Salesforce is including new AI features for all paid plans. Salesforce channels will be added to all Slack plans. The upshot here is that Salesforce sees Slack as a user interface across its platform.

As for Salesforce customers, there will be a bit of number crunching as license prices go up and the company works in consumption-based models.

Here's a look at some of the moving pricing parts:

  • New Agentforce add-ons will start at $125 per user per month. These add-on plans include pre-built Agentforce templates by role and industry, unlimited use of genAI and Agentforce for employees for licensed users, access to integrated AI capabilities, and analytics via Tableau Next.
  • Agentforce 1 Editions start at $550 per user per month. These plans include what's in Agentforce add-ons, 1 million Flex Credits per org per year, the ability to swap licenses with Flex Credits, Data Cloud with 2.5 million Data Services Credits per org, and the new Slack Enterprise+ plan.
  • Enterprise Editions and Unlimited Editions for Sales Cloud, Service Cloud, ​Field Service, and select Industries Clouds will go up 6% on Aug. 1. Salesforce Foundations, Starter, or Pro Editions pricing is unchanged.
  • Salesforce said Slack's Business+ plan is now $15 per user per month, up from $12.50 per user per month, and it is adding a new Enterprise+ plan, which includes enterprise search. If you're a Slack Business+ subscriber on a monthly billing cycle, the new price is $18 per user per month.
  • Salesforce is also rolling out Slack across Salesforce channels and various records. Slack Pro pricing will stay the same. The last time Slack customers saw a price increase was 2022.
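For customers doing that number crunching, the Slack Business+ moves work out as follows (assuming the $15 figure applies to annual billing, since the monthly-cycle price is listed separately at $18):

```python
# Slack Business+ price moves cited above (per user per month).
old_annual, new_annual = 12.50, 15.00   # annual billing cycle
new_monthly = 18.00                      # monthly billing cycle

annual_increase = (new_annual - old_annual) / old_annual    # base increase
monthly_premium = (new_monthly - new_annual) / new_annual   # monthly-cycle premium

print(f"Annual-billing increase: {annual_increase:.0%}")            # 20%
print(f"Monthly-billing premium over annual: {monthly_premium:.0%}") # 20%
```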

AWS re:Inforce 2025: Security Hub, AI and proactive defense

AWS Chief Information Security Officer Amy Herzog said cybersecurity is becoming more about the time to act than time to detect due to the sheer scale of incidents and alerts. Herzog pitched a more active defense approach amid emerging AI use cases.

Speaking at AWS re:Inforce 2025 in Philadelphia, Herzog's message echoed what was heard previously from AWS about security by design. The twist is that AWS is rolling up its various security services into one package that can automate security processes and various chores that eat up response time.

AWS launched Security Hub to give enterprises the ability to be more proactive about security as AI-driven attacks emerge.

AWS Security Hub provides a unified cloud security system that combines threat detection, signals and simplified and prioritized alerts. "Security Hub combines signals from across AWS security services and then transforms them into actionable insights, helping you respond at scale," said Herzog.

In a demo, Herzog walked through how Security Hub can aggregate everything across AWS' various security tools. Security Hub is also designed to alleviate alert fatigue. "It's about time to act more than the time to detect through automated correlation, rich context and actionable insights," said Herzog.
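The aggregation-and-correlation idea behind Security Hub can be sketched simply: group raw findings from different services by affected resource and rank by combined risk, so responders see one prioritized item instead of many alerts. The names and fields below are illustrative, not the AWS schema:

```python
from collections import defaultdict

# Hypothetical findings as a unified aggregator might receive them.
findings = [
    {"service": "GuardDuty", "resource": "i-0abc", "severity": 8},
    {"service": "Inspector", "resource": "i-0abc", "severity": 5},
    {"service": "Macie",     "resource": "bucket-logs", "severity": 3},
    {"service": "GuardDuty", "resource": "i-0abc", "severity": 7},
]

def correlate(findings):
    # Group findings by affected resource, then rank resources by combined risk.
    by_resource = defaultdict(list)
    for f in findings:
        by_resource[f["resource"]].append(f)
    ranked = sorted(by_resource.items(),
                    key=lambda kv: sum(f["severity"] for f in kv[1]),
                    reverse=True)
    return [{"resource": r, "signals": len(fs),
             "risk": sum(f["severity"] for f in fs)} for r, fs in ranked]

insights = correlate(findings)
```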

Herzog said that companies with more mature security and compliance frameworks are better suited to adopt generative AI and ultimately agentic AI. One key message: Security is an enabler not a blocker to AI innovation. 

AWS also touted how it is using generative AI to perform code patches and automate processes. The upshot here is that AWS is providing multiple security services even as it uses its AWS Marketplace to connect customers to a big ecosystem of vendors. What AWS is doing now is connecting those various security building blocks and automating various security workflows.

The cloud giant also launched a proactive network security analysis tool for AWS Shield and expanded Amazon GuardDuty into container-based environments.

Here's a look at what was announced:

Identity and Access Management (IAM): AWS IAM Access Analyzer internal access findings are now available to show who has access to S3 buckets and other services.

"You can use the internal access guidance to see exactly who in your company has access to specific resources and information from one dashboard. You can monitor both internal and external access in one view."

AWS IAM, which handles 1.2 billion API calls per second, is also adding long-term credential management, data protection, and access control. The general idea is to eliminate long-term credentials.

Multifactor authentication is also 100% enforced across the AWS security layer.

AWS Certificate Manager with exportable public certificates. Herzog said "we know that management of digital certificates is a challenge" so the company is now enabling certificates to run inside AWS as well as outside.

AWS Shield Network Security Director in preview: "Network Security Director starts by performing an analysis of your network, building a topology based on the resources, connections and networking services that have been implemented. It then assesses the network security of your resources and whether they meet network security best practices," said Herzog.

Herzog said AWS Shield Network Security Director is aimed at giving enterprises a built-in security team with a simplified experience.

AWS Network Firewall Active Threat Defense: Herzog said the cloud provider is aggressive with its defenses. She walked through AWS systems to defend against emerging threats. One service behind the active threat defenses, called Blackfoot, constantly checks packets for bad actors.

"Blackfoot gives us the data plane to stop their activities, and we've implemented custom packet processing," said Herzog, who noted Blackfoot has stopped 2.4 trillion malicious requests over the last six months.

Amazon GuardDuty: Herzog said Amazon GuardDuty is getting enhanced features to find anomalous behaviors, sequences and signals. AWS inspects 360 trillion telemetry events per day. Amazon GuardDuty identified 13,000 high-confidence attack sequences over the last 90 days.
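The "attack sequence" idea is that individually benign events become a high-confidence finding when they occur in a known order. A toy sketch of in-order subsequence matching (event names are invented, and this is a conceptual illustration, not GuardDuty's detection logic):

```python
# A known attack pattern: steps must appear in this order, not necessarily adjacent.
ATTACK_SEQUENCE = ["CreateAccessKey", "DisableLogging", "ExfiltrateData"]

def contains_sequence(events, pattern):
    # True if `pattern` appears within `events` as an ordered subsequence.
    # The shared iterator ensures each step is found after the previous one.
    it = iter(events)
    return all(any(e == step for e in it) for step in pattern)

telemetry = ["Login", "CreateAccessKey", "ListBuckets",
             "DisableLogging", "ExfiltrateData"]
alert = contains_sequence(telemetry, ATTACK_SEQUENCE)
```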


Databricks Summit 2025: The Lakehouse Becomes a Decision Platform

Databricks is no longer just a lakehouse. It aims to be an end-to-end decisioning platform—one that knows the meaning behind the data.

The evolution of the Databricks Summit—from Spark to Spark+AI to now, simply Data + AI—is more than a name change; it’s a mission statement. This year’s event was a declaration that the era of the standalone analytics platform is over. Databricks is making a big, multi-front play to become the single, unified platform for enterprise decision-making, aiming to own the entire intelligence lifecycle from raw data to the application interface.

For CDAOs and analytics/AI leaders, this raises a crucial question: Can your current stack evolve from storing data to operationalizing intelligence?

Below, we break down what’s new and the strategic questions every enterprise should be asking.

1. From AI/Analytics Platform to Decision Platform

Databricks is no longer content to be just the lakehouse layer. With Lakebase, Databricks One, and Unity Catalog Metrics, it has taken on systems of record and is now moving upstream—powering operational systems, governed metrics and analytics, and GenAI interfaces for business teams.

What’s New?

  • Lakebase: A transactional Postgres engine built on Delta, optimized for agents and app data.
  • Databricks One: No-code interface for dashboards, copilots, and decision apps.
  • Unity Catalog Metrics: Certified business metrics reusable across BI tools, apps, and agents.

Why It Matters

This is not just about unifying OLTP and OLAP; Databricks is establishing a robust, self-reinforcing ecosystem of unified data, contextual user interfaces, and trusted business insights.

  • The lakehouse supports real-time operational workloads and agentic applications, and makes changes in operational data available for analytics in real time, and vice versa.
  • Empowers business users to review and “ask questions of their data (with Genie)” and act on governed insights without coding or additional Business Intelligence (BI) tools.
  • Solves “multiple versions of truth” by unifying metrics supported by increasingly automatically enriched semantics that learn from data use.
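The "single version of truth" pitch behind certified metrics can be illustrated with a sketch: the metric definition lives in one governed registry, and every consumer (BI tool, app, agent) evaluates the same logic instead of re-deriving it. Names here are invented, not the Unity Catalog Metrics API:

```python
# Hypothetical governed metric registry: define once, serve everywhere.
METRICS = {
    "net_revenue": {
        "owner": "finance",
        "certified": True,
        "compute": lambda rows: sum(r["amount"] for r in rows if not r["refunded"]),
    }
}

def evaluate(metric_name, rows):
    # Every consumer routes through the registry, so results always agree.
    metric = METRICS[metric_name]
    assert metric["certified"], "only certified metrics may be served"
    return metric["compute"](rows)

orders = [{"amount": 100, "refunded": False},
          {"amount": 40,  "refunded": True},
          {"amount": 60,  "refunded": False}]
result = evaluate("net_revenue", orders)
```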

The CDAO Question: Is my current architecture built to store data, or to make and automate decisions? How much is the silo between my operational and analytical systems costing me in time, money, and misalignment?

2. Agent Bricks: Moving from Pilots to Production

Every enterprise is testing GenAI—but most are stuck in “pilot purgatory.” Agent Bricks is Databricks’s effort to industrialize agent development with evaluation, cost tuning, and grounding—all built into the platform.

What’s New?

  • LLM-as-Judge: Custom evaluation frameworks for task-specific benchmarks beyond generic model leaderboards.
  • Optimization layer: Tunes model selection and behavior for cost vs. quality.
  • Synthetic data generation: Identifies and fills gaps in training sets using governed enterprise data.
  • Grounding loop: Ensures enterprise context, human-in-the-loop review, and retraining to improve over time.
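The LLM-as-judge pattern above amounts to scoring agent outputs against a task-specific rubric rather than a generic leaderboard. A minimal sketch, with a stub keyword check standing in for a real judge-model call (the rubric criteria are invented examples):

```python
# Task-specific rubric: each criterion maps to a pass/fail check.
RUBRIC = {
    "cites_source": lambda out: "source:" in out.lower(),
    "within_length": lambda out: len(out.split()) <= 50,
}

def judge(output, rubric):
    # Score each criterion 0/1 and return the fraction passed plus the detail.
    results = {name: check(output) for name, check in rubric.items()}
    return sum(results.values()) / len(results), results

score, detail = judge("Revenue grew 12% last quarter (source: 10-Q filing).", RUBRIC)
```

In a production setting, each lambda would be replaced by a prompt to a judge model, with the same aggregate-scoring loop on top.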

Why It Matters

  • Lowers the barrier to entry and de-risks AI investments by "SaaSifying" the complex process of agent creation and grounding it in observability.
  • Embeds cost control, performance tuning, and compliance into the agent lifecycle.
  • Moves the focus beyond building models to observability and deployment for automated “decision-makers.”

The CDAO Question: Am I still measuring my AI initiatives on model performance and accuracy, or do I have a clear framework to evaluate their direct, quantifiable impact on business outcomes and ROI?

3. Pipeline Productivity Without Compromising Governance

For all the agent and OLTP talk, the biggest applause was for a long-standing problem: pipelines. Lakeflow GA and the announcement of Lakeflow Designer promised to deliver speed and control for ingestion, transformation, and data flows across business and engineering teams.

What’s New?

  • Lakeflow Designer: Drag-and-drop and GenAI-assisted pipeline builder for analysts that compiles down to Spark SQL, and engineers can edit with changes reflected in the UI.
  • Unity-native governance: Pipelines output production-grade Spark code with CI/CD support.
  • Spark Declarative Pipelines: Formerly known as Delta Live Tables (DLT), has been open-sourced and contributed to Apache Spark as a new industry standard for defining data pipelines.
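The declarative-pipeline idea is that you declare tables and their upstream dependencies, and the engine figures out execution order. A toy runner in the same spirit (inspired by, not reproducing, the Spark Declarative Pipelines API):

```python
# Registry of declared tables: name -> (dependencies, builder function).
TABLES = {}

def table(depends_on=()):
    # Decorator that registers a table definition instead of running it.
    def register(fn):
        TABLES[fn.__name__] = (depends_on, fn)
        return fn
    return register

@table()
def raw_orders():
    return [{"id": 1, "amount": 100}, {"id": 2, "amount": -5}]

@table(depends_on=("raw_orders",))
def clean_orders(raw_orders):
    return [r for r in raw_orders if r["amount"] > 0]

def run():
    # Materialize each table once all of its dependencies are done
    # (a naive topological pass; a real engine also handles streaming, retries).
    done = {}
    while len(done) < len(TABLES):
        for name, (deps, fn) in TABLES.items():
            if name not in done and all(d in done for d in deps):
                done[name] = fn(*(done[d] for d in deps))
    return done

results = run()
```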

Why It Matters

  • Bridges the business/engineering divide, accelerating delivery of production-ready, version-controlled data, all without creating shadow IT.
  • Eliminates brittle and unmanaged ETL by unifying batch and streaming under one governable transformation layer.
  • Reduces reliance on external ETL tools, such as Fivetran, dbt, and Informatica.

The CDAO Question: How can I empower my business users to innovate faster without sacrificing lineage, testing, or engineering trust?

4. Migration & Ecosystem Consolidation

Databricks is building more than features—it’s removing barriers. Lakebridge and Zerobus reduce glue-code complexity, making switching to the platform easier than ever.

What’s New?

  • Lakebridge: A free, LLM-powered tool for migrating from 20+ data warehouse platforms, including Teradata, Oracle, and Snowflake.
  • Zerobus: Real-time ingestion into Unity without Kafka/Kinesis-style message bus overhead.
  • App Framework Expansion: Retool, Gradio, and Streamlit apps deploy natively inside Databricks.

Why It Matters

  • Cuts time and cost in migrating from Oracle, Teradata, and Snowflake.
  • Simplifies real-time architectures by removing the need for specialized engineering teams to manage Kafka or Kinesis for a whole class of high-throughput ingestion use cases.
  • Strengthens Databricks’ position as not just an analytics engine, but an app platform.

Key Questions for Platform Owners: How much of the manual rework and risk can Lakebridge truly eliminate in our complex legacy migrations? Is Zerobus mature enough, and does it have the functionality (e.g., pub-sub) to handle our mission-critical, real-time production workloads today?

The Wrap: What Every Data & AI Leader Should Ask Now

Databricks’ vision is clear: a single, intelligent platform where data lands, is transformed, is understood, and is acted upon by both humans and AI. While the stack is open at different layers, Databricks’ promised simplification and data leverage centers on delivering “Data Intelligence,” with Unity Catalog at its core. This forces every CDAO, CAIO, and CIO to move beyond vendor comparisons and confront fundamental questions about their strategy.

  • The Architectural Question: Do we truly need OLTP and OLAP on a single stack—or is separation still more modular and cost-effective?
  • The Readiness Question: Is our organization prepared for a workforce of production AI agents? This requires a new level of maturity around evaluation, governance, and risk that goes far beyond simple chatbot pilots.
  • The Platform Question: Can we consolidate our data platform without giving up best-of-breed tools or flexibility? What are the risks of lock-in?

These are some questions to begin with, pointing to a bigger challenge that is not about technology, but about leadership. The ultimate question is this: As a data leader, am I prepared to drive the organizational and operational transformation required to capitalize on a truly unified platform?

There’s a lot to unpack in the announcements from the data cloud providers and hyperscalers. Ping me to dig deeper and discuss. Share your thoughts in the comments to continue the conversation.

If you want to learn more:
  • Watch LinkedIn Live: Holger Mueller & Mike Ni break down the news from the Databricks Data+AI Summit
  • Learn: Databricks on the Core Idea Behind Data Intelligence Platforms
  • Read: Larry Dignan’s breakdown of the Databricks announcements

 


Adobe launches LLM Optimizer, GenStudio and Firefly updates

Adobe launched LLM Optimizer, an application that aims to enable brands to optimize content and messaging as consumers move from traditional Google search to genAI and LLM-generated summaries.

Adobe's move, announced at the Cannes Lions festival, is designed to address generative engine optimization (GEO). Adobe said GEO is more than an evolution of SEO; it is a new approach to digital marketing. Going forward, brands will need to be seen, cited and chosen by large language models.

LLM Optimizer focuses on three core areas:

  • Presence in AI search. LLM Optimizer helps brands ensure content is visible, accurate and influential in AI-generated responses. It tracks how brands show up for specific prompts compared to competitors; these prompts are the replacements for keywords.
  • Traffic conversion. LLM Optimizer measures two types of traffic: LLM crawler traffic (AI systems pinging sites for information) and referral traffic (users clicking through from AI responses). The tool analyzes how that traffic converts to engagement and revenue.
  • Content optimization. LLM Optimizer provides recommendations to improve AI ranking and presence. These optimizations can be deployed with one click via Adobe Experience Manager Sites. LLM Optimizer also provides tips to optimize content beyond brand websites on third-party sources including Reddit and Wikipedia.
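The two traffic types can be distinguished in ordinary web logs by inspecting the user-agent and referrer. A minimal sketch (the crawler names and referrer domains below are illustrative examples, not a list Adobe publishes):

```python
# Hedged sketch: classify a request as LLM crawler traffic, LLM
# referral traffic, or other, from its user-agent and referrer.
# The bot and referrer lists are illustrative, not an official registry.

LLM_CRAWLERS = ("gptbot", "claudebot", "perplexitybot")
LLM_REFERRERS = ("chatgpt.com", "perplexity.ai", "gemini.google.com")

def classify(user_agent: str, referrer: str) -> str:
    ua = user_agent.lower()
    if any(bot in ua for bot in LLM_CRAWLERS):
        return "llm_crawler"   # AI system fetching site content
    if any(domain in referrer.lower() for domain in LLM_REFERRERS):
        return "llm_referral"  # user clicked through from an AI answer
    return "other"

print(classify("Mozilla/5.0 (compatible; GPTBot/1.0)", ""))  # llm_crawler
print(classify("Mozilla/5.0", "https://chatgpt.com/c/abc"))  # llm_referral
```

Referral traffic is the segment most analogous to classic organic search clicks, which is why a tool in this space would track the two streams separately.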

Adobe LLM Optimizer will fit into existing SEO workflows and support the Agent2Agent protocol and Model Context Protocol.

The new LLM Optimizer application was one part of a broader Cannes Lions rollout. Here's a look:

Adobe showcased GenStudio for Performance Marketing, an AI-first application that enables marketers to create on-brand content connected to campaigns with guardrails. Adobe is positioning GenStudio as the foundation for all stages of the content supply chain with Firefly models underneath.

The Cannes release of GenStudio for Performance Marketing includes limited releases for video ads and non-English generation, along with announced support for Amazon Ads. Marketo Engage, Adobe Journey Optimizer B2B Edition, LinkedIn integration, Meta Video, Workfront Proof and third-party digital asset management are all generally available.

Firefly Services will get updates to automate production of asset variations at scale across audiences, channels and regions.

Key Firefly items include:

  • Generate Video APIs are generally available, as are APIs for text to avatar and Substance 3D.
  • Firefly creative actions to resize images and reframe video are in beta.
  • Custom models have been added.

Adobe Express gets updates for advertisers with enterprise features for scale, governance and efficiency. Features include Workfront integration, streamlined review and approval processes, customized home experiences and AI tools to set up brands with one click.

 


Anthropic's multi-agent system overview a must read for CIOs

Anthropic outlined how it built multi-agent systems for Claude Research, and CIOs should read and heed the practical advice and challenges when thinking through AI agents.

In a post, Anthropic's engineering team laid out how the company built a multi-agent system, with a lot of practical advice on architecture, on orchestrating multiple large language models (LLMs) so they can collaborate, and on challenges with reliability and evaluation.

Anthropic created a lead agent as well as subagents. What was most interesting about Anthropic's research were the challenges. Your vendor is likely to tell you that there's an easy button for agentic AI, but Anthropic's post gives you some questions to ask about the architecture behind the marketing.

Here's what CIOs should note:

Multi-agent systems can deliver accurate answers, but can also burn tokens quickly. Agents use 4x more tokens than chat interactions and multi-agent systems use about 15x more tokens than chats. "For economic viability, multi-agent systems require tasks where the value of the task is high enough to pay for the increased performance," said Anthropic.

Takeaway: If you use multi-agent systems for tasks where a simpler approach would suffice, you're going to get hit with a big compute bill.
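Anthropic's multipliers make the economics easy to sanity-check. A back-of-the-envelope sketch (the baseline token count and per-token price are placeholders, not quoted rates):

```python
# Rough cost comparison using Anthropic's reported multipliers:
# single agents ~4x and multi-agent systems ~15x the tokens of a chat.
# The baseline token count and price are illustrative placeholders.

CHAT_TOKENS = 2_000    # assumed tokens for a typical chat interaction
PRICE_PER_MTOK = 10.0  # placeholder: dollars per million tokens

def cost(tokens: int) -> float:
    return tokens / 1_000_000 * PRICE_PER_MTOK

chat = cost(CHAT_TOKENS)
agent = cost(CHAT_TOKENS * 4)         # ~4x per Anthropic
multi_agent = cost(CHAT_TOKENS * 15)  # ~15x per Anthropic

print(f"chat ${chat:.3f} | agent ${agent:.3f} | multi-agent ${multi_agent:.3f}")
# chat $0.020 | agent $0.080 | multi-agent $0.300
```

Even with placeholder numbers, the shape of the result holds: a multi-agent run costs an order of magnitude more than the chat it replaces, so the task's business value has to clear that bar.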

Agents can keep working even after they have sufficient results.

Takeaway: Anthropic said you'll need to "think like your agents and develop a mental model of the agent to improve prompting."

Agents can "duplicate work, leave gaps, or fail to find necessary information" if they don't have detailed task descriptions.

Takeaway: Lead agents need to give detailed instructions to subagents.

Agents struggle with judging the appropriate effort for different tasks.

Takeaway: You'll have to embed scaling rules in the prompts for tasks. The lead agent should have guidelines to allocate resources for everything from simple queries to complex tasks.
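Embedding scaling rules can be as simple as mapping task complexity to an explicit resource budget the lead agent follows. A sketch (the tiers and numbers are assumptions for illustration, not Anthropic's exact values):

```python
# Illustrative scaling rules a lead agent could apply: map task
# complexity to a budget of subagents and tool calls. The tiers and
# numbers are assumptions, not Anthropic's published guidance.

SCALING_RULES = {
    "simple_fact":   {"subagents": 1, "tool_calls": 3},
    "comparison":    {"subagents": 2, "tool_calls": 10},
    "open_research": {"subagents": 5, "tool_calls": 25},
}

def budget_for(task_type: str) -> dict:
    # Default conservatively when the task type is unrecognized.
    return SCALING_RULES.get(task_type, SCALING_RULES["simple_fact"])

print(budget_for("comparison"))  # {'subagents': 2, 'tool_calls': 10}
```

In practice these rules would live in the lead agent's system prompt, but making them explicit, whether in prose or in code, is what keeps a simple lookup query from spawning a dozen subagents.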

Agents need to use the right tool to be efficient and interfaces between agents and tools are critical. Anthropic used the following example: "An agent searching the web for context that only exists in Slack is doomed from the start."

Takeaway: Without clear tool descriptions, agents can go down the wrong path. Each tool needs a distinct purpose and a clear description. Anthropic also said to let agents improve prompts by diagnosing failures and suggesting improvements.
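One way to act on that advice is to give every tool an explicit description the agent can match against its task. A minimal sketch (the tool names and descriptions are hypothetical, and a real system would use the LLM itself for selection; keyword overlap stands in for that here):

```python
# Hypothetical tool registry: each tool carries a clear description,
# and the agent picks the tool whose description best matches the task.
# Keyword overlap is a stand-in for LLM-based tool selection.

TOOLS = {
    "web_search": "Search the public web for pages matching a query.",
    "slack_search": "Search internal Slack messages and channels.",
    "code_interpreter": "Run Python code and return the result.",
}

def pick_tool(task: str) -> str:
    """Choose the tool whose description shares the most words with the task."""
    task_words = set(task.lower().split())
    def overlap(item):
        _, desc = item
        return len(task_words & set(desc.lower().split()))
    return max(TOOLS.items(), key=overlap)[0]

print(pick_tool("find the internal slack thread about the outage"))  # slack_search
```

This is exactly the failure mode in Anthropic's Slack example: if `slack_search` lacked the words "internal" and "Slack" in its description, the agent would happily search the web for context that only exists in Slack.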

Thinking is a process. Anthropic outlined how it extended thinking mode in Claude with multiple process improvements to get from the lead agent's thinking to subagent assignments.

Takeaway: While Anthropic was focused on parallel tooling and creating subagents that can adapt, the biggest lesson here is to think through and understand the process behind how agents do the work.

Minor changes in agentic systems can cascade into large behavioral changes and debugging is difficult (and needs to happen on the fly).

Takeaway: Anthropic created a system that goes beyond standard observability to monitor agent decision patterns and interactions without tracking individual conversations. Observability tools will be critical to any agent platform.
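The pattern Anthropic describes is to aggregate decision events rather than store conversation content. A hedged sketch of that idea (the event fields and class are illustrative, not Anthropic's implementation):

```python
# Sketch of agent observability that records decision patterns
# (which agent called which tool, how often, whether it failed)
# without retaining any conversation content.

from collections import Counter

class AgentMonitor:
    def __init__(self):
        self.tool_calls = Counter()
        self.failures = Counter()

    def record(self, agent_id: str, tool: str, ok: bool):
        # Only metadata is kept; prompts and outputs are never stored.
        self.tool_calls[(agent_id, tool)] += 1
        if not ok:
            self.failures[(agent_id, tool)] += 1

monitor = AgentMonitor()
monitor.record("lead", "web_search", ok=True)
monitor.record("sub-1", "web_search", ok=False)
monitor.record("sub-1", "web_search", ok=True)

print(monitor.tool_calls[("sub-1", "web_search")])  # 2
print(monitor.failures[("sub-1", "web_search")])    # 1
```

Aggregates like these are what let you spot a subagent stuck in a retry loop or burning tokens on the wrong tool, without the privacy and storage costs of logging full transcripts.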

Deployments are going to be difficult with multi-agent systems.

Takeaway: Anthropic said it doesn't update agents at the same time because it doesn't want to disrupt operations. How many enterprise outages will we have because bad code brought down autonomous agent operations?
