Databricks is no longer just a lakehouse. It aims to be an end-to-end decisioning platform: one that understands the meaning behind the data.
The evolution of the Databricks Summit—from Spark to Spark+AI to now, simply Data + AI—is more than a name change; it’s a mission statement. This year’s event was a declaration that the era of the standalone analytics platform is over. Databricks is making a big, multi-front play to become the single, unified platform for enterprise decision-making, aiming to own the entire intelligence lifecycle from raw data to the application interface.
For CDAOs and analytics/AI leaders, this raises a crucial question: Can your current stack evolve from storing data to operationalizing intelligence?
Below, we break down what’s new and the strategic questions every enterprise should be asking.
1. From AI/Analytics Platform to Decision Platform
Databricks is no longer content to be just the lakehouse layer. With Lakebase, Databricks One, and Unity Catalog Metrics, it has taken on systems of record and is now moving upstream—powering operational systems, governed metrics and analytics, and GenAI interfaces for business teams.
What’s New?
- Lakebase: A managed transactional Postgres engine that syncs with Delta tables, optimized for agents and app data.
- Databricks One: No-code interface for dashboards, copilots, and decision apps.
- Unity Catalog Metrics: Certified business metrics reusable across BI tools, apps, and agents.
Why It Matters
This is not just about unifying OLTP and OLAP; Databricks is establishing a robust, self-reinforcing ecosystem of unified data, contextual user interfaces, and trusted business insights.
- The lakehouse now supports real-time operational workloads and agentic applications, with changes in operational data available for analytics in near real time, and vice versa (see the sketch after this list).
- Empowers business users to ask questions of their data (with Genie) and act on governed insights without coding or additional Business Intelligence (BI) tools.
- Addresses the “multiple versions of the truth” problem by unifying metric definitions, backed by semantics that are increasingly enriched automatically as the platform learns from how data is used.
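To make that loop concrete, here is a minimal sketch (not a Databricks reference implementation) of what the round trip could look like: an application writes to a Lakebase-style Postgres table, and an analytics query reads the same data once it has been synced into a Delta table. The host, credentials, and table names below are hypothetical, and the Postgres-to-Delta sync is assumed to be configured in the platform.

```python
import psycopg2
from pyspark.sql import SparkSession

# --- Operational write path (hypothetical Lakebase-style Postgres endpoint) ---
conn = psycopg2.connect(
    host="my-lakebase-instance.example.com",  # hypothetical host
    dbname="appdb",
    user="app_user",
    password="***",
)
with conn, conn.cursor() as cur:
    # The application records an order using plain Postgres SQL.
    cur.execute(
        "INSERT INTO orders (order_id, customer_id, amount) VALUES (%s, %s, %s)",
        ("o-1001", "c-42", 129.99),
    )

# --- Analytical read path (assumes the orders table is synced into Delta) ---
spark = SparkSession.builder.getOrCreate()
revenue_by_customer = spark.sql("""
    SELECT customer_id, SUM(amount) AS revenue
    FROM main.sales.orders  -- hypothetical synced Delta table
    GROUP BY customer_id
""")
revenue_by_customer.show()
```

The point of the sketch is the absence of glue code: no change-data-capture pipeline or message bus sits between the operational write and the analytical read.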
The CDAO Question: Is my current architecture built to store data, or to make and automate decisions? How much is the silo between my operational and analytical systems costing me in time, money, and misalignment?
2. Agent Bricks: Moving from Pilots to Production
Every enterprise is testing GenAI—but most are stuck in “pilot purgatory.” Agent Bricks is Databricks’s effort to industrialize agent development with evaluation, cost tuning, and grounding—all built into the platform.
What’s New?
- LLM-as-Judge: Custom evaluation frameworks for task-specific benchmarks beyond generic model leaderboards (see the sketch after this list).
- Optimization layer: Tunes model selection and behavior for cost vs. quality.
- Synthetic data generation: Identifies and fills gaps in training sets using governed enterprise data.
- Grounding loop: Ensures enterprise context, human-in-the-loop review, and retraining to improve over time.
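Setting aside any specific Agent Bricks API, the LLM-as-Judge pattern itself is easy to sketch. The example below is a generic illustration, not Databricks' implementation: a judge prompt encodes a task-specific rubric, a caller-supplied `call_llm` function stands in for whatever model endpoint you use, and a score is parsed from the judge's structured reply. All names here are hypothetical.

```python
import json
from typing import Callable

JUDGE_PROMPT = """You are grading an AI agent's answer for a customer-support task.
Rubric (task-specific, not a generic leaderboard):
1. Does the answer only use facts from the provided context?
2. Does it resolve the customer's question?
3. Is the tone appropriate for an enterprise support channel?

Context: {context}
Question: {question}
Agent answer: {answer}

Reply with JSON: {{"score": <1-5>, "reason": "<one sentence>"}}"""

def judge_answer(
    call_llm: Callable[[str], str],  # any LLM client: prompt in, text out
    context: str,
    question: str,
    answer: str,
) -> dict:
    """Score one agent response against the task-specific rubric."""
    prompt = JUDGE_PROMPT.format(context=context, question=question, answer=answer)
    raw = call_llm(prompt)
    try:
        return json.loads(raw)  # expects {"score": ..., "reason": ...}
    except json.JSONDecodeError:
        return {"score": None, "reason": "judge reply was not valid JSON"}

# Usage: aggregate scores over an evaluation set to get a task-level benchmark.
# results = [judge_answer(my_llm, ex["context"], ex["question"], ex["answer"])
#            for ex in eval_set]
```

The value of a platform feature here is less the judging logic itself than running it continuously, on governed data, with the results feeding cost tuning and retraining.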
Why It Matters
- Lowers the barrier to entry and de-risks AI investments by "SaaSifying" the complex process of agent creation and grounding it in observability.
- Embeds cost control, performance tuning, and compliance into the agent lifecycle.
- Shifts focus from building models to the observability and deployment of automated “decision-makers.”
The CDAO Question: Am I still measuring my AI initiatives on model performance and accuracy, or do I have a clear framework to evaluate their direct, quantifiable impact on business outcomes and ROI?
3. Pipeline Productivity Without Compromising Governance
For all the agent and OLTP talk, the biggest applause was for a long-standing problem: pipelines. Lakeflow GA and the announcement of Lakeflow Designer promised to deliver speed and control for ingestion, transformation, and data flows across business and engineering teams.
What’s New?
- Lakeflow Designer: A drag-and-drop, GenAI-assisted pipeline builder for analysts that compiles down to Spark SQL; engineers can edit the generated code, with their changes reflected back in the UI.
- Unity-native governance: Pipelines are governed in Unity Catalog and output production-grade Spark code with CI/CD support.
- Spark Declarative Pipelines: Formerly Delta Live Tables (DLT), now open-sourced and contributed to Apache Spark as a proposed standard for defining data pipelines (see the sketch after this list).
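For readers who have not used Delta Live Tables, here is a minimal sketch of the declarative style being open-sourced, written against the existing DLT Python API (the `dlt` module available in Databricks pipelines; the module name in open-source Apache Spark may differ). The source path and table names are illustrative.

```python
import dlt
from pyspark.sql import functions as F

# Declare a bronze table: the framework manages ingestion, dependencies,
# retries, and orchestration rather than hand-written pipeline code.
@dlt.table(comment="Raw orders ingested from cloud storage (illustrative path).")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/sales/raw_orders")  # hypothetical location
    )

# Declare a silver table with a data-quality expectation; failing rows are dropped.
@dlt.table(comment="Cleaned orders with valid amounts.")
@dlt.expect_or_drop("valid_amount", "amount > 0")
def orders_silver():
    return dlt.read_stream("orders_bronze").withColumn(
        "ingested_at", F.current_timestamp()
    )
```

Because the definition is declarative, the same code serves batch and streaming, and a visual builder like Lakeflow Designer can round-trip to it without losing lineage or tests.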
Why It Matters
- Bridges the business/engineering divide, accelerating access to production-ready, version-controlled data, all without creating shadow IT.
- Reduces brittle, unmanaged ETL by unifying batch and streaming under one governed transformation layer.
- Reduces reliance on external ETL tools, such as Fivetran, dbt, and Informatica.
The CDAO Question: How can I empower my business users to innovate faster without sacrificing lineage, testing, or engineering trust?
4. Migration & Ecosystem Consolidation
Databricks is building more than features—it’s removing barriers. Lakebridge and Zerobus reduce glue-code complexity, making switching to the platform easier than ever.
What’s New?
- Lakebridge: A free, LLM-powered tool for migrating from 20+ data warehouse platforms, including Teradata, Oracle, and Snowflake.
- Zerobus: Real-time ingestion directly into Unity Catalog-governed tables, without Kafka/Kinesis-style message bus overhead.
- App Framework Expansion: Retool, Gradio, and Streamlit apps deploy natively inside Databricks.
Why It Matters
- Cuts time and cost in migrating from Oracle, Teradata, and Snowflake.
- Simplifies real-time architectures by removing the need for specialized engineering teams to manage Kafka or Kinesis for a whole class of high-throughput ingestion use cases.
- Strengthens Databricks’ position as not just an analytics engine, but an app platform.
Key Questions for Platform Owners: How much of the manual rework and risk can Lakebridge truly eliminate in our complex legacy migrations? Is Zerobus mature enough, and does it have the functionality (e.g., pub-sub) to handle our mission-critical, real-time production workloads today?
The Wrap: What Every Data & AI Leader Should Ask Now
Databricks’ vision is clear: a single, intelligent platform where data lands, is transformed, is understood, and is acted upon by both humans and AI. While open at different layers, the simplification and leverage Databricks promises center on its delivery of “Data Intelligence,” with Unity Catalog at the core. This forces every CDAO, CAIO, and CIO to move beyond vendor comparisons and confront fundamental questions about their strategy.
- The Architectural Question: Do we truly need OLTP and OLAP on a single stack—or is separation still more modular and cost-effective?
- The Readiness Question: Is our organization prepared for a workforce of production AI agents? This requires a new level of maturity around evaluation, governance, and risk that goes far beyond simple chatbot pilots.
- The Platform Question: Can we consolidate our data platform without giving up best-of-breed tools or flexibility? What are the risks of lock-in?
These questions are a starting point, and they point to a bigger challenge that is not about technology but about leadership. The ultimate question is this: As a data leader, am I prepared to drive the organizational and operational transformation required to capitalize on a truly unified platform?
There’s a lot to unpack in the announcements from the data cloud providers and hyperscalers. Ping me to dig deeper and discuss. Share your thoughts in the comments to continue the conversation.
If you want to learn more:
- Watch LinkedIn Live: Holger Mueller & Mike Ni break down the news from Databricks Data+AI Summit
- Learn: Databricks on the Core Idea Behind Data Intelligence Platforms
- Read: Larry Dignan’s breakdown of the Databricks announcements