Results

A tour of enterprise tech inflection points

Technology executives are tossing around the term "inflection point" a good bit when it comes to agentic AI, quantum computing and other not-quite-ready-for-primetime technologies.

With that in mind, here's a tour of tech inflection points to watch. The issue with inflection points is that they don't come with time frames. Where relevant, I dropped a time frame into my believability scores.

Quantum computing accelerates

In a week where IBM outlined its quantum computing roadmap to a fault-tolerant quantum system by 2029 and IonQ bought Oxford Ionics for more than $1 billion, Nvidia CEO Jensen Huang said the technology is at an inflection point.

"Quantum computing is reaching an inflection point. We've been working with quantum computing companies all over the world in several different ways, but here in Europe, there's a large community," said Huang. "It is clear now we're within reach of being able to apply quantum computing and classical computing in areas that can solve some interesting problems in the coming years."

That’s quite a walk back from comments made in January, but OK.

Huang said every next-generation supercomputer will have a quantum processor connected to GPUs. Nvidia has made its libraries available to quantum systems.

Both IonQ and IBM have big plans to scale quantum computers and network them together.

IBM CEO Arvind Krishna said the company is leaning into its R&D to scale out quantum computing for multiple use cases including drug development, materials discovery, chemistry, and optimization.

At Constellation Research, we have a watercooler thread, and the debate heated up over this quantum inflection point. In one corner was Holger Mueller, who has argued it's the year of quantum computing (for the last three years). Mueller said CxOs need to think through quantum computing as part of long-term planning.

Esteban Kolsky, an analyst at Constellation Research and our chief distiller, said there are more real-world technologies to figure out and quantum is a lot of hype.

Mueller vs. Kolsky will be a fun quantum great debate. My take is that there will be a quantum inflection point and it’s closer than you think. Predicting the time frame is another matter entirely.

My inflection point believability on a scale of 1 to 10 with a three-year time horizon: 7.

Data platforms and AI converge

The takeaways from Snowflake Summit, Databricks Data + AI Summit and Salesforce's Informatica acquisition are that data platforms and AI are going to converge if agents are going to get work done.

If you've been watching the broad AI agent efforts from the likes of AWS, Microsoft Azure and Google Cloud, you've noticed all of them are tethered to data stores and data fabrics.

Given that backdrop, it's no surprise that Snowflake and Databricks are leaning the same way. Databricks appears to be more aggressive.

Constellation Research analyst Michael Ni said: "We’re entering a new era where data clouds and hyperscalers are racing to establish themselves as the dominant platform for AI-driven decision-making in their respective markets. The competition is no longer about warehouse performance—it’s about who owns the semantic layer, who governs the agent lifecycle, and who enables the next-gen data app ecosystem. With Lakebase, Agent Bricks, and Unity Catalog metrics, Databricks is asserting that ownership more broadly than ever before."

It's worth noting that JPMorgan Chase CEO Jamie Dimon said that data is still way harder than delivering on AI. It stands to reason that we're at an inflection point where agentic AI is really an extension of the data platform.

My inflection point believability on a scale of 1 to 10: 9.

Agentic AI

"We're just starting to use agents," said Dimon at the Databricks conference. See: JPMorgan Chase's Dimon on AI, data, cybersecurity and managing tech shifts

If JPMorgan Chase, which has an $18 billion technology budget, is just starting with AI agents, where do you think the rest of the enterprises sit?

To be sure, we have multiple inflection points with agentic AI. Consider:

  • We're at an inflection point of vendor marketing about AI agents.
  • We're at an inflection point for AI agent standards and solving issues where these automated workers can share data and collaborate.
  • We're at an inflection point where enterprises are going to swallow consumption models from SaaS vendors.
  • And we're at an inflection point where every board wants an AI agent strategy to automate work.

Anthropic CEO Dario Amodei laid out the agentic AI dream. Speaking during Databricks Data + AI Summit, he said humans will go from conversing and collaborating with AI agents to developing fleets.

"An agent fleet is where a number of agents do things for you, and you are essentially the manager of the agents. It'll go from agent fleets to agent swarms, when each agent in the fleet itself employs agents. And so the human engineer is sitting at the top of the hierarchy, like they're managing an organization or a company, and they still need to intervene. They still need to set direction," said Amodei.

Mainstream adoption? Not yet, but think 2026 for some scale. The biggest issue now is getting these agents to work well, and everything we’re hearing is there’s a lot to do before going to production. We've covered this topic plenty, so let's move on.

My inflection point believability on a scale of 1 to 10: 7.

Superintelligence

If you read OpenAI CEO Sam Altman's missive this week, you have a good feel for the vision even though it's a little murky.

We're at an inflection point for bold statements about AI.

"In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else," said Altman.

This superintelligence thing will be great for humanity—or not. "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before," said Altman. "We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big."

All we have to do is get alignment on what society wants from AI systems and democratize the access.

My inflection point believability on a scale of 1 to 10: 2. Why? Society is in no place to reach consensus on something like AI superintelligence. Silicon Valley will deliver superintelligence and apologize later.

Apple is missing the AI revolution and peak Apple has passed

Given the ho-hum reaction to Apple's WWDC 2025 and lack of Apple Intelligence progress, it appears that Apple is treading water before a downswing. Like many large tech vendors, Apple appears to be missing the AI curve.

Craig Federighi, SVP of Software Engineering at Apple, said during the WWDC keynote that "we're continuing our work to deliver the features that make Siri even more personal. This work needed more time to reach our high quality bar, and we look forward to sharing more about it in the coming year."

And with that confession at the altar of AI, Federighi and his band of executives spent the rest of the keynote talking about Liquid Glass, the redesign of Apple's operating systems across devices.

It remains to be seen whether Apple can catch up in AI and become more than a mere vessel for other innovators. Ben Thompson at Stratechery said Apple retreated into the familiar. Others complained that Apple just recreated Windows Vista. VisionOS 26 was interesting though.

Either way, Apple's WWDC and product cycles have lost the buzz, but I'd hold off on the obit. One thing is clear: Apple has the resources to miss the AI curve to some degree and milk services revenue until the company figures it out. In its latest quarter, Apple generated $24 billion in operating cash flow, returned $29 billion to shareholders and had more than $28 billion in cash.

My inflection point believability on a scale of 1 to 10: 9. The problem for Apple is the curve is headed in the wrong direction. We've passed peak Apple.


The problem with Meta chasing superintelligence

Meta has hired Scale CEO Alexandr Wang to oversee its AI efforts as it pursues superintelligence. With the worst kept secret in Silicon Valley out of the way, it's time to ponder one massive, nagging question: Can an effort that may depend on social media data really be superintelligent?

One of the tried and true axioms is "garbage in, garbage out," or GIGO. Any system with low-quality input is going to give you garbage. Let's face it: there is nothing "super" about social media, and spare me the "intelligent" argument.

So now, Mark Zuckerberg and Meta are revamping the AI strategy via the Scale AI investment and reportedly building an AI dream team. Wang reportedly is just the start of this AI supergroup. Meta has a bit of envy about big statements from the likes of OpenAI's Sam Altman and Anthropic's Dario Amodei.

And just in case you think GIGO is so yesterday, check out TechCrunch's tale on the Meta AI app, which just might be a "viral mess."

Meta's reported $14.3 billion investment in Scale AI, which is now valued at $29 billion, is one expensive acquihire. Maybe this bold move by Meta works, but I'm still stuck on GIGO. The secret sauce to any superintelligent model is going to be the proprietary data. In Meta's case that's Facebook, Instagram and WhatsApp, even if it's just a small subset of overall training data. If Meta's Llama models are built on the same data every other model uses, there's no value add.

For what it's worth, I have the same GIGO concerns about Grok no matter how it seems to impress me with its responses. Why? I know there's X data in there somewhere. 

Wang said the chance to lead Meta's AI efforts was a once-in-a-lifetime opportunity.

Scale AI will name Jason Droege interim CEO. Droege has a strong background, and it wouldn't be surprising if Scale AI is the ultimate winner in the end. Wang noted that Meta's investment will be distributed among shareholders and vested equity holders.


Adobe's AI strategy, monetization 'feels really good right now'

Adobe said its various AI offerings are driving usage and monetization as the company delivered better-than-expected second quarter results.

CEO Shantanu Narayen said AI is becoming a "nice tailwind" for the business and adoption as customers either pay for higher-tier plans for AI or buy individual features.

Narayen breaks down the AI effect as "AI influence revenue"--innovation that drives usage and higher subscription revenue in products like DX, Acrobat and Creative Cloud--and direct revenue from a standalone Firefly app, Creative Cloud Pro and GenStudio.

Adobe has laid out a strategy where it is targeting business professionals and consumers as well as creative and marketing pros. The approach gives Adobe a well-diversified customer base.

Narayen explained:

"The AI influence revenue is already in the billions because that speaks to the value that people are getting across both our DX products, Acrobat products as well as the Creative products. So across the board, there's no question that AI is a nice tailwind as it relates to adoption. And we also said we're tracking ahead of the $250 million of ARR."

"The immense opportunity is all ahead of us. And as we get this entire offering that we keep talking about, which is Acrobat; Express; Firefly single app; Creative Cloud Pro, which includes Firefly; GenStudio; and the AEP and apps, each one of them, we think, has a tremendous opportunity ahead of us. So it's very early in terms of the AI monetization, but we're very advanced in terms of how much innovation we've delivered. And so it feels really good right now."

That AI effect is showing up in Adobe's earnings report, which appears to have satisfied Wall Street. In most quarters, Adobe reports strong financials and gets walloped afterward due to monetization worries or fears about trailing in AI.

Rest assured those worries are still there. Adobe executives were repeatedly asked about competition from Meta and other model disruptors to Creative Cloud, Adobe Express and the rest of the portfolio.

David Wadhwani, general manager of Adobe's Digital Media business, said the company is moving key apps like Firefly to mobile, and it is driving its model based on data and the web journey optimization it offers enterprises. The lessons from Acrobat's AI upsell are being applied elsewhere. "We onboarded into 8,000 new businesses in the quarter with Express," he said.

Narayen said the Adobe strategy has been to drive adoption of key AI tools like Firefly and Acrobat AI Assistant and then drive monetization. Ultimately, Creative Cloud Pro will have the bundle of Adobe's best AI features. The Adobe CEO also said Adobe is seeing strength in its marketing offerings too and is automating workflows. "The North Star is the combination of creativity and productivity driving growth for us," he said.

The numbers

Adobe reported second quarter earnings of $1.69 billion, or $3.94 a share, on revenue of $5.87 billion, up 11% from a year ago. Non-GAAP earnings were $5.06 a share. The results and outlook topped Wall Street estimates. 

  • Digital Media revenue was up 11%.
  • Digital Experience revenue was up 10%.
  • Business professional and consumer group subscription revenue was up 15%.
  • Creative and marketing professional group revenue was up 10%.

As for the outlook, Adobe said third quarter revenue will be between $5.87 billion and $5.92 billion with non-GAAP earnings of $5.15 a share to $5.20 a share.

For fiscal 2025, Adobe projected revenue of $23.5 billion to $23.6 billion with non-GAAP earnings of $20.50 a share to $20.70 a share.

 


AMD eyes AI inference gains with new Instinct accelerators, GPU, open rack systems

AMD launched new Instinct MI350 Series accelerators, previewed Instinct MI400 Series GPUs and outlined its next-gen AI rack systems that integrate the company's stack.

The upshot is that AMD is going after AI workloads as inferencing has ballooned and models for multiple use cases have proliferated.

Speaking at AMD's Advancing AI 2025 conference, CEO Lisa Su said training is critical to developing models, but there's a bigger picture. "We are seeing an explosion of models, especially models for specific uses such as coding, healthcare and finance," said Su. "Over the next few years we expect hundreds of thousands, and eventually millions of purpose-built models each tuned for specific tasks, industries or use cases."

That selection of models and use cases will drive compute requirements.

Here's what the company launched at its event:

  • AMD launched its Instinct MI350 series, which has 4x more performance than its predecessor and up to a 35x gain in inferencing performance. Those AI accelerators come in air-cooled and direct liquid-cooled options.


  • AMD previewed its upcoming Instinct MI400 Series GPUs with a 10x performance increase.
  • The next-gen Helios AI rack infrastructure was also previewed. That rack system is optimized for AI workloads and integrates MI400 GPUs, EPYC CPUs and Pensando NICs.
  • AMD is also building out its software stack led by its ROCm platform. The company touted 3.5x inference gains in its upcoming ROCm 7 release.
  • ROCm can run more than 1.8 million Hugging Face models.
  • The company launched its AMD Developer Cloud with ROCm and AMD GPU access.
  • AMD also touted traction for its Instinct GPUs with cloud providers such as AWS, DigitalOcean, Meta, Microsoft and Oracle Cloud. System giants such as Dell, HPE and Supermicro are also building out with AMD Instinct MI350 Series GPUs.
  • AMD's Su laid out a roadmap through 2027 that not only includes processors but open rack-scale designs that'll be used by hardware partners. AMD recently sold the manufacturing operations of ZT Systems.


With the launches, AMD is making a play for inference workloads. Nvidia is best known for training workloads, but also has a big footprint in AI inferencing. AMD is looking to be a counterweight to Nvidia and also gain share as the AI total addressable market expands.

Indeed, AMD brought along some powerful references, including Meta, which deployed Instinct MI300X for Llama 3 and Llama 4 inference. OpenAI CEO Sam Altman touted the AMD partnership, as did Oracle, HUMAIN, Microsoft, Cohere and others.


Databricks natively integrates Google Cloud Gemini models

Databricks said that Google Cloud's Gemini models will be available natively within its Databricks Data Intelligence Platform. Databricks also said that it extended a partnership with Microsoft Azure.

The deals were announced at Databricks Data + AI Summit.

According to Databricks, Gemini models will be available for AI agent use cases within Databricks. The companies said their expanded partnership will enable customers to deploy Google Gemini 2.5 models without moving data.

Gemini models will be available to Databricks customers directly through SQL queries and model endpoints without data duplication or integrations. Enterprises can pay for Gemini usage through the Databricks contract.

Key points about the Google Cloud-Databricks partnership:

  • Gemini models will be available and billed through Databricks.
  • Gemini models can be used to create AI agents using datasets within Databricks.
  • Databricks Unity Catalog will handle governance and compliance.

Separately, Databricks said it expanded a partnership with Microsoft on Azure Databricks, which has been available since 2018.

Microsoft and Databricks have rolled out a series of native integrations between Azure Databricks, Azure AI Foundry and Microsoft Power Platform as well as SAP Databricks on Azure.


Oracle Cloud’s annual revenue run rate exiting Q4 nears $27 billion

Oracle's cloud infrastructure business posted revenue growth of 52% in the fourth quarter as its results were better than expected.

The company reported fourth quarter earnings of $1.19 a share on revenue of $15.9 billion, up 11% from a year ago. Non-GAAP earnings were $1.70 a share.

Wall Street was looking for Oracle to report fourth quarter earnings of $1.64 a share on revenue of $15.59 billion.

Perhaps the biggest takeaway in the quarter is that Oracle's infrastructure-as-a-service unit is closing in on the company's SaaS revenue. In the fourth quarter, Oracle Cloud Infrastructure (OCI) had revenue of $3 billion, up 52%, and the SaaS unit delivered sales of $3.7 billion, up 12%.

Oracle's cloud revenue including software and infrastructure was $6.7 billion, up 27% from a year ago.

For fiscal 2025, Oracle reported earnings of $12.4 billion, or $4.34 a share, on revenue of $57.4 billion, up 9% from a year ago.

Oracle CEO Safra Catz said fiscal 2025 was a good year, but fiscal 2026 should deliver "dramatically higher" revenue growth rates. She noted that fiscal 2026 total cloud growth rate should be more than 40%, up from 24% in fiscal 2025. "Oracle is well on its way to being not only the world's largest cloud application company—but also one of the world's largest cloud infrastructure companies," she said.

CTO Larry Ellison said that "multicloud database revenue from Amazon, Google and Azure grew 115% from Q3 to Q4." He added that OCI has 23 multicloud data centers live with 47 on tap in the next 12 months. Ellison added that triple-digit revenue growth for multicloud should continue.

Oracle's cloud business is on an annual run rate approaching $27 billion. Google Cloud is more than $49 billion on an annual run rate. AWS is more than $117 billion. Microsoft Cloud's annual revenue run rate was more than $169 billion including sales across all units. The annual revenue run rate for Microsoft Intelligent Cloud, which includes Azure, is more than $107 billion.
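For reference, these run rates are simple annualizations of the most recent quarter. A minimal sketch of the math using the Oracle figure from this article (the function name is mine, not anyone's API):

```python
def annual_run_rate(quarterly_revenue_b: float) -> float:
    """Annualize a quarterly revenue figure; input and output in $ billions."""
    return quarterly_revenue_b * 4

# Oracle's fourth-quarter total cloud revenue (SaaS + IaaS) was $6.7 billion.
oracle_cloud = annual_run_rate(6.7)
print(f"Oracle cloud annual run rate: ${oracle_cloud:.1f}B")  # $26.8B, "approaching $27 billion"
```

Worth remembering when comparing against the hyperscalers: run rates flatter fast-growing businesses because they extrapolate the latest quarter forward.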

Highlights from the earnings call include:

  • Catz said fourth quarter capital spending was $9.1 billion with the "vast majority of our CapEx in revenue generating equipment that was going into data centers and not for land or buildings."
  • In fiscal 2026, Catz said capital expenditures will be more than $25 billion "to meet demand from our backlog."  
  • Cloud infrastructure revenue will grow more than 70% in fiscal 2026, said Catz.
  • For the first quarter, Oracle projected total cloud revenue growth of 26% to 30%. Non-GAAP earnings for the quarter will be between $1.44 and $1.48 a share. 
  • Ellison said Oracle has developed an integrated AI agent suite for ERP, supply chain, manufacturing, human resources, customer engagement and industry applications. Ellison said Oracle's database has been critical to building out AI agents. 
  • "These other companies say they have all the data, so they can do AI really well. They can build all these AI agents on top of all of that data," said Ellison. "The only problem with that statement is they don't have all the data we do. We have most of the world's valuable data. The vast majority of it is in an Oracle database. Our applications take all of your application data and make that data available to the most popular AI models."
  • Ellison said Oracle's suite approach to applications and AI agents is resonating. "Companies don't really enjoy buying applications from five different vendors and then making all of those applications work together," said Ellison. "We're seeing a lot of companies buying those basically saying, I'm going to go all Oracle. I'm going to buy the complete Oracle suite."
  • "Our intent is for our biggest customers to use us as a one-stop shop and buy the entire suite to run their enterprise from us. And that gets rid of a lot of headaches. Everything is in the same database. Everything comes with the same AI data platform. With it, all the analytics are there. You don't have to do the system integration," said Ellison.
  • Oracle still has more demand than it can meet. "I am still in a position where our supply is not meeting our demands," said Catz. "We actually currently are still waving off customers or scheduling them out into the future so that we have enough supply to meet demand."

Constellation Research's take

Holger Mueller, an analyst at Constellation Research, said:

"Oracle is doubling down on IaaS revenue and benefitting from the popularity of the multicloud offering of the Oracle Database. This marks the first quarter where Oracle even accepted a negative cash flow. It is clear where Safra Catz and Larry Ellison see the opportunity.

In the concluded fiscal year alone, CapEx tripled from $7 billion to $21 billion in Q4. For a long time Oracle settled CapEx at 50% of free cash flow; no more. Last fiscal year's accumulated CapEx was about $27 billion, and Q4 alone was $21.2 billion.

How long can Oracle afford that investment level into data centers? But Oracle is more about infrastructure than ever before - and the only vendor who was relevant in the 1990s to make that transition. It will be key to show growth in regions (all have grown YoY) beyond its Americas region. Many international data centers are just opening now."

 


JPMorgan Chase's Dimon on AI, data, cybersecurity and managing tech shifts

JPMorgan Chase CEO Jamie Dimon said artificial intelligence shouldn't be a part of the technology org since it impacts all of the business. Dimon also gave his take on data, cybersecurity, management and technology shifts.

Speaking at the Databricks Data + AI Summit, Dimon in an interview with CEO Ali Ghodsi said:

"We took AI and data out of technology. It's too important, and technology does a great job and is a deep partner. But we put AI at the management table. Dr. Manuela Veloso, head of JPMorgan Chase AI Research, reports to me and our president. Are we doing enough? Are we doing it right? There will be no job, no process, no function that won't be affected by AI--mostly for the positive. It's about getting all of the people who run these businesses to understand the power of it."

JPMorgan Chase has an $18 billion IT budget, and Dimon added that data is everything. "We buy and sell $3 trillion of securities every day," said Dimon.

Dimon said the head of AI is at every single meeting he has with management teams so the bank can stay on top of things. "We have investments in 100 different companies out here. We're testing and learning," he said.

The results so far have been impressive.

Other takeaways from Dimon's talk:

Starting in AI. "In 2012, we first used Palantir. And I remember sitting down with the Palantir people and going through this AI thing, saying, 'Holy Christ. This is unbelievable.' So we, right after that, started our own department. In 2014, we hired Veloso from Carnegie Mellon. We now have 600 actual end use cases, and that number will probably double next year."

The data foundation. JPMorgan Chase's Dimon, a long-time Databricks customer, said the data foundation is critical. "Data is the hardest part. It isn't the AI or machine learning," said Dimon. "Getting the data in the form that's usable is the hard part."

AI isn't deflationary yet. Dimon said JPMorgan Chase is spending $2 billion on AI and getting returns, but it's too early to say AI is deflationary.

"The chips are getting faster and cheaper, and maybe optical chips get better down the road. But power costs aren't going down. The cost of land isn't going down. The cost of structures isn't going down. You know, the cost of inferencing is dropping dramatically, and you guys will find a million ways to reduce costs. My own view is, in the next couple of years, we're gonna be spending a lot more money. We need more lumber and more cement and more steel and more grids and more gas plants. We will be deflationary, just not quite yet."

Technology shifts. Dimon said technology has always changed humanity. "I've always had technology at the table and part of the management team. The tech people have to be business literate. They have to understand your problems," said Dimon. "Business people need to be tech literate. I don't have to understand how a lithium battery works. I don't have to understand exactly how your super agents work. I have to understand what they can do so I can deploy it. I tell people we're going to be the best at AI--large models, small models, this cloud, that cloud. Just use the technology to do a better job or you'll be left behind."

Cybersecurity. Dimon said cybersecurity is "the thing that scares me the most." "People are now using agents to try to penetrate major companies," said Dimon. "The bad guys are already using AI and agents. The cyber warfare is here in our computers, skies, satellites and wireless. I don't think a lot of Americans realize."

Dimon added that cybersecurity is a public-private partnership.

"We often inform the government before they know what's going on with certain things about North Korean or Chinese actors out there. I remember President Obama asked me years ago what to do if a bank went down. I said you'd have to have a bank holiday. You would have no choice but to try to recover the data. We all have backed-up data and all the things like that. But if I had to tell you all how many times the failsafe systems didn't work in my life, it would be almost every time. They didn't always anticipate what caused the problem, and maybe AI will help us do a better job of that."

Management. Dimon was an early mover on return to office planning and walked through his approach to managing and the goals of having people in person at the office. A few quotes:

  • "You have to go out of your way to get the best of people. And it's amazing what it does for a country, university and a company."
  • "You should fire the assholes. It only takes a few of them to destroy a meeting, and sometimes those assholes are customers. I have fired customers because they're just so rude to our people."
  • "You gotta have a little bit of grit. It's hard. You know, these jobs are hard. You have to, you know, get the answers to problems found. So my general reaction when something's going wrong, I get all the people in the room, and we work it over and over and over and over. I'm not guessing. And usually the answer is found."
  • "It's hard to do management by Hollywood Squares, because I can have a real honest conversation with you. I'm sitting in front of you. I didn't realize that people are looking at their phones rather than looking at you. If your notifications are coming up, your emails are coming up, you're not paying attention. You know, when you're with me, you get 100% of my attention. 100% of the time I've done the whole pre-read. You have to have certain disciplines, or you lose. It's how management teams have to work."

A presidential run? "I'd get all the rich white people to vote for me, but I am a banker. I'm 69 and I always say I would love to be president, but you have to anoint me. I've never run for office. I don't have that skill. Now maybe if you tell me what to do, what to say, and where to go, but I've never done that, so I think the answer is no. I will do anything to help the country," said Dimon.

 


CR CX Convos: Live from PegaWorld 2025 with Tara DeZao

AI isn't about replacing marketers - it's about empowering them. Constellation analyst Liz Miller sits down with Pegasystems product marketing whiz Tara DeZao to discuss marketing transformation through AI partnerships and ushering in the next wave of collaborative customer experience.

Key takeaways:

📌 AI helps overcome the 'blank page' challenge
📌 Authenticity remains at the heart of great marketing
📌 Decisioning trumps data overwhelm
📌 Customer journeys need orchestration, not rigid paths

Watch the full conversation to learn more!

Watch on cx_convos: https://www.youtube.com/embed/utqIrUuLpUo

Databricks launches Mosaic Agent Bricks, Lakeflow Designer, Lakebase

Databricks launched Mosaic Agent Bricks, a workspace for creating AI agents that are production-ready, accurate and cost-efficient, along with Lakeflow Designer and Lakebase, a transactional database.

Agent Bricks advances the Databricks approach with Mosaic AI, which can build agent systems delivering domain-specific results, and aims to move AI agents into production. Agent Bricks will automatically generate domain-specific synthetic data and task-based benchmarks.

Databricks CEO Ali Ghodsi said Agent Bricks is a "new way of building and deploying AI agents that can reason on your data." Databricks kicked off its Data + AI Summit in San Francisco.

Databricks said Agent Bricks will help build scalable AI agents that don't hallucinate, evaluate what's a good result and build a synthetic data set to mirror customer data. Agent Bricks are auto optimized.

With Agent Bricks, customers will be able to describe a high-level problem and Databricks will create LLM judges, generate synthetic data and auto optimize to create a grounding loop.

Databricks cited AstraZeneca and Hawaiian Electric as early adopters of Agent Bricks that moved from proof-of-concept to production in minutes instead of days.

By taking the guesswork out of creating production AI agents, Databricks is looking to scale agentic AI as well as drive consumption of its data platform.

Key points about Mosaic Agent Bricks:

  • Agent Bricks generates task-specific evaluations and LLM judges to assess quality.
  • Synthetic data is created that looks like customer data to supplement learning.
  • Agent Bricks uses multiple techniques to optimize agents.
  • Customers can balance quality and cost for agent results.
  • Use cases for Agent Bricks include information extraction, knowledge supplementation, custom LLM agents and multi-agent supervision.
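Conceptually, the grounding loop described above pairs an LLM judge with candidate agent configurations and keeps the best scorer on an evaluation set. The following toy Python sketch illustrates that pattern only; the judge is a stand-in scoring function and none of these names come from the actual Agent Bricks API:

```python
# Toy sketch of an auto-optimization loop with an "LLM judge".
# All names are illustrative; this is not the Agent Bricks API.

def llm_judge(answer: str, reference: str) -> float:
    """Stand-in for an LLM judge: score token overlap with a reference."""
    ref_tokens = set(reference.lower().split())
    ans_tokens = set(answer.lower().split())
    if not ref_tokens:
        return 0.0
    return len(ref_tokens & ans_tokens) / len(ref_tokens)

def run_agent(config: dict, question: str) -> str:
    """Stand-in for invoking an agent under a given configuration."""
    # A real system would call a model; here, config selects a canned style.
    answers = {
        "terse": "revenue grew",
        "grounded": "quarterly revenue grew 12 percent year over year",
    }
    return answers[config["style"]]

def auto_optimize(configs, eval_set):
    """Keep the config with the best mean judge score on the eval set."""
    def mean_score(cfg):
        return sum(llm_judge(run_agent(cfg, q), ref) for q, ref in eval_set) / len(eval_set)
    return max(configs, key=mean_score)

eval_set = [("How did revenue do?", "revenue grew 12 percent year over year")]
configs = [{"style": "terse"}, {"style": "grounded"}]
best = auto_optimize(configs, eval_set)
print(best["style"])  # prints "grounded": the better-grounded answer scores higher
```

A production system would replace `llm_judge` with an actual model-graded evaluation and `run_agent` with real agent calls, but the select-by-judge-score loop is the core idea.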

Databricks' announcements landed a week after Snowflake Summit 2025.

A common theme from both Databricks and Snowflake was that data platforms and AI are increasingly connected and that database technology was built for a different era. Both Databricks and Snowflake have doubled down on Postgres as a base for new AI applications. The general idea is to get data to AI applications in the fastest, most cost-efficient manner.

In addition, Databricks is looking to combine data and AI so enterprises can define objectives using natural language and then the platform handles the rest of the process: data prep and features; model building with fine-tuning; deployment of tools, retrieval models and agent sharing; evaluation (both automated and human); and governance and modeling. Databricks' big argument is that data intelligence needs to touch every application across analytics, AI and databases.

Databricks also launched Mosaic AI support for serverless GPUs and MLflow 3.0, a platform for managing the AI lifecycle.

Lakeflow Designer

Separately, Databricks launched Lakeflow Designer, a pipeline tool that spans no-code and code-first approaches so builders share a common language.

Lakeflow Designer is backed by Lakeflow, which is now generally available and has no-code connectors that can create pipelines with a single line of SQL. Lakeflow Designer features no-code ETL with scale, access control and AI support.

Key items about Lakeflow Designer include:

  • Lakeflow Designer has a drag-and-drop UI so business analysts can build pipelines as easily as data engineers.
  • Lakeflow Designer is backed by Databricks Lakeflow, Unity Catalog and generative AI features.
  • Lakeflow Designer will be launched in private preview.
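The design pattern behind drag-and-drop pipeline builders like this is typically a declarative spec: the UI emits a list of steps, and an engine compiles or executes them as ordinary transformation code that engineers can review. A simplified, hypothetical Python illustration of that idea (not Lakeflow's actual format or API):

```python
# Hypothetical illustration of a declarative ETL pipeline, in the spirit of
# visual designers that turn drag-and-drop steps into reviewable code.
# The step schema here is invented for illustration; it is not Lakeflow's.

PIPELINE = [
    {"op": "filter", "predicate": lambda row: row["region"] == "EU"},
    {"op": "map", "fn": lambda row: {**row, "revenue_k": row["revenue"] / 1000}},
]

def run_pipeline(rows, steps):
    """Apply each declared step in order; a designer UI would emit `steps`."""
    for step in steps:
        if step["op"] == "filter":
            rows = [r for r in rows if step["predicate"](r)]
        elif step["op"] == "map":
            rows = [step["fn"](r) for r in rows]
    return rows

source = [
    {"region": "EU", "revenue": 12000},
    {"region": "US", "revenue": 30000},
]
result = run_pipeline(source, PIPELINE)
print(result)  # [{'region': 'EU', 'revenue': 12000, 'revenue_k': 12.0}]
```

Because the pipeline is data rather than hand-written control flow, the same spec can be rendered visually for analysts and inspected as code by data engineers, which is the collaboration Lakeflow Designer is pitching.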

Constellation Research analyst Michael Ni said:

"This isn’t just about scale—it’s about unlocking the 90% of questions that never make it to engineering. From campaign lift tracking to territory planning, Lakeflow Designer lets business teams define and ship data products using no/low-code tools that don’t get thrown away. Lakeflow Designer is the Canva of ETL: instant, visual, AI-assisted—yet under the hood, it’s Spark SQL at machine scale. The business analyst designs, and the data engineers can review and tweak collaboratively with the analyst. The engine industrializes it with full transparency and trust."

In addition, Lakeflow and Lakeflow Designer will rhyme with Snowflake's OpenFlow. "Lakeflow and OpenFlow reflect two philosophies: Databricks integrates data engineering into a Spark-native, open orchestration fabric, while Snowflake’s OpenFlow offers declarative workflow control with deep Snowflake-native semantics. One favors flexibility and openness; the other favors consolidation and simplicity," said Ni. 

Databricks eyes transactional data with Lakebase

Databricks also announced Lakebase, a transactional database engine where data is stored in low-cost lakes for easy access to AI applications.

Lakebase is Databricks' effort to address what databases need to do for AI applications. Databricks argued that Lakebase is designed for AI due to the following characteristics:

  • Lakebase has separate compute and storage, which creates very low latency, high queries per second and 99.999% uptime.
  • Lakebase is built on open source Postgres and supports community extensions.
  • Lakebase is built for AI: it launches in less than a second, lets customers pay only for what they use and manages changes well.
  • Lakebase runs on a Postgres OLTP engine, shares DNA with Neon and has fully managed pipelines for data sync.

Databricks also announced the following:

  • Lakebridge, a tooling set that's free and aimed at predictable migrations. Lakebridge features a warehouse profiler, code converter, data migration and validation with support for more than 20 legacy data warehouses.
  • Databricks Apps, which are governed data intelligence apps on Databricks, are generally available.
  • Unity Catalog Metrics, which defines metrics in one place and provides dashboards and notebooks across an enterprise. Unity Catalog Metrics also works with AI/BI Genie to promote novel questions in certified semantics. 
  • Databricks One, a version of Databricks designed for business teams that's in public preview. Databricks One has an intuitive customer experience, simple administration and Unity Catalog.
  • Community Edition: Databricks Community Edition was updated and includes most features. Developers can learn and experiment with data and AI use cases and Databricks is spending $100 million on programs for education.

Ni added:

"We’re entering a new era where data clouds and hyperscalers are racing to establish themselves as the dominant platform for AI-driven decision-making in their respective markets. The competition is no longer about warehouse performance—it’s about who owns the semantic layer, who governs the agent lifecycle, and who enables the next-gen data app ecosystem. With Lakebase, Agent Bricks, and Unity Catalog metrics, Databricks is asserting that ownership more broadly than ever before."


Nvidia adds AWS, Microsoft Azure to DGX Cloud Lepton marketplace

Nvidia expanded its DGX Cloud Lepton GPU marketplace with the addition of AWS and Microsoft Azure to its roster of providers.

DGX Cloud Lepton is a marketplace that unifies Nvidia GPU resources across providers and regions. The marketplace is also integrated with Nvidia's AI stack for microservice containers, multiple large language models and management.

At Nvidia GTC Paris, the company said its global compute marketplace, launched at Computex, is adding a bevy of EU providers including Mistral AI, Nebius, Nscale, Firebird, Fluidstack, Hydra Host, Scaleway and Together AI. Nvidia CEO Jensen Huang said DGX Cloud Lepton "connects developers to GPU compute powering a virtual global AI factory."

AWS and Microsoft Azure will be the first large-scale providers contributing Nvidia Blackwell GPUs to DGX Cloud Lepton. CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda and Yotta Data Services are already participating in DGX Cloud Lepton.

In addition, Nvidia said that Hugging Face will roll out Training Cluster as a Service integrating with DGX Cloud Lepton so researchers and developers can tap into GPU compute. Mirror Physics, Project Numina and the Telethon Institute of Genetics and Medicine will be among the first Hugging Face customers to access Training Cluster as a Service, which uses DGX Cloud Lepton for compute.

Nvidia also said it is working with European venture capital firms Accel, Elaia, Partech and Sofinnova Partners to provide up to $100,000 in DGX Cloud Lepton credits to startups.

Enterprises with early access to DGX Cloud Lepton include Basecamp Research, EY, Outerbounds, Prime Mente and Reflection.
