AWS re:Inforce 2025: How customers are using AWS security building blocks

For Amazon Web Services customers, the line between security services and the rest of the cloud vendor's portfolio covering storage, data, AI and compute tends to blur.

The primary takeaway from re:Inforce 2025 is that AWS isn't trying to make money in security. Yes, AWS sells security products such as GuardDuty, Inspector and Security Hub separately, but many AWS security additions are built into existing services.

What's clear is that AWS has a lot of visibility into its platform data and can handle emerging threats with AI. Whether security innovation turns into a separate product or a handy feature in an existing service, say Amazon Bedrock, falls into the to-be-determined category.

These customer vignettes from AWS re:Inforce highlight a cloud provider security strategy that is more about providing building blocks where security is built in but not necessarily the main show.

Comcast: Three North Stars

Noopur Davis, Comcast's Global Chief Information Security Officer and Chief Product Privacy Officer, said the company has been an AWS customer for six years. Comcast has adopted AWS frameworks and used the cloud provider's building blocks to enable its developers to build in cybersecurity throughout the software development lifecycle.

"We take a long-term view of security. Our first North Star is privacy, followed by data security and operate on zero trust," said Davis, who oversees a team of 2,000 cybersecurity pros.

Davis added that Comcast began integrating generative AI in April 2023 and had to develop frameworks for models, the data feeding models and guardrails.

Comcast has developed an AI workbench that will build in security across the software development lifecycle. According to Davis, security and data practices are commingled and part of a broader transformation effort.

Davis outlined the following security projects:

  • The company is using AI and its data for threat modeling, pen testing and code remediation.
  • Comcast is integrating security into its development workbenches.
  • AI bots are used for governance, regulatory and compliance processes.
  • Comcast has developed an AI-enabled risk engine across its platform.

For Comcast, using AWS for data pipelines and modeling has improved the company's ability to use AI for cybersecurity.

Comcast is a partner as well as a customer. At re:Invent 2024, the cable giant said it was shifting its 5G wireless core services to AWS. In 2023, Comcast took its internally developed cybersecurity data fabric commercial with AWS and Snowflake via a product called DataBee.

DataBee, which uses AWS compute and storage, was the first product from Comcast Technology Solutions' cybersecurity division, formed in 2022.

CarGurus: Data protection, data lineage, AI governance

Kelly Haydu, VP of Information Security, Technology & Enterprise Applications at CarGurus, said the auto marketplace is powered by data. "Data is the fabric of a company," said Haydu, who added that securing that data requires understanding incoming and outgoing data flows.

CarGurus was an early adopter of generative AI and began incorporating the technology two-and-a-half years ago. As a result, CarGurus had to develop AI risk management and governance strategies early.

In 2022, CarGurus made AWS its preferred cloud provider. To secure data, CarGurus relies on Cyera, a startup focused on data protection. CarGurus has also created a bevy of security frameworks, including some that evaluate emerging technologies as markets change.

Haydu said:

"We have a great handle on our data today, but as AI starts to come on more and more I need to know when the piece of technology is introduced. When the machines are talking to the machines and the agents are talking to the agents, we needed to have an automated way to understand the data sources from inception to at rest in our databases, and that's called data lineage. Understanding the life cycle of that data will continue to be important as more data comes into our ecosystem in the upcoming years."
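Data lineage of the kind Haydu describes is commonly modeled as a directed graph from origin feeds to at-rest stores. A minimal illustrative sketch of that idea; the dataset names below are hypothetical, not CarGurus systems:

```python
# Minimal data lineage model: a directed graph mapping each dataset
# to the upstream datasets it was derived from. All names are invented.
from collections import deque

lineage = {
    "pricing_model_input": ["listings_db", "clickstream"],
    "listings_db": ["partner_feed", "dealer_api"],  # at-rest table
    "partner_feed": [],
    "dealer_api": [],
    "clickstream": [],
}

def upstream_sources(dataset: str) -> set:
    """Walk the graph to find every dataset feeding the given one."""
    seen, queue = set(), deque([dataset])
    while queue:
        node = queue.popleft()
        for parent in lineage.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(upstream_sources("pricing_model_input"))
# contains listings_db, clickstream, partner_feed and dealer_api
```

With a graph like this, "machines talking to machines" can be audited automatically: any new data source shows up as a new node, and its blast radius is one traversal away.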

Going forward, CarGurus will continue to evolve as the company expects to ingest more data and leverage AI and automation as it scales. Haydu said data security has four evolving categories:

  • AI and automation. "What we're going to see in the future is more dynamic, real-time analysis of the data coming in from platforms," said Haydu. "It's going to be about contextual analysis with machines versus having a scan that has to happen."
  • Integration across data fabrics.
  • Risk management. Haydu said risk management will focus on individuals, their access and the data sensitivity of their environments.
  • Zero trust architecture, which will be adopted into platforms.

Santander: Addressing risk, security frameworks, regulators

Jamie Nash, Head of Technology and Operations Risk at Santander, implemented a digital banking platform called Open Bank in 18 months with AWS and Deloitte.

She said it's critical to have a security framework that can be shared with regulators. Nash added that data sovereignty remains a big concern for financial institutions.

Nash said Santander used the AWS Risk and Compliance Navigator Framework as the primary blueprint. Santander had an existing risk framework that was adapted for cloud computing with the help of AWS and Deloitte, mapping traditional banking security controls to cloud equivalents.

Santander addressed data sovereignty by doing the following:

  • Keeping all customer data within the US.
  • Architecting Open Bank so it could maintain compliance.
  • Keeping existing customer reference data on-prem instead of moving to the cloud.
  • Focusing Open Bank on new customers to avoid data migration issues.

As for regulators, Nash said Santander communicated early about Open Bank to speed up approvals.

"The biggest concern was the regulatory component and it was the most daunting aspect, because we are obviously very highly regulated," said Nash. "We don't see a lot of banks moving their full tech stack in the cloud in one fell swoop. We had to explain and educate our regulators."

Salesforce raises prices across multiple products including Slack

Salesforce has raised prices across multiple products as it continues to hone pricing for AI and Agentforce.

In a post, Salesforce outlined the moving pricing parts. The latest changes come after the recent launch of Flex Credits, a consumption-based model, along with flexible Agentforce pricing.

According to Salesforce, new Agentforce add-ons and Agentforce 1 Editions are generally available for Sales Cloud, Service Cloud, Field Service and Industries Cloud. Those plans replace Einstein add-ons and Einstein 1 Editions.

In addition, Salesforce said Enterprise and Unlimited Edition prices are going up Aug. 1 and Slack plans are seeing price increases.

For Slack, Salesforce is including new AI features for all paid plans. Salesforce channels will be added to all Slack plans. The upshot here is that Salesforce sees Slack as a user interface across its platform.

As for Salesforce customers, there will be a bit of number crunching as license costs go up and the company works in consumption-based models.

Here's a look at some of the moving pricing parts:

  • New Agentforce add-ons will start at $125 per user per month. These add-on plans include pre-built Agentforce templates by role and industry, unlimited use of genAI and Agentforce for employees for licensed users, access to integrated AI capabilities, and analytics via Tableau Next.
  • Agentforce 1 Editions start at $550 per user per month. These plans include what's in the Agentforce add-ons, 1 million Flex Credits per org per year, the ability to swap licenses for Flex Credits, Data Cloud with 2.5 million Data Services Credits per org and the new Slack Enterprise+ plan.
  • Enterprise Editions and Unlimited Editions for Sales Cloud, Service Cloud, Field Service, and select Industries Clouds will go up 6% on Aug. 1. Salesforce Foundations, Starter, or Pro Editions pricing is unchanged.
  • Salesforce said Slack's Business+ plan is now $15 per user per month, up from $12.50 per user per month, and the company is adding a new Enterprise+ plan, which includes enterprise search. If you're a Slack Business+ subscriber on a monthly billing cycle, the new price is $18 per user per month.
  • Salesforce is also rolling out Slack across Salesforce channels and various records. Slack Pro pricing will stay the same. The last time Slack customers saw a price increase was 2022.
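The percentage moves behind those numbers are easy to sanity-check. A quick sketch using the list prices from the bullets above; the $100 seat is a placeholder for illustration, not a real Salesforce price:

```python
# Sanity-check the pricing changes described above.
def pct_increase(old: float, new: float) -> float:
    """Percentage increase, rounded to one decimal place."""
    return round((new - old) / old * 100, 1)

# Slack Business+ list price: $12.50 -> $15 per user per month
slack_bump = pct_increase(12.50, 15.00)

# Enterprise/Unlimited Editions rise 6% on Aug. 1;
# a hypothetical $100 seat as the example base.
enterprise_seat = round(100 * 1.06, 2)

print(slack_bump)       # 20.0 (percent)
print(enterprise_seat)  # 106.0
```

In other words, the Slack Business+ move is a 20% jump, noticeably steeper than the 6% increase on the core editions.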

AWS re:Inforce 2025: SecurityHub, AI and proactive defense

AWS Chief Information Security Officer Amy Herzog said cybersecurity is becoming more about the time to act than time to detect due to the sheer scale of incidents and alerts. Herzog pitched a more active defense approach amid emerging AI use cases.

Speaking at AWS re:Inforce 2025 in Philadelphia, Herzog's message echoed what was heard previously from AWS about security by design. The twist is that AWS is rolling up its various security services into one package that can automate security processes and various chores that eat up response time.

AWS launched Security Hub to give enterprises the ability to be more proactive about security as AI-driven attacks emerge.

AWS Security Hub provides a unified cloud security system that combines threat detection, signals and simplified and prioritized alerts. "Security Hub combines signals from across AWS security services and then transforms them into actionable insights, helping you respond at scale," said Herzog.

In a demo, Herzog walked through how Security Hub can aggregate everything across AWS' various security tools. Security Hub is also designed to alleviate alert fatigue. "It's about time to act more than the time to detect through automated correlation, rich context and actionable insights," said Herzog.
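The prioritization Herzog describes boils down to ranking correlated findings by severity. An illustrative sketch of that triage step; the finding dicts mimic the severity labels of the AWS Security Finding Format (ASFF), but the sample data is invented, not output from a real account or the Security Hub API:

```python
# Illustrative triage of Security Hub-style findings. The dicts below
# mimic ASFF severity labels; the sample findings are invented.
SEVERITY_ORDER = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3, "INFORMATIONAL": 4}

def prioritize(findings: list) -> list:
    """Order findings so the most severe surface first."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f["Severity"]["Label"]])

findings = [
    {"Title": "Port 22 open to the world", "Severity": {"Label": "HIGH"}},
    {"Title": "S3 bucket publicly readable", "Severity": {"Label": "CRITICAL"}},
    {"Title": "Unused IAM role", "Severity": {"Label": "LOW"}},
]

print([f["Title"] for f in prioritize(findings)])
# ['S3 bucket publicly readable', 'Port 22 open to the world', 'Unused IAM role']
```

The value of a hub is that this ordering happens across signals from many services at once, rather than per-tool.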

Herzog said that companies with more mature security and compliance frameworks are better suited to adopt generative AI and ultimately agentic AI. One key message: Security is an enabler, not a blocker, to AI innovation.

AWS also touted how it is using generative AI to perform code patches and automate processes. The upshot here is that AWS is providing multiple security services even as it uses its AWS Marketplace to connect customers to a big ecosystem of vendors. What AWS is doing now is connecting those various security building blocks and automating various security workflows.

The cloud giant also launched a proactive network security analysis tool for AWS Shield and expanded Amazon GuardDuty into container-based environments.

Here's a look at what was announced:

Identity and Access Management (IAM): AWS IAM Access Analyzer internal access findings are available to show who has access to S3 buckets and other services.

"You can use the internal access guidance to see exactly who in your company has access to specific resources and information from one dashboard. You can monitor both internal and external access in one view."

AWS IAM, which handles 1.2 billion API calls per second, is also adding long-term credential management, data protection and access control. The general idea is that there are no long-term credentials.

Multifactor authentication is also 100% enforced across the AWS security layer.

AWS Certificate Manager with exportable public certificates: Herzog said "we know that management of digital certificates is a challenge," so the company is now enabling certificates to run inside AWS as well as outside.

AWS Shield Network Security Director in preview: "Network Security Director starts by performing an analysis of your network, building a topology based on the resources, connections and networking services that have been implemented. It then assesses the network security of your resources and whether they meet network security best practices," said Herzog.

Herzog said the idea is that AWS Shield Network Security Director gives enterprises a built-in security team with a simplified experience.

AWS Network Firewall Active Threat Defense: Herzog said the cloud provider is aggressive with its defenses. She walked through AWS systems to defend against emerging threats. One service behind the active threat defenses, called Blackfoot, constantly checks packets for bad actors.

"Blackfoot gives us the data plane to stop their activities, and we've implemented custom packet processing," said Herzog, who added that Blackfoot has stopped 2.4 trillion malicious requests over the last six months.

Amazon GuardDuty: Herzog said Amazon GuardDuty is getting enhanced features to find anomalous behaviors, sequences and signals. AWS inspects 360 trillion telemetry events per day, and Amazon GuardDuty identified 13,000 high-confidence attack sequences over the last 90 days.

Databricks Summit 2025: The Lakehouse Becomes a Decision Platform

Databricks is no longer just a lakehouse. It aims to be an end-to-end decisioning platform, one that knows the meaning behind the data.

The evolution of the Databricks Summit—from Spark to Spark+AI to now, simply Data + AI—is more than a name change; it’s a mission statement. This year’s event was a declaration that the era of the standalone analytics platform is over. Databricks is making a big, multi-front play to become the single, unified platform for enterprise decision-making, aiming to own the entire intelligence lifecycle from raw data to the application interface.

For CDAOs and analytics/AI leaders, this raises a crucial question: Can your current stack evolve from storing data to operationalizing intelligence?

Below, we break down what’s new and the strategic questions every enterprise should be asking.

1. From AI/Analytics Platform to Decision Platform

Databricks is no longer content to be just the lakehouse layer. With Lakebase, Databricks One, and Unity Catalog Metrics, it has taken on systems of record and is now moving upstream—powering operational systems, governed metrics and analytics, and GenAI interfaces for business teams.

What’s New?

  • Lakebase: A transactional Postgres engine built on Delta, optimized for agents and app data.
  • Databricks One: No-code interface for dashboards, copilots, and decision apps.
  • Unity Catalog Metrics: Certified business metrics reusable across BI tools, apps, and agents.

Why It Matters

This is not just about unifying OLTP and OLAP; Databricks is establishing a robust, self-reinforcing ecosystem of unified data, contextual user interfaces, and trusted business insights.

  • The lakehouse supports real-time operational workloads and agentic applications, with changes in operational data available for analytics in real time, and vice versa.
  • Empowers business users to review and “ask questions of their data” (with Genie) and act on governed insights without coding or additional Business Intelligence (BI) tools.
  • Solves “multiple versions of truth” by unifying metrics supported by increasingly automatically enriched semantics that learn from data use.

The CDAO Question: Is my current architecture built to store data, or to make and automate decisions? How much is the silo between my operational and analytical systems costing me in time, money, and misalignment?

2. Agent Bricks: Moving from Pilots to Production

Every enterprise is testing GenAI—but most are stuck in “pilot purgatory.” Agent Bricks is Databricks’s effort to industrialize agent development with evaluation, cost tuning, and grounding—all built into the platform.

What’s New?

  • LLM-as-Judge: Custom evaluation frameworks for task-specific benchmarks beyond generic model leaderboards.
  • Optimization layer: Tunes model selection and behavior for cost vs. quality.
  • Synthetic data generation: Identifies and fills gaps in training sets using governed enterprise data.
  • Grounding loop: Ensures enterprise context, human-in-the-loop review, and retraining to improve over time.
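LLM-as-judge evaluation generally works by asking a second model to grade an agent's answer against a task-specific rubric. A minimal sketch of the pattern under stated assumptions: `judge_model` is a stub standing in for any LLM API call, and none of this reflects the actual Agent Bricks interface.

```python
# LLM-as-judge pattern: a second model grades agent output against a
# task-specific rubric. judge_model is a stub; in practice it would call
# a real LLM endpoint. This is a sketch of the pattern, not Agent Bricks.
def judge_model(prompt: str) -> str:
    # Stub reply in the format the evaluator expects.
    return "score: 4"

def evaluate(question: str, answer: str, rubric: str) -> int:
    prompt = (
        f"Rubric: {rubric}\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Grade the answer 1-5. Reply exactly as 'score: N'."
    )
    reply = judge_model(prompt)
    return int(reply.split(":")[1])

score = evaluate("What is our refund window?", "30 days, per policy doc", "Must cite policy")
print(score)  # 4 with the stubbed judge
```

The industrialization Databricks is pitching is essentially running this loop at scale, with the rubrics, cost tuning and grounding managed by the platform rather than hand-rolled scripts.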

Why It Matters

  • Lowers the barrier to entry and de-risks AI investments by "SaaSifying" the complex process of agent creation and grounding it in observability.
  • Embeds cost control, performance tuning, and compliance into the agent lifecycle.
  • Moves beyond building models to focusing on observability and deployment for automated “decision-makers.”

The CDAO Question: Am I still measuring my AI initiatives on model performance and accuracy, or do I have a clear framework to evaluate their direct, quantifiable impact on business outcomes and ROI?

3. Pipeline Productivity Without Compromising Governance

For all the agent and OLTP talk, the biggest applause was for a long-standing problem: pipelines. Lakeflow GA and the announcement of Lakeflow Designer promised to deliver speed and control for ingestion, transformation, and data flows across business and engineering teams.

What’s New?

  • Lakeflow Designer: Drag-and-drop and GenAI-assisted pipeline builder for analysts that compiles down to Spark SQL, and engineers can edit with changes reflected in the UI.
  • Unity-native governance: Pipelines output production-grade Spark code with CI/CD support.
  • Spark Declarative Pipelines: Formerly known as Delta Live Tables (DLT), the framework has been open-sourced and contributed to Apache Spark as a new industry standard for defining data pipelines.

Why It Matters

  • Bridges the business/engineering divide, accelerating delivery of production-ready, version-controlled data, all without creating shadow IT.
  • Eliminates brittle and unmanaged ETL by unifying batch and streaming under one governable transformation layer.
  • Reduces reliance on external ETL tools, such as Fivetran, dbt, and Informatica.

The CDAO Question: How can I empower my business users to innovate faster without sacrificing lineage, testing, or engineering trust?

4. Migration & Ecosystem Consolidation

Databricks is building more than features—it’s removing barriers. Lakebridge and Zerobus reduce glue-code complexity, making switching to the platform easier than ever.

What’s New?

  • Lakebridge: A free, LLM-powered migration tool from 20+ data warehouse platforms, including Teradata, Oracle, and Snowflake.
  • Zerobus: Real-time ingestion into Unity without Kafka/Kinesis-style message bus overhead.
  • App Framework Expansion: Retool, Gradio, and Streamlit apps deploy natively inside Databricks.

Why It Matters

  • Cuts time and cost in migrating from Oracle, Teradata, and Snowflake.
  • Simplifies real-time architectures by removing the need for specialized engineering teams to manage Kafka or Kinesis for a whole class of high-throughput ingestion use cases.
  • Strengthens Databricks’ position as not just an analytics engine, but an app platform.

Key Questions for Platform Owners: How much of the manual rework and risk can Lakebridge truly eliminate in our complex legacy migrations? Is Zerobus mature enough, and does it have the scope of functionality (e.g., pub-sub) to handle the scope of our mission-critical, real-time production workloads today?

The Wrap: What Every Data & AI Leader Should Ask Now

Databricks’ vision is clear: a single, intelligent platform where data lands, is transformed, is understood, and is acted upon by both humans and AI. While open at different layers, Databricks’ promised simplification centers on the company delivering “Data Intelligence,” with Unity Catalog at its core. This forces every CDAO, CAIO, and CIO to move beyond vendor comparisons and confront fundamental questions about their strategy.

  • The Architectural Question: Do we truly need OLTP and OLAP on a single stack—or is separation still more modular and cost-effective?
  • The Readiness Question: Is our organization prepared for a workforce of production AI agents? This requires a new level of maturity around evaluation, governance, and risk that goes far beyond simple chatbot pilots.
  • The Platform Question: Can we consolidate our data platform without giving up best-of-breed tools or flexibility? What are the risks of lock-in?

These are some questions to begin with, pointing to a bigger challenge that is not about technology, but about leadership. The ultimate question is this: As a data leader, am I prepared to drive the organizational and operational transformation required to capitalize on a truly unified platform?

There’s a lot to unpack in the announcements from the data cloud providers and hyperscalers. Ping me to dig deeper and discuss. Share your thoughts in the comments to continue the conversation.

If you want to learn more
  • Watch LinkedIn Live: Holger Mueller & Mike Ni break down the news from Databricks Data+AI Summit
  • Learn: Databricks on the Core Idea Behind Data Intelligence Platforms
  • Read: Larry Dignan’s breakdown of the Databricks announcements

Adobe launches LLM Optimizer, GenStudio and Firefly updates

Adobe launched LLM Optimizer, an application that aims to enable brands to optimize content and messaging as consumers move from traditional Google search to genAI and LLM-generated summaries.

Adobe's move, announced at the Cannes Lions festival, is designed to address generative engine optimization (GEO). Adobe said GEO is more than an evolution of SEO; it's a new approach to digital marketing. Going forward, brands will need to be seen, cited and chosen by large language models.

LLM Optimizer focuses on three core areas:

  • Presence in AI search. LLM Optimizer helps brands ensure content is visible, accurate and influential in AI-generated responses. LLM Optimizer tracks how brands show up for specific prompts compared to competitors. These prompts are replacements for keywords.
  • Traffic conversion. LLM Optimizer measures two types of traffic: LLM crawler traffic (when AI systems ping sites for information) and referral traffic (users clicking through from AI responses). The tool analyzes how traffic converts to engagement and revenue.
  • Content optimization. LLM Optimizer provides recommendations to improve AI ranking and presence. These optimizations can be deployed with one click via Adobe Experience Manager Sites. LLM Optimizer also provides tips to optimize content beyond brand websites on third-party sources including Reddit and Wikipedia.
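Separating those two traffic types typically comes down to inspecting user agents and referrers in web logs. An illustrative sketch of the idea; the crawler tokens are published bot user agents (GPTBot, ClaudeBot, PerplexityBot), but the log records are invented and this is not how LLM Optimizer itself is implemented:

```python
# Split web hits into LLM crawler traffic vs. AI referral traffic.
# Crawler tokens are published bot user agents; log records are invented.
LLM_CRAWLERS = ("GPTBot", "ClaudeBot", "PerplexityBot")
AI_REFERRERS = ("chatgpt.com", "perplexity.ai")

def classify(hit: dict) -> str:
    """Label one log record as llm_crawler, ai_referral or other."""
    if any(bot in hit.get("user_agent", "") for bot in LLM_CRAWLERS):
        return "llm_crawler"
    if any(ref in hit.get("referrer", "") for ref in AI_REFERRERS):
        return "ai_referral"
    return "other"

hits = [
    {"user_agent": "Mozilla/5.0 (compatible; GPTBot/1.0)", "referrer": ""},
    {"user_agent": "Mozilla/5.0", "referrer": "https://chatgpt.com/"},
    {"user_agent": "Mozilla/5.0", "referrer": "https://www.google.com/"},
]
print([classify(h) for h in hits])  # ['llm_crawler', 'ai_referral', 'other']
```

Crawler hits indicate a brand is being read by AI systems; referral hits indicate it is being cited and clicked, which is the conversion signal marketers actually care about.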

Adobe LLM Optimizer will fit into existing SEO workflows and support Agent-2-Agent and Model Context Protocol.

The new LLM Optimizer application was one part of a broader Cannes Lions rollout. Here's a look:

Adobe showcased GenStudio for Performance Marketing, an AI-first application that enables marketers to create on-brand content connected to campaigns with guardrails. Adobe is positioning GenStudio as the foundation for all stages of the content supply chain with Firefly models underneath.

GenStudio for Performance Marketing's Cannes release includes limited releases for video ads and non-English generation, plus announced support for Amazon Ads. Marketo Engage, Adobe Journey Optimizer B2B Edition, LinkedIn integration, Meta Video, Workfront Proof and third-party digital asset management are all generally available.

Firefly Services will get updates to automate production of asset variations at scale across audiences, channels and regions.

Key Firefly items include:

  • Generate Video APIs are generally available, as are APIs for text to avatar and Substance 3D.
  • Firefly creative actions to resize images and reframe video are in beta.
  • Custom models have been added.

Adobe Express is getting updates for advertisers with enterprise features for scale, governance and efficiency. Features include Workfront integration, streamlined review and approval processes, customized home experiences and AI tools to set up brands with one click.

Anthropic's multi-agent system overview a must read for CIOs

Anthropic outlined how it has built multi-agent systems for Claude Research, and CIOs need to read and heed the practical advice and challenges when thinking through AI agents.

In a post, Anthropic's engineering team laid out how the company built a multi-agent system and there's a lot of practical advice on architecture, orchestrating multiple large language models (LLMs) so they can collaborate, and challenges with reliability and evaluation.

Anthropic created a lead agent as well as subagents. The most interesting part of Anthropic's research was the challenges. Your vendor is likely to tell you that there's an easy button for agentic AI, but Anthropic's post gives you some questions to ask about the architecture behind the marketing.

Here's what CIOs should note:

Multi-agent systems can deliver accurate answers, but they can also burn tokens quickly. Agents use 4x more tokens than chat interactions, and multi-agent systems use about 15x more tokens than chats. "For economic viability, multi-agent systems require tasks where the value of the task is high enough to pay for the increased performance," said Anthropic.

Takeaway: If you use multi-agent systems for tasks where a simpler approach would suffice, you're going to get hit with a big compute bill.
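Those multipliers make the economics easy to rough out. A back-of-envelope sketch using the 4x and 15x figures from Anthropic's post; the token count and per-token price are placeholders for illustration, not real rates:

```python
# Rough cost comparison using Anthropic's reported multipliers:
# agents ~4x chat tokens, multi-agent systems ~15x chat tokens.
CHAT_TOKENS = 2_000      # assumed tokens for a typical chat exchange
PRICE_PER_1K = 0.01      # placeholder rate, not a real price

def cost(multiplier: float) -> float:
    """Dollar cost of one interaction at the given token multiplier."""
    return CHAT_TOKENS * multiplier / 1000 * PRICE_PER_1K

chat, agent, multi = cost(1), cost(4), cost(15)
print(f"chat=${chat:.3f} agent=${agent:.3f} multi-agent=${multi:.3f}")
```

Whatever the actual rates, the ratio is the point: the task's business value has to clear a roughly 15x cost gap before a multi-agent run beats a plain chat.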

Agents can continue working even when they already have sufficient results.

Takeaway: Anthropic said you'll need to "think like your agents and develop a mental model of the agent to improve prompting."

Agents can "duplicate work, leave gaps, or fail to find necessary information" if they don't have detailed task descriptions.

Takeaway: Lead agents need to give detailed instructions to subagents.
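In practice, "detailed instructions" tend to mean a structured task spec rather than a one-line prompt. A hypothetical sketch of what a lead agent might hand each subagent; the field names and schema are invented for illustration, not Anthropic's actual format:

```python
# Hypothetical structured task spec a lead agent could hand a subagent.
# Field names are invented for illustration; Anthropic's format differs.
from dataclasses import dataclass, field

@dataclass
class SubagentTask:
    objective: str                                   # what to find or produce
    output_format: str                               # how results come back
    tools: list = field(default_factory=list)        # which tools are in scope
    boundaries: str = ""                             # what NOT to do, to avoid duplicated work

def render_prompt(task: SubagentTask) -> str:
    return (
        f"Objective: {task.objective}\n"
        f"Allowed tools: {', '.join(task.tools)}\n"
        f"Out of scope: {task.boundaries}\n"
        f"Return results as: {task.output_format}"
    )

task = SubagentTask(
    objective="List the three largest EU cloud providers by 2024 revenue",
    output_format="JSON list of {name, revenue}",
    tools=["web_search"],
    boundaries="Do not research US providers; another subagent covers them",
)
print(render_prompt(task))
```

The boundaries field is what prevents the duplicated work and gaps Anthropic describes: each subagent knows not just its job, but its neighbors' jobs.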

Agents struggle with judging the appropriate effort for different tasks.

Takeaway: You'll have to embed scaling rules in the prompts for tasks. The lead agent should have guidelines to allocate resources for everything from simple queries to complex tasks.
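One way to embed such scaling rules is a simple lookup the lead agent consults before spawning subagents. A toy sketch; the tiers, keyword heuristic and budget numbers are invented for illustration:

```python
# Toy effort-scaling rules: map query complexity to subagent and
# tool-call budgets. Tiers and numbers are invented for illustration.
RULES = {
    "simple":   {"subagents": 1, "tool_calls": 3},
    "compare":  {"subagents": 2, "tool_calls": 10},
    "research": {"subagents": 5, "tool_calls": 25},
}

def budget_for(query: str) -> dict:
    """Crude keyword classifier; a real system would ask the model itself."""
    q = query.lower()
    if any(w in q for w in ("compare", "versus", "vs")):
        return RULES["compare"]
    if any(w in q for w in ("survey", "landscape", "deep dive")):
        return RULES["research"]
    return RULES["simple"]

print(budget_for("What is the capital of France?"))
print(budget_for("Compare Postgres and MySQL replication"))
```

In a production system these budgets would live in the lead agent's prompt rather than code, but the principle is the same: effort is decided up front, not discovered token by token.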

Agents need to use the right tool to be efficient and interfaces between agents and tools are critical. Anthropic used the following example: "An agent searching the web for context that only exists in Slack is doomed from the start."

Takeaway: Without tool descriptions, agents can go down the wrong path. Each tool needs a purpose and a clear description. Anthropic also said to let agents improve prompts by diagnosing failures and suggesting improvements.

Thinking is a process. Anthropic outlined how it extended thinking mode in Claude with multiple process improvements to get from the lead agent's thinking to subagent assignments.

Takeaway: While Anthropic was focused on parallel tooling and creating subagents that can adapt, the biggest lesson here is to think through and understand the process behind how agents do the work.

Minor changes in agentic systems can cascade into large behavioral changes, and debugging is difficult (and needs to happen on the fly).

Takeaway: Anthropic created a system that goes beyond standard observability to monitor agent decision patterns and interactions without tracking individual conversations. Observability tools will be critical to any agent platform.

Deployments are going to be difficult with multi-agent systems.

Takeaway: Anthropic said it doesn't update agents at the same time because it doesn't want to disrupt operations. How many enterprise outages will we have because bad code brought down autonomous agent operations?

A tour of enterprise tech inflection points

Technology executives are tossing around the term inflection point a good bit when it comes to agentic AI, quantum computing and any other not-quite-ready-for-primetime technology.

With that in mind, here's a tour of tech inflection points to watch. The issue with inflection points is that they don't have time frames. Where relevant, I dropped a time frame into my believability scores.

Quantum computing accelerates

In a week where IBM outlined its quantum computing roadmap to a fault-tolerant quantum system by 2029 and IonQ bought Oxford Ionics for more than $1 billion, Nvidia CEO Jensen Huang said the technology is at an inflection point.

"Quantum computing is reaching an inflection point. We've been working with quantum computing companies all over the world in several different ways, but here in Europe, there's a large community," said Huang. "It is clear now we're within reach of being able to apply quantum computing and classical computing in areas that can solve some interesting problems in the coming years."

That’s quite a walk back from comments made in January, but ok.

Huang said every next-generation supercomputer will have a quantum processor connected to GPUs. Nvidia has made its libraries available to quantum systems.

Both IonQ and IBM have big plans to scale quantum computers and network them together.

IBM CEO Arvind Krishna said the company is leaning into its R&D to scale out quantum computing for multiple use cases including drug development, materials discovery, chemistry, and optimization.

At Constellation Research, we have a watercooler thread, and the debate heated up about this quantum inflection point. In one corner was Holger Mueller, who has argued it's the year of quantum computing (for the last three years). Mueller said CxOs need to think through quantum computing as part of long-term planning.

Esteban Kolsky, an analyst at Constellation Research and our chief distiller, said there are more real-world technologies to figure out and quantum is a lot of hype.

Mueller vs. Kolsky will be a fun great debate on quantum. My take is that there will be a quantum inflection point, and it's closer than you think. Predicting the time frame is another matter entirely.

My inflection point believability on scale of 1 to 10 with a three-year time horizon: 7.

Data platforms and AI converge

The takeaways from Snowflake Summit, Databricks Data + AI Summit and Salesforce's Informatica acquisition are that data platforms and AI are going to converge if agents are going to get work done.

If you've been watching the broad AI agent efforts from the likes of AWS, Microsoft Azure and Google Cloud, you'll notice all of them are tethered to data stores and data fabrics.

Given that backdrop, it's no surprise that Snowflake and Databricks are leaning the same way. Databricks appears to be more aggressive.

Constellation Research analyst Michael Ni said: "We’re entering a new era where data clouds and hyperscalers are racing to establish themselves as the dominant platform for AI-driven decision-making in their respective markets. The competition is no longer about warehouse performance—it’s about who owns the semantic layer, who governs the agent lifecycle, and who enables the next-gen data app ecosystem. With Lakebase, Agent Bricks, and Unity Catalog metrics, Databricks is asserting that ownership more broadly than ever before."

It's worth noting that JPMorgan Chase CEO Jamie Dimon said that data is still way harder than delivering on AI. It stands to reason that we're at an inflection point where agentic AI is really an extension of the data platform.

My inflection point believability on scale of 1 to 10: 9.

Agentic AI

"We're just starting to use agents," Dimon at the Databricks conference. See: JPMorgan Chase's Dimon on AI, data, cybersecurity and managing tech shifts

If JPMorgan Chase, which has an $18 billion technology budget, is just starting with AI agents, where do you think the rest of the enterprises sit?

To be sure, we have multiple inflection points with agentic AI. Consider:

  • We're at an inflection point of vendor marketing about AI agents.
  • We're at an inflection point for AI agent standards and solving issues where these automated workers can share data and collaborate.
  • We're at an inflection point where enterprises are going to swallow consumption models from SaaS vendors.
  • And we're at an inflection point where every board wants an AI agent strategy to automate work.

Anthropic CEO Dario Amodei laid out the agentic AI dream. Speaking during Databricks Data + AI Summit, he said humans will go from conversing and collaborating with AI agents to developing fleets.

"An agent fleet is where a number of agents do things for you, and you are essentially the manager of the agents. It'll go from agent fleets to agent swarms, just when each agent in the fleet itself employs something. And so the human engineer is sitting the top hierarchy, like they're managing an organization or a company, and they still need to intervene. They still need to set direction," said Amodei.

Mainstream adoption? Not yet, but think 2026 for some scale. The biggest issue now is getting these agents to work well, and everything we're hearing is there's a lot to do before going to production. We've covered this topic plenty, so let's move on.

My inflection point believability on scale of 1 to 10: 7.

Superintelligence

If you read OpenAI CEO Sam Altman's missive this week, you have a good feel for the vision even though it's a little murky.

We're at an inflection point for bold statements about AI.

"In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else," said Altman.

This superintelligence thing will be great for humanity—or not. "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before," said Altman. "We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big."

All we have to do is get alignment on what society wants from AI systems and democratize access.

My inflection point believability on scale of 1 to 10: 2. Why? Society is in no place to reach consensus on something like AI superintelligence. Silicon Valley will deliver superintelligence and apologize later.

Apple is missing the AI revolution and peak Apple has passed

Given the ho-hum reaction to Apple's WWDC 2025 and lack of Apple Intelligence progress, it appears that Apple is treading water before a downswing. Like many large tech vendors, Apple appears to be missing the AI curve.

Craig Federighi, SVP of Software Engineering at Apple, said during the WWDC keynote that "we're continuing our work to deliver the features that make Siri even more personal. This work needed more time to reach our high quality bar, and we look forward to sharing more about it in the coming year."

And with that confession at the altar of AI, Federighi and his band of executives spent the rest of the keynote talking about the redesign of its operating systems across devices called Liquid Glass.

It remains to be seen whether Apple can catch up in AI and become more than a mere vessel for other innovators. Ben Thompson at Stratechery said Apple retreated into the familiar. Others complained that Apple just recreated Windows Vista. VisionOS 26 was interesting though.

Either way, Apple's WWDC and product cycles have lost their buzz, but I'd hold off on the obit. One thing is clear: Apple has the resources to miss the AI curve to some degree and milk services revenue until the company figures it out. In its latest quarter, Apple generated $24 billion in operating cash flow, returned $29 billion to shareholders and had more than $28 billion in cash.

My inflection point believability on scale of 1 to 10: 9. The problem for Apple is the curve is headed in the wrong direction. We've passed peak Apple.


The problem with Meta chasing superintelligence

Meta has hired Scale CEO Alexandr Wang to oversee its AI efforts as it pursues superintelligence. With the worst kept secret in Silicon Valley out of the way, it's time to ponder one massive, nagging question: Can an effort that may depend on social media data really be superintelligent?

One of the tried-and-true axioms is "garbage in, garbage out," or GIGO. Any system with low-quality input is going to give you garbage. Let's face it: there is nothing "super" about social media, and spare me the "intelligent" argument.

So now, Mark Zuckerberg and Meta are revamping the AI strategy via the Scale AI investment and reportedly building an AI dream team. Wang reportedly is just the start of this AI supergroup. Meta has a bit of envy about big statements from the likes of OpenAI's Sam Altman and Anthropic's Dario Amodei.

And just in case you think GIGO is so yesterday, check out TechCrunch's tale on the Meta AI app, which just might be a "viral mess."

Meta's reported $14.3 billion investment in Scale AI, which is now valued at $29 billion, is one expensive acquihire. Maybe this bold move by Meta works, but I'm still stuck on GIGO. The secret sauce to any superintelligent model is going to be the proprietary data. In Meta's case, that's Facebook, Instagram and WhatsApp, even if it's just a small subset of overall training data. If Meta's Llama models are built on the same data every other model uses, there's no value add.

For what it's worth, I have the same GIGO concerns about Grok, no matter how much its responses impress me. Why? I know there's X data in there somewhere.

Wang said leading Meta's AI efforts was a once-in-a-lifetime opportunity.

Scale AI will name Jason Droege interim CEO. Droege has a strong background, and it wouldn't be surprising if Scale AI is the ultimate winner in the end. Wang noted that Meta's investment will be distributed among shareholders and vested equity holders.


Adobe's AI strategy, monetization 'feels really good right now'

Adobe said its various AI offerings are driving usage and monetization as the company delivered better-than-expected second quarter results.

CEO Shantanu Narayen said AI is becoming a "nice tailwind" for the business and adoption as customers either pay for higher-tier plans for AI or buy individual features.

Narayen breaks down the AI effect into "AI influence revenue" (innovation that drives usage and higher subscription revenue in products like DX, Acrobat and Creative Cloud) and direct revenue from the standalone Firefly app, Creative Cloud Pro and GenStudio.

Adobe has laid out a strategy where it is targeting business professionals and consumers as well as creative and marketing pros. The approach gives Adobe a well-diversified customer base.

Narayen explained:

"The AI influence revenue is already in the billions because that speaks to the value that people are getting across both our DX products, Acrobat products as well as the Creative products. So across the board, there's no question that AI is a nice tailwind as it relates to adoption. And we also said we're tracking ahead of the $250 million of ARR."

"The immense opportunity is all ahead of us. And as we get this entire offering that we keep talking about, which is Acrobat; Express; Firefly single app; Creative Cloud Pro, which includes Firefly; GenStudio; and the AEP and apps, each one of them, we think, has a tremendous opportunity ahead of us. So it's very early in terms of the AI monetization, but we're very advanced in terms of how much innovation we've delivered. And so it feels really good right now."

That AI effect is showing up in Adobe's earnings report, which appears to have satisfied Wall Street. In most quarters, Adobe reports strong financials and gets walloped afterward due to monetization worries or fears about trailing in AI.

Rest assured those worries are still there. Adobe executives were repeatedly asked about competition from Meta and other model disruptors to Creative Cloud, Adobe Express and the rest of the portfolio.

David Wadhwani, general manager of Adobe's Digital Media business, said the company is moving key apps like Firefly to mobile and is driving its model based on data and the web journey optimization it offers enterprises. The lessons from Acrobat's AI upsell are being applied elsewhere. "We onboarded into 8,000 new businesses in the quarter with Express," he said.

Narayen said the Adobe strategy has been to drive adoption of key AI tools like Firefly and Acrobat AI Assistant and then drive monetization. Ultimately, Creative Cloud Pro will have the bundle of Adobe's best AI features. The Adobe CEO also said Adobe is seeing strength in its marketing offerings too and is automating workflows. "The North Star is the combination of creativity and productivity driving growth for us," he said.

The numbers

Adobe reported second quarter earnings of $1.69 billion, or $3.94 a share, on revenue of $5.87 billion, up 11% from a year ago. Non-GAAP earnings were $5.06 a share. The results and outlook topped Wall Street estimates. 

  • Digital Media revenue was up 11%.
  • Digital Experience revenue was up 10%.
  • Business professional and consumer group subscription revenue was up 15%.
  • Creative and marketing professional group revenue was up 10%.

As for the outlook, Adobe said third quarter revenue will be between $5.87 billion and $5.92 billion with non-GAAP earnings of $5.15 a share to $5.20 a share.

For fiscal 2025, Adobe projected revenue of $23.5 billion to $23.6 billion with non-GAAP earnings of $20.50 a share to $20.70 a share.
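As a quick sanity check on those figures, the reported 11% growth rate and the guidance ranges can be reduced to a couple of midpoints (back-of-the-envelope arithmetic on the numbers above, not additional Adobe disclosures):

```python
# Back-of-the-envelope math on Adobe's reported Q2 figures and guidance.
q2_revenue = 5.87e9   # reported Q2 revenue
yoy_growth = 0.11     # up 11% from a year ago

# Revenue a year ago implied by the growth rate: revenue / (1 + growth).
implied_year_ago = q2_revenue / (1 + yoy_growth)
print(f"Implied year-ago Q2 revenue: ${implied_year_ago / 1e9:.2f}B")

# Midpoints of the Q3 and fiscal 2025 revenue guidance ranges.
q3_mid = (5.87e9 + 5.92e9) / 2
fy_mid = (23.5e9 + 23.6e9) / 2
print(f"Q3 guidance midpoint: ${q3_mid / 1e9:.3f}B")
print(f"FY25 guidance midpoint: ${fy_mid / 1e9:.2f}B")
```

That works out to roughly $5.29 billion in the year-ago quarter, a $5.895 billion Q3 midpoint and a $23.55 billion full-year midpoint.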

 


AMD eyes AI inference gains with new Instinct accelerators, GPU, open rack systems

AMD launched new Instinct MI350 Series accelerators, previewed Instinct MI400 Series GPUs and outlined its next-gen AI rack systems that integrate the company's stack.

The upshot is that AMD is going after AI workloads as inferencing has ballooned and models for multiple use cases have proliferated.

Speaking at AMD's Advancing AI 2025 conference, CEO Lisa Su said training is critical to developing models, but there's a bigger picture. "We are seeing an explosion of models, especially models for specific uses such as coding, healthcare and finance," said Su. "Over the next few years we expect hundreds of thousands, and eventually millions of purpose-built models each tuned for specific tasks, industries or use cases."

That selection of models and use cases will drive compute requirements.

Here's what the company launched at its event:

  • AMD launched its Instinct MI350 Series, which delivers 4x more performance than its predecessor and up to a 35x gain in inferencing performance. The AI accelerators come in air-cooled and direct liquid-cooled options.

  • AMD previewed its upcoming Instinct MI400 Series GPUs with a 10x performance increase.
  • The next-gen Helios AI rack infrastructure was also previewed. That rack system is optimized for AI workloads and integrates MI400 GPUs, EPYC CPUs and Pensando NICs.
  • AMD is also building out its software stack led by its ROCm platform. The company touted 3.5x inference gains in its upcoming ROCm 7 release.
  • ROCm can run more than 1.8 million Hugging Face models.
  • The company launched its AMD Developer Cloud with ROCm and AMD GPU access.
  • AMD also touted traction for its Instinct GPUs with cloud providers such as AWS, DigitalOcean, Meta, Microsoft and Oracle Cloud. System giants such as Dell, HPE and Supermicro are also building out with AMD Instinct MI350 Series GPUs.
  • AMD's Su laid out a roadmap through 2027 that not only includes processors but open rack-scale designs that'll be used by hardware partners. AMD recently sold the manufacturing operations of ZT Systems.

With the launches, AMD is making a play for inference workloads. Nvidia is best known for training workloads, but also has a big footprint in AI inferencing. AMD is looking to be a counterweight to Nvidia and also gain share as the AI total addressable market expands.

Indeed, AMD brought along some powerful references, including Meta, which deployed Instinct MI300X for Llama 3 and Llama 4 inference. OpenAI CEO Sam Altman touted the AMD partnership, as did Oracle, HUMAIN, Microsoft, Cohere and others.
