Results

Databricks natively integrates Google Cloud Gemini models

Databricks said that Google Cloud's Gemini models will be available natively within its Databricks Data Intelligence Platform. Databricks also said that it extended a partnership with Microsoft Azure.

The deals were announced at Databricks Data + AI Summit.

According to Databricks, Gemini models will be available for AI agent use cases within Databricks. The companies said their expanded partnership will enable customers to deploy Google Gemini 2.5 models without moving data.

Gemini models will be available to Databricks customers directly through SQL queries and model endpoints without data duplication or integrations. Enterprises can pay for Gemini usage through the Databricks contract.

Key points about the Google Cloud-Databricks partnership:

  • Gemini models will be available and billed through Databricks.
  • Gemini models can be used to create AI agents using datasets within Databricks.
  • Databricks Unity Catalog will handle governance and compliance.
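
The "SQL queries and model endpoints" access pattern can be sketched as follows. Everything here is an illustrative assumption, not a confirmed detail of the integration: the workspace URL, the endpoint name, and the OpenAI-style chat payload are all hypothetical.

```python
import json

# Sketch of invoking a Gemini model served from Databricks. The workspace URL,
# endpoint name, and request shape below are assumptions for illustration only.
WORKSPACE_URL = "https://example.cloud.databricks.com"
ENDPOINT = "gemini-2-5-pro"  # hypothetical serving-endpoint name

def build_chat_request(prompt: str, token: str):
    """Assemble the URL, headers, and JSON body for a chat-style invocation."""
    url = f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT}/invocations"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }).encode()
    return url, headers, body

url, headers, body = build_chat_request("Summarize Q2 sales by region.", "TOKEN")
```

Per the points above, governance would run through Unity Catalog and billing through the existing Databricks contract, so no separate Google Cloud credentials would appear in a flow like this.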

Separately, Databricks said it expanded a partnership with Microsoft on Azure Databricks, which has been available since 2018.

Microsoft and Databricks have rolled out a series of native integrations between Azure Databricks, Azure AI Foundry and Microsoft Power Platform as well as SAP Databricks on Azure.

Oracle Cloud’s annual revenue run rate exiting Q4 nears $27 billion

Oracle's cloud infrastructure business posted revenue growth of 52% in the fourth quarter as its results were better than expected.

The company reported fourth quarter earnings of $1.19 a share on revenue of $15.9 billion, up 11% from a year ago. Non-GAAP earnings were $1.70 a share.

Wall Street was looking for Oracle to report fourth quarter earnings of $1.64 a share on revenue of $15.59 billion.

Perhaps the biggest takeaway in the quarter is that Oracle's infrastructure-as-a-service unit is closing in on the company's SaaS revenue. In the fourth quarter, Oracle Cloud Infrastructure (OCI) had revenue of $3 billion, up 52%, and the SaaS unit delivered sales of $3.7 billion, up 12%.

Oracle's cloud revenue including software and infrastructure was $6.7 billion, up 27% from a year ago.

For fiscal 2025, Oracle reported earnings of $12.4 billion, or $4.34 a share, on revenue of $57.4 billion, up 9% from a year ago.

Oracle CEO Safra Catz said fiscal 2025 was a good year, but fiscal 2026 should deliver "dramatically higher" revenue growth rates. She noted that fiscal 2026 total cloud growth rate should be more than 40%, up from 24% in fiscal 2025. "Oracle is well on its way to being not only the world's largest cloud application company—but also one of the world's largest cloud infrastructure companies," she said.

CTO Larry Ellison said that "multicloud database revenue from Amazon, Google and Azure grew 115% from Q3 to Q4." He added that OCI has 23 multicloud data centers live with 47 on tap in the next 12 months. Ellison added that triple-digit revenue growth for multicloud should continue.

Oracle's cloud business is on an annual run rate approaching $27 billion. Google Cloud is more than $49 billion on an annual run rate. AWS is more than $117 billion. Microsoft Cloud's annual revenue run rate was more than $169 billion including sales across all units. The annual revenue run rate for Microsoft Intelligent Cloud, which includes Azure, is more than $107 billion.
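
These run-rate figures follow the usual convention of annualizing the latest quarter, i.e. multiplying quarterly revenue by four:

```python
# Back-of-envelope check on the run-rate math: an annual run rate is
# conventionally the latest quarter's revenue multiplied by four.
def annual_run_rate(quarterly_revenue_billions: float) -> float:
    return quarterly_revenue_billions * 4

# Oracle's total cloud revenue (SaaS + IaaS) was $6.7 billion in Q4.
print(annual_run_rate(6.7))  # 26.8 -- "approaching $27 billion"
```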

Highlights from the earnings call include:

  • Catz said fourth quarter capital spending was $9.1 billion with the "vast majority of our CapEx in revenue generating equipment that was going into data centers and not for land or buildings."
  • In fiscal 2026, Catz said capital expenditures will be more than $25 billion "to meet demand from our backlog."  
  • Cloud infrastructure revenue will grow more than 70% in fiscal 2026, said Catz.
  • For the first quarter, Oracle projected total cloud revenue growth of 26% to 30%. Non-GAAP earnings for the quarter will be between $1.44 and $1.48 a share. 
  • Ellison said Oracle has developed an integrated AI agent suite for ERP, supply chain, manufacturing, human resources, customer engagement and industry applications. Ellison said Oracle's database has been critical to building out AI agents. 
  • "These other companies say they have all the data, so they can do AI really well. They can build all these AI agents on top of all of that data," said Ellison. "The only problem with that statement is they don't have all the data we do. We have most of the world's valuable data. The vast majority of it is in an Oracle database. Our applications take all of your application data and make that data available to the most popular AI models."
  • Ellison said Oracle's suite approach to applications and AI agents is resonating. "Companies don't really enjoy buying applications from five different vendors and then making all of those applications work together," said Ellison. "We're seeing a lot of companies buying those basically saying, I'm going to go all Oracle. I'm going to buy the complete Oracle suite."
  • "Our intent is to have our biggest customers use us as a one-stop shop: buy the entire suite to run their enterprise from us. And that gets rid of a lot of headaches. Everything is in the same database. Everything comes with the same AI data platform. With it, all the analytics are there. You don't have to do the system integration," said Ellison.
  • Oracle still has more demand than it can meet. "I am still in a position where our supply is not meeting our demands," said Catz. "We actually currently are still waving off customers or scheduling them out into the future so that we have enough supply to meet demand."

Constellation Research's take

Holger Mueller, an analyst at Constellation Research, said:

"Oracle is doubling down on IaaS revenue and benefitting from the popularity of the multicloud offering of the Oracle Database. This marks the first quarter where Oracle even accepted a negative cash flow. It is clear where Safra Catz and Larry Ellison see the opportunity.

In the concluded fiscal year alone the CapEx tripled from $7 billion to $21 billion in Q4. For a long time Oracle settled CapEx at 50% of free cash flow - no more. Last fiscal year's accumulated CapEx was about $27 billion, and Q4 alone was $21.2 billion.

How long can Oracle afford that investment level into data centers? But Oracle is more about infrastructure than ever before - and the only vendor who was relevant in the 1990s to make that transition. It will be key to show growth in regions (all have grown YoY) beyond its Americas region. Many international data centers are just opening now."

 

JPMorgan Chase's Dimon on AI, data, cybersecurity and managing tech shifts

JPMorgan Chase CEO Jamie Dimon said artificial intelligence shouldn't be a part of the technology org since it impacts all of the business. Dimon also gave his take on data, cybersecurity, management and technology shifts.

Speaking at the Databricks Data + AI Summit, Dimon in an interview with CEO Ali Ghodsi said:

"We took AI and data out of technology. It's too important, and technology does a great job and is a deep partner. But we put AI at that management table. Dr. Manuela Veloso, head of JPMorgan Chase AI Research, reports to me and our president. Are we doing enough? Are we doing it right? There will be no job, no process, no function that won't be affected by AI--mostly for the positive. It's about getting all of the people who run these businesses to understand the power of it."

JPMorgan Chase has an $18 billion IT budget, and Dimon added that data is everything. "We buy and sell $3 trillion of securities every day," said Dimon.

Dimon said the head of AI is at every single meeting he has with management teams so the bank can stay on top of things. "We have investments in 100 different companies out here. We're testing and learning," he said.

The results so far have been impressive.

Other takeaways from Dimon's talk:

Starting in AI. "In 2012, we first used Palantir. And I remember sitting down with the Palantir people and going through this AI thing, saying, Holy Christ. This is unbelievable. So we, right after that, started our own department. In 2014, we hired Veloso from Carnegie Mellon. We now have 600 actual end use cases and that number will probably double next year."

The data foundation. Dimon, whose bank is a long-time Databricks customer, said the data foundation is critical. "Data is the hardest part. It isn't the AI or machine learning," said Dimon. "Getting the data in the form that's usable is the hard part."

AI isn't deflationary yet. Dimon said JPMorgan Chase is spending $2 billion on AI and getting returns, but it's too early to say AI is deflationary.

"The chips are getting faster and cheaper, and maybe optical chips will get better down the road. But power costs aren't going down. The cost of land isn't going down. The cost of structures isn't going down. You know, the cost of inferencing is dropping dramatically, and you guys will find a million ways to reduce costs. My own view is, in the next couple of years, we're gonna be spending a lot more money. We need more lumber and more cement and more steel and more grids and more gas plants. We will be deflationary, just not quite yet."

Technology shifts. Dimon said technology has always changed humanity. "I've always had technology at the table and part of the management team. The tech people have to be business literate. They have to understand your problems," said Dimon. "Business people need to be tech literate. I don't have to understand how a lithium battery works. I don't have to understand exactly how your super agents work. I have to understand what they can do so I can deploy it. I tell people we're going to be the best at AI--large models, small models, this cloud, that cloud. Just use the technology to do a better job or you'll be left behind."

Cybersecurity. Dimon said cybersecurity is "the thing that scares me the most." "People are now using agents to try to penetrate major companies," said Dimon. "The bad guys are already using AI and agents. The cyber warfare is here in our computers, skies, satellites and wireless. I don't think a lot of Americans realize."

Dimon added that cybersecurity is a public-private partnership.

"We often inform the government before they know what's going on with certain things about North Koreans or Chinese actors out there. I remember President Obama asked me years ago, what do I do if a bank went down. I said you should have to have a bank holiday. You would have no choice, but to try to recover the data. We all have backed up data and all the things like that. But if I had to tell you all how many times the failsafe systems didn't work in my life, it would be almost every time. It didn't always anticipate what caused the problem and maybe AI will help us do a better job of that."

Management. Dimon was an early mover on return to office planning and walked through his approach to managing and the goals of having people in person at the office. A few quotes:

  • "You have to go out of your way to get the best of people. And it's amazing what it does for a country, university and a company."
  • "You should fire the assholes. It only takes a few of them to destroy a meeting, and sometimes those assholes are customers. I have fired customers because they're just so rude to our people."
  • "You gotta have a little bit of grit. It's hard. You know, these jobs are hard. You have to, you know, get the answers to problems found. So my general reaction when something's going wrong is I get all the people in the room, and we work it over and over and over and over. I'm not guessing. And usually the answer is found."
  • "It's hard to do management by Hollywood Squares, because I can have a real honest conversation with you. I'm sitting in front of you. I didn't realize that people are looking at their phones rather than at you. If your notifications are coming up, your emails are coming up, you're not paying attention. You know, when you're with me, you get 100% of my attention. 100% of the time I've done the whole pre-read. You have to have certain disciplines, or you lose. It's how management teams have to work."

A presidential run? "I'd get all the rich white people to vote for me, but I am a banker. I'm 69 and I always say I would love to be president, but you have to anoint me. I've never run for office. I don't have that skill. Now maybe if you tell me what to do, what to say, and where to go, but I've never done that, so I think the answer is no. I will do anything to help the country," said Dimon.

 

CR CX Convos: Live from PegaWorld 2025 with Tara DeZao

AI isn't about replacing marketers - it's about empowering them. Constellation analyst Liz Miller sits down with Pegasystems product marketing whiz Tara DeZao to discuss marketing transformation through AI partnerships and ushering in the next wave of collaborative customer experience.

Key takeaways:

📌 AI helps overcome the 'blank page' challenge
📌 Authenticity remains at the heart of great marketing
📌 Decisioning trumps data overwhelm
📌 Customer journeys need orchestration, not rigid paths

Watch the full conversation to learn more!

On CR Conversations: watch the embedded video at https://www.youtube.com/embed/utqIrUuLpUo

Databricks launches Mosaic Agent Bricks, Lakeflow Designer, Lakebase

Databricks launched Mosaic Agent Bricks, a workspace for creating AI agents that are production-ready, accurate and cost-efficient, along with Lakeflow Designer and Lakebase, a transactional database.

Agent Bricks advances the Databricks approach with Mosaic AI, which can build agent systems delivering domain-specific results, and aims to move AI agents into production. Agent Bricks will automatically generate domain-specific synthetic data and task-based benchmarks.

Databricks CEO Ali Ghodsi said Agent Bricks is a "new way of building and deploying AI agents that can reason on your data." Databricks kicked off its Data + AI Summit in San Francisco.

Databricks said Agent Bricks will help build scalable AI agents that don't hallucinate, evaluate what makes a good result and build a synthetic data set that mirrors customer data. Agents built with Agent Bricks are auto-optimized.

With Agent Bricks, customers will be able to describe a high-level problem and Databricks will create LLM judges, generate synthetic data and auto optimize to create a grounding loop.

Databricks cited AstraZeneca and Hawaiian Electric as early adopters of Agent Bricks that moved from proof of concept to production in minutes instead of days.

By taking the guesswork out of creating production AI agents, Databricks is looking to scale agentic AI as well as drive consumption of its data platform.

Key points about Mosaic Agent Bricks:

  • Agent Bricks generates task-specific evaluations and LLM judges to assess quality.
  • Synthetic data is created that looks like customer data to supplement learning.
  • Agent Bricks uses multiple techniques to optimize agents.
  • Customers can balance quality and cost for agent results.
  • Use cases for Agent Bricks include information extraction, knowledge supplementation, custom LLM agents and multi-agent supervision.
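
The loop described above (synthetic data, LLM judges, auto-optimization balancing quality and cost) can be sketched minimally. Every function and config name here is a stand-in invented for illustration, not the Agent Bricks API:

```python
import random

# Illustrative sketch of a judge-driven optimization loop: generate synthetic
# examples, score candidate agent configs with a (stubbed) LLM judge, and keep
# the best quality/cost trade-off. All names here are hypothetical stand-ins.
random.seed(0)

def synthetic_examples(n: int) -> list:
    # Stand-in for domain-specific synthetic data generation.
    return [f"example-{i}" for i in range(n)]

def judge(answer: str) -> float:
    # Stand-in for an LLM judge returning a quality score in [0, 1].
    return random.random()

def evaluate(config: dict, examples: list) -> float:
    quality = sum(judge(f"{config['model']}:{e}") for e in examples) / len(examples)
    return quality - 0.1 * config["cost"]  # penalize cost against quality

candidates = [
    {"model": "small", "cost": 1.0},
    {"model": "large", "cost": 3.0},
]
examples = synthetic_examples(8)
best = max(candidates, key=lambda c: evaluate(c, examples))
```

The interesting design point is the last line: the platform, not the developer, searches the configuration space once the judge defines "good."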

Databricks' announcements landed a week after Snowflake Summit 2025.

A common theme from both Databricks and Snowflake was that data platforms and AI are increasingly connected and that database technology was built for a different era. Both Databricks and Snowflake have doubled down on Postgres as a base for new AI applications. The general idea is to get data to AI applications in the most cost-efficient manner.

In addition, Databricks is looking to combine data and AI so enterprises can define objectives using natural language and then the platform handles the rest of the process--data prep and features; model building with fine tuning; deployment of tools, retrieval models and agent sharing; evaluation (both automated and human); and governance and modeling. Databricks' big argument is that data intelligence needs to touch every application with analytics, AI and database.

Databricks also launched Mosaic AI support for serverless GPUs and MLflow 3.0, a platform for managing the AI lifecycle.

Lakeflow Designer

Separately, Databricks launched Lakeflow Designer, a no-code to code-first pipeline tool that gives builders a common language.

Lakeflow Designer is backed by Lakeflow, which is now generally available and has no-code connectors that can create pipelines with a single line of SQL. Lakeflow Designer features no-code ETL with scale, access control and AI support.

Key items about Lakeflow Designer include:

  • Lakeflow Designer has a drag-and-drop UI so business analysts can build pipelines as easily as data engineers.
  • Lakeflow Designer is backed by Databricks Lakeflow, Unity Catalog and generative AI features.
  • Lakeflow Designer will be launched in private preview.
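
One way to picture the no-code-to-code idea: the visual pipeline serializes to a declarative spec that engineers can review and tweak as ordinary code. The spec format and step names below are invented for illustration; they are not Lakeflow's actual representation.

```python
# Hypothetical sketch: a drag-and-drop pipeline saved as a declarative spec,
# then executed step by step. Format and op names are invented for illustration.
PIPELINE_SPEC = [
    {"op": "read",   "arg": [3, 1, 2, None]},
    {"op": "filter", "arg": None},          # drop null rows
    {"op": "sort",   "arg": None},          # order the result
]

def run_pipeline(spec):
    data = []
    for step in spec:
        if step["op"] == "read":
            data = list(step["arg"])
        elif step["op"] == "filter":
            data = [x for x in data if x is not None]
        elif step["op"] == "sort":
            data = sorted(data)
    return data

print(run_pipeline(PIPELINE_SPEC))  # [1, 2, 3]
```

The point of such a shared artifact is Ni's collaboration model: the analyst edits the spec visually while the engineer reviews the same spec as code.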

Constellation Research analyst Michael Ni said:

"This isn’t just about scale—it’s about unlocking the 90% of questions that never make it to engineering. From campaign lift tracking to territory planning, Lakeflow Designer lets business teams define and ship data products using no/low-code tools that don’t get thrown away. Lakeflow Designer is the Canva of ETL: instant, visual, AI-assisted—yet under the hood, it’s Spark SQL at machine scale. The business analyst designs, and the data engineers can review and tweak collaboratively with the analyst. The engine industrializes it with full transparency and trust."

In addition, Lakeflow and Lakeflow Designer will rhyme with Snowflake's OpenFlow. "Lakeflow and OpenFlow reflect two philosophies: Databricks integrates data engineering into a Spark-native, open orchestration fabric, while Snowflake’s OpenFlow offers declarative workflow control with deep Snowflake-native semantics. One favors flexibility and openness; the other favors consolidation and simplicity," said Ni. 

Databricks eyes transactional data with Lakebase

Databricks also announced Lakebase, a transactional database engine where data is stored in low-cost lakes for easy access to AI applications.

Lakebase is Databricks' effort to address what databases need to do for AI applications. Databricks argued that Lakebase is designed for AI due to the following characteristics:

  • Lakebase separates compute and storage, which enables very low latency, high queries per second and 99.999% uptime.
  • Lakebase is built on open source Postgres and supports community extensions.
  • Lakebase is built for AI since it launches in less than a second, lets customers pay for only what they use and manages changes well.
  • Lakebase runs on a Postgres OLTP engine, shares DNA with Neon and has fully managed pipelines for data sync.
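
For context on the 99.999% claim, "five nines" of uptime translates to only a few minutes of allowed downtime per year:

```python
# Convert an uptime percentage into allowed downtime per year.
def downtime_minutes_per_year(uptime_pct: float) -> float:
    minutes_per_year = 365.25 * 24 * 60
    return (1 - uptime_pct / 100) * minutes_per_year

print(round(downtime_minutes_per_year(99.999), 2))  # ~5.26 minutes a year
```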

Databricks also announced the following:

  • Lakebridge, a tooling set that's free and aimed at predictable migrations. Lakebridge features a warehouse profiler, code converter, data migration and validation with support for more than 20 legacy data warehouses.
  • Databricks Apps, which are governed data intelligence apps on Databricks, are generally available.
  • Unity Catalog Metrics, which defines metrics in one place and provides dashboards and notebooks across an enterprise. Unity Catalog Metrics also works with AI/BI Genie to promote novel questions in certified semantics. 
  • Databricks One, a version of Databricks designed for business teams that's in public preview. Databricks One has an intuitive customer experience, simple administration and Unity Catalog.
  • Community Edition: Databricks Community Edition was updated and includes most features. Developers can learn and experiment with data and AI use cases and Databricks is spending $100 million on programs for education.

Ni added:

"We’re entering a new era where data clouds and hyperscalers are racing to establish themselves as the dominant platform for AI-driven decision-making in their respective markets. The competition is no longer about warehouse performance—it’s about who owns the semantic layer, who governs the agent lifecycle, and who enables the next-gen data app ecosystem. With Lakebase, Agent Bricks, and Unity Catalog metrics, Databricks is asserting that ownership more broadly than ever before."

Nvidia adds AWS, Microsoft Azure to DGX Cloud Lepton marketplace

Nvidia expanded its DGX Cloud Lepton GPU marketplace with the addition of AWS and Microsoft Azure to its roster of providers.

DGX Cloud Lepton is a marketplace that unifies Nvidia GPU resources across providers and regions. The marketplace is also integrated with Nvidia's AI stack for microservice containers, multiple large language models and management.

At Nvidia GTC Paris, the company said its global compute marketplace, launched at Computex, is adding a bevy of EU providers including Mistral AI, Nebius, Nscale, Firebird, Fluidstack, Hydra Host, Scaleway and Together AI. Nvidia CEO Jensen Huang said DGX Cloud Lepton "connects developers to GPU compute powering a virtual global AI factory."

AWS and Microsoft Azure will be the first large-scale providers contributing Nvidia Blackwell GPUs to DGX Cloud Lepton. CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda and Yotta Data Services are already participating in DGX Cloud Lepton.

In addition, Nvidia said that Hugging Face will roll out Training Cluster as a Service integrating with DGX Cloud Lepton so researchers and developers can tap into GPU compute. Mirror Physics, Project Numina and the Telethon Institute of Genetics and Medicine will be among the first Hugging Face customers to access Training Cluster as a Service, which uses DGX Cloud Lepton for compute.

Nvidia also said it is working with European venture capital firms Accel, Elaia, Partech and Sofinnova Partners to provide up to $100,000 in DGX Cloud Lepton credits to startups.

Enterprises with early access to DGX Cloud Lepton include Basecamp Research, EY, Outerbounds, Prime Mente and Reflection.

Nvidia outlines EU AI expansion, ecosystem, sovereign models

Nvidia outlined plans to scale AI factories and research hubs in Europe, announced expanded partnerships with Schneider Electric and Siemens, and broadened model choices for sovereign AI and NIM Microservices.

Those high-level headlines from Nvidia GTC Paris were part of a broader stream of updates for the AI market in Europe, where Nvidia has more than 1.5 million developers. Nvidia also announced that European enterprises including Novo Nordisk, Siemens, Shell, BT Group, SAP, Nestle, L'Oreal and BNP Paribas are adopting agentic AI. The company also touted adoption of its Nvidia Drive autonomous vehicle platform at Volvo, Mercedes-Benz and Jaguar as well as quantum computing efforts in the region.

Nvidia CEO Jensen Huang said during the GTC Paris keynote:

"Europe has now awakened to the importance of these AI factories, and the importance of the AI infrastructure.  I'm so delighted to see so much activity here. This is just the beginning."

Dion Harris, Senior Director of HPC and AI Factory Solutions at Nvidia, said:

"We're deeply integrated with upskilling and education, working with all of the top higher education and research institutions and the global systems integrators. Europe is poised to be a powerhouse in this new industrial revolution. The only thing missing is infrastructure. Today, every nation needs to build AI infrastructure, and every company needs to build an AI factory."

Here's a breakdown of what Nvidia announced:

  • Schneider Electric and Nvidia expanded a partnership designed to accelerate the deployment of AI factories. The two companies will collaborate on reference designs, simulation, design and layout, infrastructure and architecture for AI factories. The two companies will also look to scale production of cooling systems in Europe and 800 volt direct current architectures.
  • A roster of European supercomputing centers and cloud service providers building Nvidia-based AI infrastructure.
  • European Nvidia AI Technology Centers in Finland, Sweden, Germany, UK, France, Italy and Spain. Nvidia is working with Italy to advance its sovereign AI efforts.
  • DGX Cloud Lepton integration with Hugging Face Training Cluster as a Service. Nvidia also said it is working with EMEA model builders and offering sovereign AI models via Perplexity Pro. According to Nvidia, each country in EMEA needs strong models that reflect the nation's unique language and culture and operate in region.
  • Nvidia is working with Mistral AI to build a cloud platform powered by 18,000 Grace Blackwell systems.
  • As part of that expanded model selection, Nvidia said NIM Microservices will have access to more than 100,000 open and custom models via Hugging Face public and private LLMs.
  • NeMo AI agent additions including AI Safety Blueprint, Data Flywheel and Agentic AI Toolkit. Nvidia added that NIM and NeMo will be integrated into SAP Business AI.
  • Siemens and Nvidia will expand their partnership to accelerate AI capabilities in manufacturing with a focus on product design and engineering, production optimization, operational planning, digital twins and industrial edge computing. Nvidia's various libraries for CUDA X, RTX and Omniverse will be integrated into Siemens product portfolio.

  • Nvidia also announced how its GB200 NVL72 system is powering quantum computing workloads and simulations with European enterprises and research hubs.

OpenAI: New models, and chasing Altman’s superintelligence dream

OpenAI released o3-pro for ChatGPT Pro and Team users, calling it its most capable model yet, as it cut prices for o3 by 80%. The moves come as OpenAI CEO Sam Altman ponders a 2030 in which the limitations of energy fade and AI superintelligence becomes almost free.

That's a mouthful, but Altman and OpenAI are arguing that we're at an event horizon, an inflection point and probably a lot of revenue growth. For enterprises, the takeaway is that OpenAI expenses may decline in exchange for volume.

Let's recap the headlines:

  • OpenAI dropped o3 pricing by 80%. For developers, o3 may not be the latest and greatest, but it'll be good enough for many use cases.
  • OpenAI is scaling as it breaks away from its Microsoft partnership. Reuters reported that OpenAI is going to use Google Cloud for compute. That move would give OpenAI a multi-cloud approach that should meet its needs better than an exclusive with Microsoft Azure.
  • OpenAI launched o3-pro. The company said: "In expert evaluations, reviewers consistently prefer o3-pro over o3 in every tested category and especially in key domains like science, education, programming, business, and writing help. Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy."

That barrage of headlines, however, is overshadowed by Altman's blog post, which laid out his latest thoughts on AI superintelligence, energy consumption and how many resources a ChatGPT query consumes today.

The post is worth a read. Here are a few takeaways.

Energy will be plentiful. Altman: "In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else."

But that ChatGPT usage isn't killing the environment today. Altman: "As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.)"
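
Altman's figures are easy to sanity-check. The oven wattage below is an assumption (roughly 1 kW) made to match his "a little over one second" comparison:

```python
# Sanity-checking the per-query figures from Altman's post.
ENERGY_WH = 0.34       # watt-hours per average ChatGPT query (per Altman)
WATER_GAL = 0.000085   # gallons of water per query (per Altman)

joules = ENERGY_WH * 3600          # 0.34 Wh is about 1,224 joules
oven_seconds = joules / 1000       # at an assumed 1,000 W oven: ~1.2 seconds
teaspoons = WATER_GAL * 768        # 768 US teaspoons per US gallon
print(oven_seconds, teaspoons)     # ~1.22 s and ~0.065 tsp (about 1/15 teaspoon)
```

Both numbers check out against the quote: just over a second of oven time and roughly a fifteenth of a teaspoon of water.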

The self-reinforcing loops have already started and what was novel months ago is now routine. Altman: "The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off."

Humans will adapt: Altman: "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big."

Altman does note the challenges. He said society will have to solve "the alignment problem" where we can guarantee "AI systems learn and act towards what we collectively want." Altman said society will also have to make sure superintelligence is cheap and not concentrated with any one person. The world needs to start a conversation about what the boundaries are and get aligned.

My take

  1. Altman's take that cost and scale will be solved is believable, but we can debate the timeline for sure. Can energy grids be revamped in 5 years?
  2. The concept that society is going to have a reasonable discussion about superintelligence and get alignment on what we collectively want from AI is naive if not batshit crazy. Governments on a global basis barely function now, and there's a shortage of consensus.
  3. Societal impacts are glossed over throughout the post. Altman's take that humans will adapt may apply to a sliver of the population.
  4. This quote made me chuckle: "In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes."
  5. This quote struck me as blasé: "We will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example). Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other. People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don’t care very much about machines."
  6. Either way, Altman's right that potentially wonderful and wrenching change is coming. Both can be true.
Data to Decisions Future of Work Innovation & Product-led Growth Chief Information Officer

CR CX Convos: Live from PegaWorld 2025 with Matt Nolan


Don't miss the latest CR CX Convo covering the future of marketing decisioning...

Tuning in LIVE from #PegaWorld2025, Constellation analyst Liz Miller and Pegasystems' Matthew Nolan discuss the evolution of #marketing beyond traditional campaigns. 

Key takeaways:

📌 Marketing is shifting from sales-driven to customer-outcome focused
📌 AI and decisioning are transforming how brands engage customers
📌 The goal: Create relevant, personalized experiences that truly matter

Marketers aren't just sending campaigns anymore; they're becoming strategic growth architects who leverage data and AI to drive meaningful connections.

Watch the full conversation!

Video: https://www.youtube.com/watch?v=U6i2Z73n1sI

Marketing Transformation Future of Work Innovation & Product-led Growth Chief People Officer Chief Marketing Officer Chief Digital Officer On CR Conversations

Cisco tunes network portfolio, gear for AI agents, introduces AgenticOps


Cisco outlined a series of AI infrastructure, security and software products designed to support AI agents, hyperscale data centers and enterprise workloads.

The upshot from Cisco Live in San Diego is that the networking giant is reordering its stack for AI workloads.

Cisco outlined AgenticOps. The company said AgenticOps is its AI-driven approach to running operations, combining telemetry, automation and domain knowledge. Cisco AgenticOps is powered by Deep Network Model, a network-focused LLM, and Cisco AI Assistant, which identifies issues and root causes and automates workflows.

The company also launched AI Canvas, an interface for customer dashboards that enables network, security and development operations teams to collaborate and optimize.


Jeetu Patel, President and Chief Product Officer, Cisco, said: "As billions of AI agents begin working on our behalf, the demand for high-bandwidth, low latency and power efficient networking for data centers will soar."

Here's a look at what Cisco announced at Cisco Live:

  • Unified management of Cisco platforms including ACI, NX-OS and other systems with dashboards, policies and controls. Cisco launched the Unified Nexus Dashboard, which consolidates these services into a single view.
  • Cisco Intelligent Packet Flow, which steers traffic using real-time telemetry and congestion data across AI networks. The service has visibility across networks, GPUs and distributed AI jobs.
  • Cisco and Nvidia are unifying architectures and outlined their first technical integration of Cisco G200-based switches and Nvidia NICs. The companies also demonstrated Nvidia Spectrum-X Ethernet networking based on Cisco Silicon One.
  • The company expanded AI PODs to support Nvidia's release cadence. Nvidia RTX Pro 6000 Blackwell Server Edition GPU is available to order with Cisco's UCS C845A M8 servers. Cisco and Nvidia will work together on validated systems for the Cisco Secure AI Factory with Nvidia.
  • Cisco AI Defense and Cisco Hypershield are now included in the Nvidia Enterprise AI Factory validated design.
  • Cisco AI Defense can secure AI agents built with open models and is optimized with Nvidia NIM and NeMo microservices.
  • The company has embedded its security offerings into its networking gear. Cisco has embedded zero trust and observability into the network, added a new generation of firewalls (6100 Series, 200 Series) and tightened integration with its Splunk unit.
  • Cisco is bringing together Meraki and Catalyst into one unified management platform for next-gen wireless, switching, routing and industrial networks across all platforms.
  • ThousandEyes and Splunk are now integrated for network to application visibility.
  • Cisco launched new Cisco C9350 and C9610 Smart Switches for campus networks and 8100, 8200, 8300, 8400 and 8500 Secure Routers, which integrate with Cisco's security portfolio.
  • The company launched Cisco Wireless 9179F Series Access Points for campus networks.
  • Cisco rolled out a series of rugged switches for industrial AI use cases.

 

Data to Decisions Digital Safety, Privacy & Cybersecurity Future of Work Tech Optimization cisco systems Chief Information Officer