Lowe's eyes AI agents as home improvement companion for customers

Lowe's is betting that AI agents powered by Google Cloud can become do-it-yourself home improvement companions and offer personalized customer experiences.

Neelima Sharma, SVP Omnichannel and Ecommerce Technology at Lowe's, will be a speaker at Google Cloud Next 2025. Sharma said tools like Google Cloud Agentspace can be the next evolution of customer experience.

"We continue to look at ways where we can build a stronger relationship with a customer," said Sharma. "If an agent is going to be part of that it will be taking machine learning, generative AI, and autonomy to become the home improvement companion for our customers."

Lowe's bet on Google Cloud to be a big part of its digital transformation, which has been chronicled at Constellation Insights. In 2020, Lowe's and Google Cloud expanded their partnership to focus on commerce, merchandising, supply chain and pricing as well as customer experience. The idea was to create a "channel-less" customer experience. See: Lowe's betting on AI to drive customer experience, optimize multiple processes (PDF) | Lowe's bets on AI, technology to navigate slowing demand

Since that expansion, Google Cloud has enabled Lowe's engineering platform as well as its Total Home Strategy.

Google Cloud envisions AI agents as a way to meld what are disparate functions inside a retailer. Agents will carry out specific tasks like marketing campaign creation and research, but the promise of agentic AI is working across functions.

The retail challenge

Carrie Tharp, vice president of solutions and industries at Google Cloud, said the retail industry is frequently under pressure due to shifting consumer expectations, market share shifts and uncontrollable developments like tariffs.

Given the pace of change in the industry it will be critical to have merchandising, advertising, supply chain and other functions connected by agents, said Tharp.

In addition, the customer journey in retail is getting longer. "AI has become a critical engagement point," said Tharp. "Five years ago, I would have told you the average number of touchpoints in the consumer journey was six. We now see up to 10. Even worse, we see consumers delaying decisions when they have too much information."

Sharma said Lowe's is looking to use AI, search and multimodal visualization capabilities to provide good experiences for the retailer's associates and customers.

"We started a journey with giving the best experience for our customers. We very quickly followed up with associates who are selling to our customers, and we modernized that. And now we've been focused on modernizing our corporate systems," explained Sharma. "That framework has been extended to AI as well. We have taken a very orchestrated approach towards AI to personalize our customer experience, help our associates while they sell and bring the power of all the data sources together in a conversational style."

The goal: Home improvement partner

Sharma said if Lowe's is successful with AI, it will be able to be "our customers' home improvement partner" and help them find exactly what products and services they need for complex projects.

To be that home improvement partner, Lowe's needs to leverage AI as well as its data and machine learning foundation via Google Cloud.

Lowe's has laid the foundation to set up for agentic AI and enhanced experiences. Sharma said the company will be closely watching technology developments to see how it can advance its cause.

"We have more than 50 models in production today and deep learning capabilities including search models, recommendation models, sourcing, demand planning, performance pricing, promo and so on so forth. With generative AI, we are actually bringing more generated content on top of all these models," said Sharma.


Healthcare leaders eye agentic AI as next frontier for clinicians, patients

AI agents are likely to be adopted in healthcare as a way to provide patient guidance in any modality and carry out tasks like transportation, appointment scheduling and follow-ups.

Those were a few takeaways from two healthcare leaders during an industry panel at Google Cloud Next. Richard Clarke, Chief Data and Analytics Officer at Highmark Health, and Sameer Sethi, Chief AI Officer at Hackensack Meridian Health, outlined what their organizations have done so far with generative AI and the groundwork in place for agentic AI.

"We're very excited for AI agents directly interacting with our members and patients with guidance that is always on in whatever modality they wish," said Clarke.

Sethi was also upbeat about the promise of agentic AI. "Imagine a patient calling to schedule an orthopedic appointment and also needing a wheelchair, a ride, a pharmacy visit—seven different actions handled by one AI agent," said Sethi.

AI in healthcare is a hot topic given that the industry is facing pressure on multiple fronts. Aashima Gupta, Global Director of Healthcare Strategy Solutions at Google Cloud, said the industry is using generative AI to alleviate administrative burdens such as paperwork and searching medical records.

Gupta said that AI in healthcare is also transitioning from simple chatbots to single-purpose agents and ultimately multi-agent systems spanning multiple departments. The overarching goals are to reduce burnout in the field and improve patient care.

"GenAI has really evolved from a buzzword to a business essential," said Gupta. "We're seeing a paradigm shift in how we interact with healthcare. Agents represent a strategic opportunity to reimagine care and journeys, underscoring that conversation is becoming the new interface. Our customers are able to reimagine patient care and how it could be personalized for everyone."

With that backdrop, here's a look at some of the takeaways from Highmark Health and Hackensack Meridian Health.

Highmark Health

Adoption across the enterprise. Clarke said Highmark Health already has more than 14,000 of its 40,000 employees regularly using internal genAI tools built on Vertex AI and Gemini. Highmark Health inked a 6-year strategic partnership with Google Cloud in 2020 and the two companies have worked together on multiple projects.

Ambient intelligence. Clarke said ambient listening by AI during patient visits is an underrated breakthrough because clinicians can focus on the person, not the notetaking. "Everything in the ambient listening space has been a true gift to bring joy back to practice for many of our clinicians," said Clarke. This point about ambient listening has surfaced before with healthcare leaders.

Multi-modality matters. Clarke said the ability of models to deliver in multiple ways is critical. "We were stuck on a particular use case and when Gemini 2.0 came out, it kind of made us get over the hump," said Clarke.

The vision for agentic AI. Clarke said the promise of agentic AI rhymes with his ambient intelligence points. The big idea is that AI agents can interact with members and patients to provide guidance in multiple formats on demand.

Cost concerns so far. "There was some concern that our cloud costs would be challenged, but we just haven’t seen that," said Clarke.

Guardrails. Clarke said Highmark Health puts AI projects into shadow mode followed by assisted audit before going into production with automation. "We need logging, monitoring, and governance," he said.
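Clarke's staged rollout can be sketched in a few lines of code. The following is a purely illustrative toy, not Highmark Health's actual pipeline; the stage names mirror his description, but the promotion rule and accuracy threshold are hypothetical:

```python
# Illustrative sketch of a shadow -> assisted audit -> production rollout
# gate like the one Clarke describes. Stages and threshold are hypothetical;
# in practice the score would come from logging and monitoring data.

STAGES = ["shadow", "assisted_audit", "production"]

def next_stage(current, audited_accuracy, threshold=0.95):
    """Promote a project one stage only if its audited accuracy clears the bar."""
    i = STAGES.index(current)
    if i == len(STAGES) - 1:           # already in production
        return current
    if audited_accuracy >= threshold:  # evidence gathered during this stage
        return STAGES[i + 1]
    return current                     # stay put and keep collecting evidence

print(next_stage("shadow", 0.97))          # promoted to assisted_audit
print(next_stage("assisted_audit", 0.90))  # held back, stays in assisted_audit
```

The point of the pattern is that automation is earned stage by stage: a project never jumps straight from shadow mode to production.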

Hackensack Meridian Health

Sethi said his organization's genAI strategy aims at delivering personalized experiences, addressing efficiency, reducing burnout and offering disease prediction and precision treatments.

Hackensack Meridian Health started on BigQuery and Looker with Google Cloud before moving to Vertex AI and Gemini.

He added that there are a few use cases for AI agents that are appealing. "We want to make sure we were focusing on the patient and our workforce," said Sethi, who noted the following use cases:

  • A nurse agent. "The nurse can sift through large amounts of data and instead of going through binders or PDFs, the agent provides that insight directly," said Sethi.
  • Patient agent. This agent would be able to string together multiple tasks that are required during discharge processes or care that usually involve multiple people.

That use case for agents isn't ready just yet, but Sethi noted that the innovation is happening at a fast pace. Sethi said these agents would require guardrails and testing. "We built a whole test suite," he said. "Every time we receive feedback, we add that to our test library and test against it in future releases."
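The feedback-driven test library Sethi describes is a simple but powerful pattern. Here's a minimal sketch under stated assumptions: the agent, prompts, and expected substrings below are all hypothetical stand-ins, not Hackensack Meridian Health's actual suite:

```python
# Minimal sketch of a feedback-driven regression suite for agent outputs.
# Every piece of user feedback becomes a permanent test case, re-run on
# each release. Agent and cases are invented for illustration.

test_library = [
    # (prompt, substring that must appear in the agent's answer)
    ("schedule ortho appointment", "appointment"),
]

def agent(prompt):
    """Stand-in for a real healthcare agent; returns a canned reply."""
    return f"I can help you book an appointment: {prompt}"

def run_suite():
    """Return the (prompt, expected) pairs the agent currently fails."""
    return [(p, want) for p, want in test_library if want not in agent(p)]

# New feedback arrives: add it to the library so it is tested forever after.
test_library.append(("arrange wheelchair at discharge", "wheelchair"))
print("failures:", run_suite())
```

The library only ever grows, so a fix that regresses an old behavior is caught on the next run.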

He added that implementing AI requires a heavy dose of process optimization and change management. "The biggest barrier in technology enabling is actually humans," said Sethi, adding that human-in-the-loop is critical to determining how processes can be automated. "Figuring out what should be agentic is the hardest part," he said. "It’s easy to create an agent but identifying the elements where a human has to come in—that's what takes time."



Google Cloud Next 2025: Agentic AI, Ironwood, customers and everything you need to know

Google Cloud announced a bevy of capabilities and services designed to enable multi-agent systems for its customer base, which is increasingly integrating the company’s models, data platform and agent development tools into their stacks.

Google Cloud Next has kicked off with more than 32,000 attendees and more than 500 customer stories. With that backdrop here's a look at the key themes from Google Cloud Next in Las Vegas.

Customers and maturity

Google Cloud CEO Thomas Kurian's keynote included Verizon, Honeywell, Intuit, Mattel, Reddit, Sphere Entertainment and McDonald's, to name a few. Google Cloud is leaning into its impressive customer roster, targeting industries and adding expertise and services to its go-to-market engine. The company has leveraged its AI know-how into a seat at the cloud vendor table and is now often the AI layer in multi-cloud environments.

Kurian said enterprise customers have integrated AI tools into their products and services and are now demonstrating business value. "We have more than 500 customers speaking at Cloud Next, and we're thrilled that these customers will share the real value that they're seeing from the use of our technology. They span a broad range," said Kurian.

The customers cited by Kurian span a wide range of industries ranging from retail to healthcare to financial services. Google Cloud has also revamped its go-to-market approach to focus on industries, domain knowledge and use cases that drive returns. "We found the most important problems that customers care about and we have chosen to focus early on addressing them," said Kurian. "We've been fortunate that we made a bet early on with AI. The AI landscape has matured and we can offer many different pieces."

The showcase customer at Google Cloud Next was Sphere, which, along with Google DeepMind, Google Cloud, Magnopus and Warner Bros. Discovery, reinvented "The Wizard of Oz" with researchers, programmers, visual effects artists, AI engineers and archivists.

The "Wizard of Oz at the Sphere" effort revolved around using AI to adapt the 1939 movie to the Sphere and its sensory experience ahead of an Aug. 28 debut. The companies developed a "super resolution" tool to turn celluloid frames from 1939 into high-definition imagery using Google Cloud models, plus AI outpainting models to expand scenes to the scope of the venue's screen.


Kurian and Sphere CEO Jim Dolan said the goal was to honor the integrity of the original film while extending it to a new format. At a private screening event and keynote, Kurian described it as "almost like you were told to do AI and your first project was your PhD thesis, not your undergraduate."

Dolan added that it was the first time "I didn't feel like a customer. I felt like a partner. That's why this worked."

I saw the output from the collaboration and it was striking how much AI didn’t screw up the original.


Google Cloud's AI hypercomputer, custom silicon and integrated stack

Google Cloud announced Ironwood, its 7th-generation TPU, designed for faster model training and more efficient inference. Ironwood is the headliner with 4x the peak compute and 6x the high-bandwidth memory, but Google Cloud also launched advances in storage, inference and AI at the edge. Gemini will run on Google Distributed Cloud infrastructure.

Ironwood is focused on powering responsive AI models. It can scale up to 9,216 liquid-cooled chips linked through Inter-Chip Interconnect (ICI) networking, and developers can use Google's Pathways software stack to combine tens of thousands of Ironwood TPUs.

Key items about Ironwood include:

  • For Google Cloud customers, Ironwood comes in two sizes for AI workloads--a 256-chip configuration and a 9,216-chip configuration.
  • When scaled to 9,216 chips per pod at 42.5 exaflops, Ironwood supports more than 24x the compute power of the El Capitan supercomputer.
  • Ironwood has 192 GB of High Bandwidth Memory (HBM) capacity per chip.
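A quick back-of-the-envelope check shows what those figures imply per chip and per pod, assuming the 42.5-exaflop number describes the full 9,216-chip pod:

```python
# Back-of-the-envelope math on the Ironwood figures quoted above.
# Assumption: 42.5 exaflops refers to the full 9,216-chip pod.

CHIPS_PER_POD = 9_216
POD_EXAFLOPS = 42.5
HBM_PER_CHIP_GB = 192

# Per-chip peak compute: 42.5e18 FLOPS / 9,216 chips, about 4.6 petaflops.
per_chip_petaflops = POD_EXAFLOPS * 1e18 / CHIPS_PER_POD / 1e15
print(f"per-chip peak: {per_chip_petaflops:.2f} PFLOPS")

# Aggregate HBM across a full pod: 9,216 chips x 192 GB, about 1.77 PB.
pod_hbm_pb = CHIPS_PER_POD * HBM_PER_CHIP_GB / 1e6
print(f"pod HBM: {pod_hbm_pb:.2f} PB")
```

In other words, the stated pod numbers work out to roughly 4.6 petaflops and 192 GB of HBM per chip, and close to 1.8 PB of HBM per fully scaled pod.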


For Google Cloud, integration across its hardware and software stack is critical. Kurian said the stack is meant to address training as well as inference. In addition to Ironwood, Google Cloud announced:

  • Hyperdisk Exapools, the next generation of block storage with up to exabytes of capacity and terabytes per second of performance per AI cluster.
  • Rapid Storage, a new Cloud Storage zonal bucket that improves latency and features 20x faster data access and 6TB/s throughput.
  • Cloud Storage Anywhere Cache, which reduces latency up to 70% with 2.5TB/s throughput. Anthropic is a big user.
  • A fully managed zonal parallel file system called Google Cloud Managed Lustre and Google Cloud Storage Intelligence for insights specific to a customer's environment.
  • A new GKE AI Inference Gateway to load balance inference requests.
  • Cloud WAN, which provides up to 40% lower latency and up to 40% cost savings. Cloud WAN ensures traffic targeting Google's WAN enters and exits Google's high-performance network at the geographically closest point of presence.
  • Gemini on Google Distributed Cloud, which will run Gemini models on your on-premises infrastructure. Agentspace and Vertex AI will be available on Google Distributed Cloud.
  • Google Cloud also launched optimized software for AI training and inference including Cluster Director, Pathways, Google's internal machine learning runtime, and vLLM for TPUs, an efficient library for inference that can be used with TPUs across Compute Engine, GKE, Vertex AI and Dataflow.

Here's the overview of what's new on the infrastructure front.

Agent development tools and models

With Google Cloud Next, the company is making it clear that it's a notable cog in multi-agent systems and development tools. The company launched an open-source Agent Development Kit, an Agent Engine that supports MCP, connectors to many players in the enterprise stack, and Agent2Agent, a standard that lets agents communicate with each other.

More details include:

  • The open-source Agent Development Kit and Agent Engine are designed as a combo to enable developers to build multi-agent applications. The kit will launch in Python and support MCP, with other languages being added in the future.
  • More than 100 connectors to access enterprise data and use cases. These pre-built connectors are designed for systems like Salesforce, ServiceNow, Jira, SAP, UiPath, Oracle and others.
  • Agent2Agent enables communication across agent frameworks so agents can securely collaborate, manage tasks, negotiate UX and discover capabilities.

Of those items, Agent2Agent (A2A) is the headliner.

Combined with MCP, Google Cloud's A2A effort highlights how agent interoperability standards have gone from nothing to something viable in just 5 months. Google Cloud said A2A has the support of more than 50 technology partners including Atlassian, Box, Intuit, LangChain, MongoDB, Salesforce, SAP, ServiceNow, UKG and Workday. Systems integrators across the board are supporting A2A.

Google Cloud emphasized that A2A complements MCP because it focuses on AI agent collaboration regardless of their underlying technology. 
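The division of labor is easier to see with a toy example. The sketch below is purely illustrative; the classes and functions are invented for this article and are not the real A2A wire protocol or MCP SDK. It shows the pattern A2A standardizes: one agent discovers another's advertised capabilities and delegates a task to it, regardless of what framework either agent is built on:

```python
# Hypothetical illustration of agent-to-agent delegation. Names and
# methods are invented for this sketch; they are not the actual A2A
# or MCP APIs.

class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        # A2A-style capability discovery: each agent advertises what it can do.
        self.capabilities = capabilities  # maps task type -> handler

    def can_handle(self, task_type):
        return task_type in self.capabilities

    def handle(self, task_type, payload):
        return self.capabilities[task_type](payload)

def delegate(task_type, payload, registry):
    """Route a task to the first registered agent that advertises support."""
    for agent in registry:
        if agent.can_handle(task_type):
            return agent.name, agent.handle(task_type, payload)
    raise LookupError(f"no agent handles {task_type!r}")

# Two single-purpose agents collaborating through a shared registry.
pricing = Agent("pricing-agent", {"quote": lambda p: f"quote for {p}: $42"})
research = Agent("research-agent", {"summarize": lambda p: f"summary of {p}"})
registry = [pricing, research]

who, result = delegate("quote", "10x lumber", registry)
print(who, "->", result)
```

In this framing, MCP is how a single agent reaches its tools and data, while A2A is how agents find and hand work to each other.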

Not surprisingly, Google Cloud rolled out a series of models that'll be on Vertex AI. Gemini 2.5 Flash and Pro are designed for complex reasoning and the company added a series of next-gen offerings such as Veo 2.0, Lyria and Chirp 3.

But the most interesting item on the model front was Model Optimizer, an experimental service that applies the best AI model for a given use case and requirements using Vertex AI's evaluation service. Model Optimizer will allow customers to tailor which models are used to meet a variety of business objectives.
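The core idea behind such a router can be sketched in a few lines. This is a toy illustration of cost/quality-based model selection, not Vertex AI's actual Model Optimizer logic; the model names, quality scores and costs are made up:

```python
# Toy model router: pick the cheapest model that meets a quality bar.
# Models, scores and costs are invented for illustration.

MODELS = [
    # (name, quality score 0-1, relative cost per 1K tokens)
    ("small-flash", 0.72, 1),
    ("mid-pro",     0.88, 4),
    ("large-ultra", 0.95, 20),
]

def pick_model(min_quality):
    """Return the lowest-cost model whose quality meets the requirement."""
    eligible = [m for m in MODELS if m[1] >= min_quality]
    if not eligible:
        raise ValueError("no model meets the quality bar")
    return min(eligible, key=lambda m: m[2])[0]

print(pick_model(0.70))  # simple request -> small-flash
print(pick_model(0.90))  # complex reasoning -> large-ultra
```

The business objective shows up as the quality threshold: lowering it trades accuracy for cost, which is the kind of tailoring the service promises.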

Here’s the rundown:

  • Gemini 2.5 Flash & Pro will be available on Vertex AI. Google Cloud also said that Veo 2.0, Lyria and Chirp 3 will bring new audio generation and understanding capabilities to the platform.
  • Open Source Agent Development Kit, a framework for building multi-agent systems.
  • Agent Engine, which enables customers to deploy agents from any framework to a fully managed runtime.

Agents everywhere

Agentspace is being built out. Google Cloud is using Agentspace to enable companies to build agents and use prebuilt ones too. Agents are being added to Google Cloud's Customer Engagement Suite, Data Cloud for engineering, data science and analytics tools, Security Suite and Workspace.

Google Cloud is ensuring that enterprises have multiple touchpoints to access and procure agents via Agentspace and its broader platform. The company launched:

  • Agentspace Agent Gallery, a curated set of agents from Google, customers and partners.
  • Customer Engagement Suite updates that use Google AI to build agents with multimodal, human-like voices.
  • Use-case-specific agents such as a Food Ordering AI Agent designed for quick-serve restaurants that operate in multiple languages. Google Cloud also announced Automotive AI Agent, which gives automakers the ability to create and deliver custom in-vehicle assistants.
  • Customer Engagement Suite gets conversational agents in a new unified console as well as AI Coach and AI Trainer to help humans upsell and prebuilt agents for flight booking, ticketing, appointments and shopping assistance.
  • Customer Engagement Suite is getting the ability to use Google's latest voice technologies so agents can sound like a human.
  • Google also outlined an Idea Generation Agent, which runs a tournament of ideas for evaluation, as well as a Deep Research agent.
  • NotebookLM integration with Agentspace for natural language search results.
  • Developers will get Cloud Hub, which provides top insights across applications and infrastructure, agentic AI application capabilities in Firebase Studio, Code Assist Agents and new tools in Cloud Assist.
  • Data Cloud gets Data Agents tailored to users, Looker and AlloyDB integration with Agentspace, and LLM integration directly into BigQuery and AlloyDB.
  • On the security front, Google Security Operations will get an Alert triage agent to investigate alerts and respond. Google Cloud is also adding a malware analysis agent in Google Threat Intelligence.
  • Workspace will feature a series of agents and Gemini tools throughout the platform.
  • Google Cloud announced Google Workspace Flows to automate work with the help of AI agents.

Here’s a look at how Google Cloud’s agent stack comes together.


Google Cloud, UWM partner as mortgage battle revolves around automation, data, AI

United Wholesale Mortgage (UWM) said it will use Google Cloud for AI and analytics in an effort to automate more of the mortgage process.

The partnership, announced ahead of Google Cloud Next, highlights how the mortgage industry is looking to transform with first-party data and automation and reload for an eventual rebound in the housing market. The Google Cloud partnership with UWM also sets up an interesting technology battle given that Rocket, built on AWS, has acquired Redfin as well as Mr. Cooper in recent weeks. Rocket's goal is to acquire first-party data that fuels its models and cross-selling opportunities.

For Google Cloud, the UWM win gives it another big financial services customer.

UWM has a technology platform that includes ChatUWM, a personal guide that walks borrowers through loan documentation and processes, and Trac+, which manages title review, closing and disbursement during the mortgage process.

Key points about the Google Cloud-UWM partnership:

  • UWM will integrate Google Cloud AI and machine learning in its lending platform that's used for underwriting automation, document processing and customer support.
  • UWM is using Google Cloud's Gemini 1.5 Flash to enhance underwriting automation.
  • The companies will explore Google Cloud infrastructure to scale capacity and enhance security.
  • Google Cloud and UWM said the two companies will leverage AI to personalize loan recommendations and identify the right products for a borrower.
  • The two companies will announce more technology integrations and products in the months to come.

The big picture

Rocket's acquisition spree (Redfin and Mr. Cooper) is notable because it's focused on buying the first party data that can be used in its models. However, there's an additional thread to consider here. Rocket covered why the acquisitions made sense based on 30 petabytes of data, but there are also good business reasons to make the purchases beyond training AI models. Rocket's acquisitions are based on industry consolidation and feeding the sales funnel too.


The UWM-Google Cloud partnership has a similar ring to it and highlights how the battle between the No. 1 mortgage lender (UWM) and No. 2 (Rocket) is moving to AI and the cloud.

Here's what UWM CEO Mat Ishbia said on the company's fourth quarter earnings call:

"We continued to invest in cutting edge technology, including AI, investing in our people, and we're in the best position to capitalize on any change in the current market dynamics. There are about $2.5 trillion and growing in mortgages with rates over 6%. And so it won't take much of a shift in rates for those loans to be in the money. No matter what happens to the market, we are focused on what we can control and making sure we are more prepared than our competition."

Holger Mueller, analyst at Constellation Research, said:

"The battle over which cloud your mortgage is stored, serviced, analyzed and more on is in full swing. Google Cloud wants and needs to make progress on its larger cloud provider competitors, and that happens with huge workloads. UWM is such a large workload and will become a substantial part of Google Cloud utilization once all is said and done. For UWM, it's a chance to use Google Cloud features and get a leg up on the competition with a new implementation and new offerings. It can also build on a cloud platform that has a 3-4 year lead over its competitors when it comes to what matters today: AI."


IBM launches z17 mainframe, eyes AI workloads

IBM launched its IBM z17 mainframe that includes AI tools to process inference workloads, its Telum II processor and Spyre Accelerator as well as AI agents from its watsonx platform.

While many folks think of mainframes as creaky old systems designed to process transactions without downtime, the z systems are big business for IBM with a strong upgrade cycle. In addition, IBM has reinvented the mainframe a few times, for cloud workloads and now AI.

IBM's bet is that the z17, which will be available June 18, can take on new workloads beyond transaction processing. IBM said the system can score 100% of transactions in real time and process 50% more AI inference operations per day than the z16.

According to IBM, the z17 can handle more than 250 AI use cases including loan risk, managing chatbot services, supporting medical image analysis and curbing retail shrink.

For IBM, the z17 is also a showcase for its integrated stack of R&D, software, AI and hardware.

Components include:

  • The Telum II processor, which has a second-gen on-chip AI accelerator with a 1 millisecond response time.
  • IBM Spyre Accelerator that will be available in the fourth quarter of 2025 via PCIe card. Spyre will bring genAI tools to the mainframe and run assistants leveraging data on the system.
  • Spyre will enable z17 to run a host of IBM Granite models natively.
  • A series of AI agents and assistants from IBM watsonx including watsonx Code Assistant for Z and Assistant for Z. Watsonx Assistant for Z will be integrated into Z Operations Unite for AI-driven incident detection and resolution.
  • Z Operations Unite, available in May, combines logs from IBM Z in OpenTelemetry format to streamline operations and detect anomalies.
  • IBM will also include HashiCorp tools to standardize secrets management.
  • z17 will also include data security tools via Telum to identify and protect data with natural language.

In addition to the AI focus, IBM z17 will feature z/OS 3.2 that will be released in the third quarter. The operating system will support AI capabilities, modern data access methods, NoSQL databases and hybrid cloud data processing.

Constellation Research's take

R "Ray" Wang, CEO of Constellation Research, said:

"While many may have written off the mainframe, there are three reasons this release is significant:

  • Y2Q is closer than we think and the Z is quantum secure.
  • Cost per MIPS/kWh gives the Z very efficient performance.
  • AI is one area where a return to on-premises is a real value prop.

When speaking with existing customers who use the mainframe, most are looking to increase their investment."

Constellation Research analyst Holger Mueller said:

"The typical adage of hardware platforms dying off does not apply to the IBM mainframe. Z has had at least 7 lives and is now back in the AI era. And its core value proposition - bringing data and processing power into one place - is even more attractive in the AI era than in the long-past client-server era. Kudos goes to IBM for innovating and delivering value on System Z for enterprises over decades (APIs, Java, hybrid cloud and more come to mind as highlights of the last 20 years)."


Vint Cerf on AI, critical thinking, the internet's future

Vint Cerf said the introduction of AI is about much more than technology, so much so that it may make sense to have a few sociologists, psychologists and anthropologists help with policy.

Cerf, currently Vice President and Chief Internet Evangelist at Google, contributes to global policy development and is widely known as a father of the internet as co-designer of the TCP/IP protocols and architecture.

In an interview with DisrupTV, Cerf touched on multiple topics including space, AI and future connectivity. "We're entering into a period of abundance of computing and communication capability that will, I think, enable some amazing accomplishments over time," said Cerf. "It's hard to believe that so much has happened over the past 50 years, and there's so much more to go."

Here are the key takeaways from Cerf:

Evolution of internet capacity. "The increasing capacity of the internet to move data" is a significant development, said Cerf. He said there's also a need for higher speeds and optical fiber to accommodate a growing base of internet users.

Expansion beyond Earth. Cerf said "the expansion of the internet to low Earth orbiting satellite systems and off-planet expansion, including the development of an interplanetary internet backbone" is the future of connectivity.

AI's role in technology. Cerf emphasized the importance of AI in "accelerating software development and improving accessibility for people with disabilities." Cerf said AI can enhance user experiences.

Critical thinking in the age of AI. Cerf said "the importance of critical thinking and understanding the provenance of information in the age of AI" is more critical now than it ever was. People need to be able to use critical thinking to understand what's real--and not.

Cerf said:

"When we introduce technology, we should bring with us sociologists and psychologists and anthropologists to help us understand the impact of the technology on the societies that are ingesting it. We didn't really do that with the internet. We really seriously need to do that today, because the technology we're now introducing is artificial intelligence."

Internet responsibility. Cerf said that while the internet has connected like-minded people, it has also enabled harmful activities and filter bubbles. He emphasized "the need for accountability and responsibility" in online interactions.

Internet education. Cerf proposed the idea of an "internet driver's license" to educate people on safe and effective internet use, highlighting the importance of digital literacy.


IBM acquires Hakkoda

IBM said it has acquired data and AI consultancy Hakkoda in a deal that gives it data migration platform services. Hakkoda is a major Snowflake partner.

The purchase gives IBM Consulting more heft in data migrations and transformation. IBM Consulting acknowledges that services to modernize data estates are critical to any future enterprise AI efforts.

Terms of the deal weren't disclosed. IBM has been rounding out its offerings with a series of acquisitions.

IBM said Hakkoda's portfolio includes:

  • Generative AI tools that speed up data modernization projects.
  • A strong customer base in financial services, public sector, healthcare and life sciences.
  • BI modernization tools.
  • Managed services for Snowflake.
  • Award-winning services for Snowflake engagements. Hakkoda is an Elite Snowflake partner as well as an advanced-tier partner of AWS.

IBM said Hakkoda will also expand on IBM Consulting Advantage, a program that uses AI to speed up consulting delivery.


Meta launches Llama 4 suite, ups ante in LLM wars

Meta launches Llama 4 suite, ups ante in LLM wars

Meta launched the Llama 4 family with open weights, with two models available now on AWS, Microsoft Azure and Google Cloud.

With the large language model (LLM) wars well underway, Meta announced Llama 4 on a Saturday so it could have the spotlight for a bit. With LLMs developing at a rapid clip, one advance can be overshadowed by another in a matter of hours.

According to Meta, the first installment of the Llama 4 suite is just the start. The company said more details about its AI vision will be outlined at LlamaCon on April 29.

Here's the Llama 4 family announced so far, which features natively multimodal AI:

Llama 4 Behemoth: A 288B active parameter model with 16 experts and 2 trillion total parameters. Behemoth is in preview, still training, and serves as Meta's most intelligent teacher model for distillation.

Meta said other Llama 4 models in its suite are distilled from Behemoth, which outperforms GPT-4.5, Claude 3.7 Sonnet and Gemini 2.0 Pro on multiple STEM benchmarks.

Llama 4 Maverick: A 17B active parameter model with 128 experts and 400B total parameters. Maverick, which is available now, is natively multimodal with a 1M-token context length.

Meta said Maverick beats OpenAI's GPT-4o and Gemini 2.0 Flash and is comparable to DeepSeek v3 on reasoning and coding.

Llama 4 Scout: A 17B active parameter model, available now, with 16 experts and 109B total parameters. Scout has a 10M-token context length and is optimized for inference.

Meta said Scout is more powerful than all of the previous Llama models and fits in a single Nvidia H100 GPU. Meta said Scout outperforms Gemma 3, Gemini 2.0 Flash-Lite and Mistral 3.1.
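The mixture-of-experts design behind these models explains the gap between "active" and "total" parameter counts: only a subset of expert weights fires per token, so inference cost tracks the active figure. A quick sketch illustrates this, using the parameter counts from Meta's announcement; the active/total ratios are simple arithmetic on those figures, not Meta benchmarks.

```python
# Active vs. total parameters for the announced Llama 4 models.
# In a mixture-of-experts (MoE) model, each token is routed to a
# subset of experts, so per-token compute scales with the active
# parameter count rather than the total.
models = {
    # name: (active params, experts, total params)
    "Behemoth": (288e9, 16, 2e12),
    "Maverick": (17e9, 128, 400e9),
    "Scout": (17e9, 16, 109e9),
}

for name, (active, experts, total) in models.items():
    share = active / total
    print(f"{name}: {experts} experts, {share:.1%} of weights active per token")
```

Maverick is the starkest case: with 128 experts, barely 4% of its 400B weights are active for any given token, which is how it can compete with much denser models at a fraction of the serving cost.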

Meta's Llama 4 launch is notable because enterprises are using it widely via the big three cloud providers. AWS said Llama 4 is available on Amazon Bedrock, Microsoft Azure features it on Azure AI Foundry and Google Cloud has the LLM family on Vertex AI. Companies are taking Llama models, which are available on Hugging Face and Llama.com, and tailoring them to specific use cases.

Meta CEO Mark Zuckerberg has said the company's goal is to make Llama the flagship open source model that competes with, or tops, proprietary rivals. Meta is also deploying Llama 4 throughout its properties, so taking it for a spin requires WhatsApp, Instagram or Facebook.

Here's a look at the Llama 4 architecture followed by benchmarks for Maverick.

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"If someone wonders who is the leader in a tech category, watch the announcements before a major user conference by rivals. In this case, Google Cloud Next is looming, and both Microsoft and Meta made their boldest AI announcements yet. In Meta's case, even previewing its Behemoth model is designed to stake out new ground. The Meta pitch of providing Llama across the three major cloud providers appeals to CxOs as it allows AI automation portability across cloud providers. On the flip side, the gap to Google is clear when it comes to Meta. Llama 4 is 12 months late with the 1M context window and being multimodal with Llama 4 Maverick. That was the news at Google Cloud Next in 2024."


After volatile first quarter, these 10 questions loom over enterprise technology, CxOs

In enterprise technology you don't always have the answers, but that should never stop you from knowing the right questions. With that in mind, let's ponder a few questions now that the first quarter is closed and the second quarter is just ramping up. These themes are likely to reappear in our news and analysis in the months ahead.

Here’s a look at the big questions for the second quarter.

What's the state of enterprise tech spending? The first quarter earnings calls and outlook for the second quarter and 2025 are about to get really interesting. The enterprises reporting results in the second half of March mostly pointed to either a slowdown or a pause in spending.

Geopolitics are a big reason why enterprises can't plan ahead. You can expect a lot of tech vendors talking about elongated selling cycles, more deliberation over deals and spending pauses.

For a glimpse of how CEOs are on the struggle bus with geopolitical planning, check out Restoration Hardware CEO Gary Friedman, who was volleying with analysts about his supply chain right as President Trump was announcing tariffs. "Like we're just really well-positioned right now. I think housing is the headwind. I guess the stock went down based on some of the numbers we reported and then it got killed because of a--oh shit okay--I just looked at the screen. I hadn't looked at it," said Friedman, who was talking as Vietnam was hit with a big tariff. "It got hit when the tariff came out and everybody can see in our 10-K where we're sourcing from. It's not a secret and we're not trying to disguise it by putting everything in an Asia bucket."

There were a lot of Gary Friedmans this week. And those CxOs all buy enterprise technology.

What projects will get funded in a recessionary/stagflation environment? Optimization has been an ongoing theme in enterprise technology whether it's via automation or artificial intelligence. Some enterprises will look to reload in a downturn for the rebound. For instance, Rocket bought Redfin and Mr. Cooper in a move to consolidate the mortgage industry. These acquisitions are notable since they are also about acquiring the first-party data to feed models.

The time is now to position for the next economic cycle. How many enterprises will be able to reload?

Will mergers and acquisitions be justified based on first-party data? Rocket's acquisition spree (Redfin and Mr. Cooper) is notable because it's focused on buying the first-party data that can feed its models. However, there's an additional thread to consider here. Rocket covered why the acquisitions made sense based on 30 petabytes of data, but there are also good business reasons for the purchases beyond training AI models. Rocket's acquisitions are about industry consolidation and feeding the sales funnel too.

Will agentic AI be a wait-and-see affair? We've had the vendor announcements. We've had the slideware talking about orchestration layers. We've even had vendors rolling out agents throughout their platforms just waiting for customers to use them under a consumption model.

However, CxOs in the Constellation Research network are taking a measured approach to AI agents. Yes, these enterprise leaders are bullish on AI agents and the promise. However, CxOs also realize that their AI agents will need to be cross-platform, have communication standards, and include a heavy dose of process knowledge and automation to really work.

Add it up and it sounds like the agentic AI spending curve is going to be more of a second half of 2025 phenomenon. Also see Constellation Research’s report on AI trends for 2025 and beyond.

Will consumption models from SaaS vendors be welcome? Enterprises are starting to see their traditional SaaS contracts (seats and subscriptions) add a consumption layer to account for agents. ServiceNow CEO Bill McDermott calls this hybrid monetization model a Goldilocks scenario. It's quite possible that consumption models will be a headache for IT budget forecasting.

Why can't vendors come up with something better than AI studio when it comes to product names? As vendors tout AI agent orchestration platforms the one common thread is the dreaded "AI Studio." Now it makes sense that these vendors need AI studios to design, manage and orchestrate AI agents, but a new name would be lovely.

Will quantum computing enterprise use cases go mainstream in 2025? The quantum computing announcements just keep coming with vendors claiming various breakthroughs and supremacy over classical computing.

So far, 2025 appears to be the year of quantum computing.

Is the AI infrastructure boom now a bubble? This question doesn't necessarily affect enterprises given that the AI infrastructure boom rides with a few companies you can count on two hands--Nvidia, OpenAI, Meta, Google Cloud, AWS and Microsoft Azure, with funding from the likes of BlackRock and SoftBank.

But the leverage, big plans and commoditization of models and potentially GPUs may point to turbulence ahead.

Do we need to start sketching out humanoid robotic projects? It’s clearly early, but the intersection of humanoid robotics and AI is going to be interesting.

Will mergers and acquisitions pick up? Google Cloud’s $32 billion purchase of Wiz indicates that the company thinks it can gain regulatory approval. Other acquisitions are more in the tuck-in variety. Nevertheless, the dealmaking appears to be picking up. Given that HPE still can’t get its Juniper acquisition across the finish line, I’d say the jury is still out on M&A activity.


Microsoft updates Copilot, Azure AI Foundry

Microsoft added a bevy of new Copilot features to enhance memory, take action on users' behalf and conduct research, and updated Azure AI Foundry with tools to build AI agent systems.

The news, which lands as Microsoft celebrates its 50th birthday, arrives as large language model (LLM) advancements happen almost daily. Here's the rundown of Microsoft's Copilot updates. Also see: Google Gemini vs. OpenAI, DeepSeek vs. Qwen: What we're learning from model wars

  • Copilot now has memory and personalization to retain details from user interactions so it can add suggestions, reminders and personalized answers.
  • Microsoft said Copilot can take action on your behalf via Actions and navigate most sites on the web to book tickets and make reservations.
  • Copilot Vision is available on mobile and Windows.
  • Copilot Deep Research has been added to handle research tasks.
  • Microsoft added Copilot Search to bring Bing directly to Copilot as well as Pages, Podcasts and Shopping.

For Azure AI Foundry, which now has more than 60,000 customers, Microsoft outlined the following updates to position it as an agent factory:

  • AI Red Teaming Agent, which is in public preview, will test AI models to uncover safety vulnerabilities.
  • Agent evaluations, also in public preview, will provide risk and quality assessments for AI agent systems.
  • Semantic Kernel agent framework, which is available, simplifies the code developers need to build and coordinate multiple agents in a system.
