Editor in Chief of Constellation Insights
Constellation Research
Larry Dignan is Editor in Chief of Constellation Insights at Constellation Research, where he leads editorial coverage focused on enterprise technology, digital transformation, and emerging trends shaping the future of business. He oversees research-driven news, analysis, interviews, and event coverage designed to help technology buyers and vendors navigate complex markets with clarity and context. ...
OpenAI launched GPT-5.2 in what appears to be its answer to Google's Gemini 3.0. According to OpenAI, GPT-5.2 is its most advanced model for work and long-running agents.
The company leaned into the productivity case for GPT-5.2. In a blog post, OpenAI said:
"We designed GPT‑5.2 to unlock even more economic value for people; it’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long contexts, using tools, and handling complex, multi-step projects."
OpenAI touted the usual benchmarks for GPT-5.2 improvements, but it's notable that the company is also leaning on its GDPval benchmark. GDPval measures how models perform on knowledge work across 44 occupations.
With the positioning of GPT-5.2, OpenAI is clearly making the return on investment case for its latest foundational model as it competes with Google and Anthropic. Microsoft said it has added GPT-5.2 to Microsoft 365 Copilot, Copilot Studio, Microsoft Foundry and GitHub Copilot.
"GPT‑5.2 Thinking beats or ties top industry professionals on 70.9% of comparisons on GDPval knowledge work tasks, according to expert human judges. These tasks include making presentations, spreadsheets, and other artifacts. GPT‑5.2 Thinking produced outputs for GDPval tasks at >11x the speed and <1% the cost of expert professionals," said OpenAI.
The comparison between the GPT-5.2 and GPT-5.1 models is worth noting.
The upshot here is that OpenAI is pivoting to real-world tasks for judging models. Perhaps OpenAI is tired of ceding corporate use cases to Anthropic's Claude.
As for the rollout, OpenAI said:
"In ChatGPT, we’ll begin rolling out GPT‑5.2 (Instant, Thinking, and Pro) today, starting with paid plans (Plus, Pro, Go, Business, Enterprise). We deploy GPT‑5.2 gradually to keep ChatGPT as smooth and reliable as we can; if you don’t see it at first, please try again later. In ChatGPT, GPT‑5.1 will still be available to paid users for three months under legacy models, after which we will sunset GPT‑5.1."
Sora will have access to more than 200 Disney, Marvel, Pixar and Star Wars characters.
Sora users can create videos that will be available to stream on Disney+.
Disney will use OpenAI models throughout its enterprise and become a major customer. Disney employees will use ChatGPT.
OpenAI APIs will be used to build new products.
Disney will invest $1 billion in OpenAI and have warrants to buy more shares.
On its own, the OpenAI-Disney partnership is standard issue. However, Disney is opening the door for other media companies to license IP and characters to models. After this deal, it's not a stretch to see Google Gemini do something similar. This OpenAI-Disney deal is the equivalent of putting Mickey Mouse on the Apple Watch.
Short term, media giants will license IP to AI players just as they license it to streaming companies like Netflix.
But the real thing to watch is whether media companies use LLMs to leverage their own IP. Media companies have historically been behind on new technology, and AI isn't much different.
Here's what media companies should be doing:
Develop their own models powered by their own data just like enterprises do.
Create new experiences so customers can spin up their own episodes. The Simpsons may be the best training set ever for a model. Why not be able to spin up my own Bart adventure? AI can monetize vast libraries of content.
With an AI-driven approach, there's no reason why media companies couldn't create what essentially is the next streaming market.
Given that backdrop, it's not surprising that Netflix and Paramount Skydance are dueling to buy Warner Bros. Discovery. Rest assured the Ellison family, which controls Paramount Skydance, knows where this game is going. We're at the IP and data gathering phase of this game. The media company with the best first-party data (characters, franchises and audience) can win the AI era.
At a panel at AWS re:Invent 2025, Albert Cheng, VP of AI for Prime Video, said AI is becoming the next streaming moment. Cheng said:
"I feel the same way today about AI as I did when I first started pushing streaming at Disney. It's the start of another transformation. Streaming transformed distribution and I think AI is going to transform the way content is created."
This mashup of AI and media is just starting. The deal between OpenAI and Disney is just the first volley.
Vice President & Principal Analyst
Constellation Research
About Liz Miller:
Liz Miller is Vice President and Principal Analyst at Constellation, focused on the org-wide team sport known as customer experience. While covering CX as an enterprise strategy, Miller spends time zeroing in on the functional demands of Marketing and Service and the evolving role of the Chief Marketing Officer, the rise of the Chief Experience Officer, the evolution of customer engagement, and the rising requirement for a new security posture that accounts for the threat to brand trust in this age of AI. With over 30 years of marketing experience, Miller offers strategic guidance on the leadership, business transformation, and technology requirements to deliver on today’s CX strategies. She has worked with global marketing organizations to transform everything from…...
AI is like a pebble (or boulder) dropped into a calm glassy pool we call experience. Once it hits the surface, AI creates ripples that can shift and change that still calm in weird and wonderful ways. Arguably, the first ripple was AI’s capacity to amplify intelligence and change where and how we could turn conversations-into-data, data-into-intelligence and intelligence-into-decisions. The second ripple was generative AI’s capacity to ingest and generate content from text prompts, delivering everything from bold new images to stunningly accurate summaries.
Now, we prepare for the third ripple: AI’s capacity to deliver voice-first engagements.
What is a voice-first engagement?
A voice-first experience is one that leverages AI to power spoken language as the primary form of engagement.
In a voice-first experience, the customer or employee simply asks a question to launch a bi-directional conversation.
What started as robotic voices and limited responses is now a full conversation between AI and people, complete with human-like voices, inflections and emotions.
The smartest voice-first experiences thread conversations across channels of choice and can connect to fully self-service journeys or to live, human agent engagement.
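The loop described above, an utterance in, a spoken answer back, can be sketched as three stages: speech-to-text, a reasoning step, and text-to-speech. The stage functions below are stand-in stubs with a hypothetical "order status" intent; a real deployment would call managed speech and foundation-model services at each stage rather than these placeholders.

```python
# Minimal sketch of one turn in a voice-first loop. All three stages are
# stubs; the intent ("order status") and replies are illustrative only.
def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-to-text call.
    return "what is my order status"

def reason(utterance: str) -> str:
    # Stand-in for an LLM or dialog engine deciding what to say.
    if "order status" in utterance:
        return "Your order shipped yesterday."
    return "Could you rephrase that?"  # fallback keeps the dialog going

def synthesize(text: str) -> bytes:
    # Stand-in for a text-to-speech call; returns fake "audio".
    return text.encode("utf-8")

def voice_turn(audio: bytes) -> bytes:
    return synthesize(reason(transcribe(audio)))

print(voice_turn(b"...").decode())
```

The point of the sketch is the shape, not the stubs: each stage is a swappable service boundary, which is what lets a "voice-first" experience hand off mid-conversation to a live agent or another channel.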
It is hard to deny that AI-powered voice generation is a hot conversation. Foundation models like Amazon's Nova Sonic make building voice-first experiences far more accessible. Partnerships like the one between ServiceNow and 3CLogic continue to expand, making it seamless to deploy voice and reshape experiences around it.
In announcing a new layer to a long-standing partnership, ServiceNow and 3CLogic intend to connect conversations more directly to relationships by bringing AI Voice Agents into enterprise workflows. In a press release, VP of CRM and Industry Workflows at ServiceNow, Terence Chesire, emphasized that these voice-first agents and experiences could “automate service at scale, improve efficiencies and deliver experiences that feel more human.”
It’s this connection to humanity that will make a voice-first AI experience strategy critical as we head into the next phase of the AI maturity curve that transitions AI agents into proactive experience-empowering AI assistants.
So why voice-first? Because customers expect it. Full stop. Today’s customer service and support landscape is not the only place voice is leading the way. From home (“Hey Alexa, order that new snack everyone is talking about!”) to the car (“Siri, I’m lost and need to charge…get me home!”) to out and about shopping (“Find three other stores where this shirt is available and check if it’s on sale,”) voice has become the interface of choice for a growing list of engagements.
Hands-free self-service where an utterance is the prompt is quickly becoming the standard for streamlining the customer's journey to resolution. What's so different about this moment is the expectation for free-flowing, more human connection. Customers have seen the generative capacity for voice and video and expect that same back and forth, not the cold monotony of a GPS mispronouncing street names. When they opt into an AI-powered experience, they still expect humanity to shine through without having to change their own patterns of speech and behavior. They just want to talk.
Customers aren’t the only stakeholders with high expectations: experience leaders have their own list of demands. They expect quality of voice with total control and governance. This paves a path well beyond a “bot” or “assistant” rolling a recorded voice message. Instead, these new voice-first assistants come with the power of generative and agentic AI that can listen, question, engage, reason and even show appropriate emotions, from humor, sympathy and formality to, most of all, empathy. This new voice adopts the tone, tenor, language and lingo of a brand, turning a passive moment into a truly branded engagement.
As organizations forge a path into this voice-first era, there are new questions that must be answered and embraced enterprisewide.
What are the “obvious” moments for voice AI?
Service-led environments are a clear starting point.
The contact center can develop more self-service engagements to quickly deliver branded service through voice AI agents while human agents gain more time and bandwidth to tackle the more complex and more valuable service cases.
Internal employee experiences are also an opportunity: "internal customers" get voice-first service experiences that make it that much easier to ask for an explanation of health benefits and then simply ask to be re-enrolled.
IT service desks can also deploy voice AI agents to address more routine requests, making the mundane feel more personal across the enterprise.
Where can voice push beyond obvious to deliver something more unexpected?
Marketing-led environments are ripe for voice AI agents.
Content transforms when a customer can ask questions of a website, taking the marketing drive for content personalization to a new, far more interactive level.
The opportunity to transform sales and e-commerce motions thanks to voice AI agents is possible when AI models have access to real-time product and pricing data.
Creativity is the only real limitation on where and how voice can be tomorrow's interface. The questions experience leaders can now ask start with: where could we just have a conversation? Can a field service technician just call in and explain the work that has just been completed without ever typing a single word or tying up a human dispatcher's time? Could a customer ask about the latest deals and promotions before their shopping gets underway? Could employees just ask for time off?
But there is another list of questions to ask when deciding which conversations to deploy: Does the technology being embedded have the guardrails and controls to ensure safe and operationally observable conversations? Can you customize and control the voices that are now engaging directly with the people who matter most to the business, namely customers and employees? While teams can try to script empathy, can the voice AI being deployed be trained to be funny?
Thanks to voice AI, organizations have an opportunity to speak up. A brand’s voice can greet a customer in the channels and interfaces of the customer’s choosing. Passive recordings become a thing of the past as voice AI drives new conversations that fill the experience gap without sacrificing policies or cost. So, welcome to the new age of experience where modern AI empowered flows effectively guide a customer from curiosity to cart without lifting a finger. Thanks to voice AI, it’s time to speak up without straining our voices.
Image AI generated using Adobe Firefly Image 4. No real raccoons were asked to wait on hold.
Larry Dignan sat down with this year's AWS Partner Award winners, each offering a unique perspective on how AWS partnerships are transforming cloud and AI, and driving customer outcomes on a global scale.
Here’s what our guests had to say:
Julia Chen (AWS Partner Core) – Shared what makes the AWS partner ecosystem thrive, emphasizing innovation, customer obsession, and the launch of new AI competencies and managed service offerings.
Jennifer Jackson (Accenture, Global Consulting Partner of the Year) – Reflected on Accenture’s 15-year journey with AWS, co-innovating to deliver fraud detection and data marketplace solutions that significantly improve client efficiency and accuracy using Gen AI.
Maureen Little (WRITER, GenAI Innovator of the Year) – Explained how Writer has focused on enterprise AI from day one, building with AWS to deliver secure, flexible platforms that empower creative end users while giving IT full control and robust governance.
Olivier Zieleniecki (MongoDB, Technology Partner of the Year) – Highlighted MongoDB Atlas’s deep integration with AWS, enabling customers to accelerate AI and modernization projects with impressive, real-world business results.
Chris Stewart (CrowdStrike, Marketplace Partner of the Year) – Talked about crossing $1B in AWS Marketplace transactions and how putting customers at the center—plus securing AI and agentic workflows—drives CrowdStrike’s approach and success.
Adobe reported better-than-expected fourth quarter results as the company saw strong adoption of its AI-driven products.
The company reported fourth quarter earnings of $1.86 billion, or $4.45 a share, on revenue of $6.19 billion, up 10% from a year ago. Non-GAAP fourth quarter earnings were $5.50 a share.
Wall Street was looking for Adobe to report non-GAAP earnings of $5.40 a share on revenue of $6.11 billion.
CEO Shantanu Narayen said the company is advancing its generative and agentic AI platforms and targeting double-digit annual recurring revenue growth in the fiscal year ahead.
Adobe’s recent acquisition of Semrush will bolster the digital experience platform, said Anil Chakravarthy, President of Adobe’s Digital Experience unit. “The pending acquisition of Semrush, which we announced a few weeks ago, brings complementary assets to help us address marketers’ growing need for sustained brand relevance in AI search,” he said.
Narayen said the vision for business professionals and consumers is to deliver "new conversational and agentic interfaces in Adobe Reader, Acrobat and Express to provide a freemium integrated experience for billions of users."
The vision for creators is to "deliver the most comprehensive power and precision applications from ideation and creation to production and delivery," said Narayen. He said the goal for marketing pros is to deliver the tools to "create a brand or address the expanding needs of the content supply chain in the era of AI to deliver customer experience orchestration solutions."
For fiscal 2025, Adobe reported earnings of $16.70 a share on revenue of $23.77 billion, up 11% from a year ago.
Adobe saw strong demand in all of its customer groups, according to CFO Dan Durn. Subscription revenue for Adobe was $5.96 billion, up 12% from a year ago. Business professional and consumer subscription revenue was up 15% from a year ago, and creative and marketing professional subscription revenue was up 11%.
As for the outlook, Adobe projected first quarter non-GAAP earnings of $5.85 a share to $5.90 a share on revenue of $6.25 billion to $6.30 billion. For fiscal 2026, Adobe projected non-GAAP earnings of $23.30 to $23.50 per share on revenue of $25.9 billion to $26.1 billion.
Oracle reported a mixed second quarter and said it has sold its Ampere unit because it's not strategic to design and manufacture its own chips. Oracle CTO Larry Ellison said, "we are now committed to a policy of chip neutrality where we work closely with all our CPU and GPU suppliers."
Ellison noted that it will continue to buy the latest GPUs from Nvidia but "we need to be prepared and able to deploy whatever chips our customers want to buy," said Ellison, who added that "there are going to be a lot of changes in AI technology over the next few years and we must remain agile in response to those changes."
The company reported second quarter earnings of $6.13 billion, or $2.10 a share, on revenue of $16.06 billion, up 14% from a year ago. Non-GAAP earnings were $2.26 a share. Wall Street was looking for non-GAAP earnings of $1.64 a share on revenue of $16.19 billion. Oracle's earnings got a $2.7 billion pre-tax boost due to the sale of Ampere.
Oracle's closely watched cloud infrastructure sales were $4.1 billion, up 68% from a year ago. Cloud revenue overall in the second quarter checked in at $8 billion, up 34%. SaaS revenue was $3.9 billion in the quarter, up 11% from a year ago.
The company said that remaining performance obligations were $523 billion, up 438% from a year ago, due to large contracts with OpenAI. How Oracle will finance its data center buildout has been a hot topic for the company.
Oracle CEO Clay Magouyrk said Oracle is halfway through building 72 multi-cloud data centers embedded in AWS, Google Cloud and Microsoft Azure. Co-CEO Mike Sicilia said the company is embedding AI and automation throughout its applications.
On a conference call with analysts, Doug Kehring, Principal Financial Officer, said the fiscal year revenue expectation remains at $67 billion, but the RPO added this quarter will bump up revenue by $4 billion for fiscal 2027. Kehring said fiscal 2026 capital expenditures will be $15 billion higher than forecast last quarter due to contracts that can be monetized quickly.
“While we continue to experience significant and unprecedented demand for our cloud services, we will pursue further business expansion only when it meets our profitability requirements and the capital is available on favorable terms,” said Kehring. He said third quarter total cloud revenue will be up 37% to 41% in constant currency.
According to Kehring, third quarter revenue will be up 16% to 18% with non-GAAP earnings between $1.64 and $1.68 a share in constant currency or $1.70 to $1.74 per share in US dollars.
The key points from the call revolved around Oracle's plans to remain investment grade and to build out only when the infrastructure can be monetized. After all, Oracle spent $35 billion in capital expenditures for the third quarter with $13 billion in negative cash flow.
Magouyrk said it was difficult to put a number on funding needs. “We actually have a lot of different options for how we go about delivering this capacity to customers. There's obviously the way that people think about it, which is we buy all the hardware up front. But we don't actually incur any expenses for these large data centers until they're actually operational.”
Oracle’s capital expenditures can also change based on whether customers bring their own GPUs or decide to rent capacity, said Magouyrk.
Magouyrk said: “We continue to see strong demand for AI infrastructure across training and inferencing. We follow a very rigorous process before accepting customer contracts. This process ensures that we have all the necessary ingredients delivered to customer success at margins that make sense for our business. We continue to carefully evaluate all future infrastructure investments, invested only when we have alignment across all necessary components to ensure profitable delivery for our customers.”
“Over the next month, we see increasing customer demand with billions in identified pipelines,” said Magouyrk.
Sicilia said: “In our healthcare business, we now have 274 customers live in production on our clinical AI agent, and that number continues to rise daily.”
Vice President and Principal Analyst
Constellation Research
Michael Ni is Vice President and Principal Analyst at Constellation Research, covering the evolving Data-to-Decisions landscape—where CDOs, CIOs, and CPOs must modernize data infrastructure, integrate AI into decision-making, and scale automation to improve business outcomes.
Ni’s research examines how enterprises operationalize AI, automate decision-making, and integrate data management and analytics into core business processes. He focuses on the challenges of scaling AI-driven decision systems, aligning data strategy with business goals, and the growing role of data and decisioning “products” in enterprise ecosystems.
With 25+ years as a product and GTM executive across enterprise software, AI platforms, and analytics-driven technologies, Ni brings a practitioner’s perspective to…...
[Adapted from the transcript of Michael Ni's video interview]
As someone who closely tracks emerging trends in data, AI, and analytics, attending AWS re:Invent 2025 felt like a peek into the future of business innovation. This year’s announcements showcased a clear evolution in AWS’s strategy—not just focused on providing the raw infrastructure for innovation, but positioning itself as a powerhouse for enabling AI agents, automated decision-making, cross-cloud collaboration, and advanced analytics workflows.
For executives navigating today’s complex data landscape—whether you’re a CIO, CDAO, or business leader—here’s my breakdown of the most consequential themes from AWS re:Invent 2025, what they signal for the future, and how you might prepare your organization for this shift.
AI Gets Strategic: AWS Is Climbing Up the Stack
“It’s no longer just about storage and compute rentals,” I heard echoed in conversations throughout the event. AWS is reshaping its narrative by moving up the stack, directly targeting decision-making workflows and governance, and introducing tools for running autonomous AI agents. Historically, infrastructure has been a significant part of AWS's keynote strategy. However, this year we saw a surprising shift—key infrastructure announcements were squeezed into the final moments of Matt Garman’s presentation, delivered in rapid-fire fashion (25 announcements in ten minutes).
This pivot toward process automation and higher-level AI work is clearly a response to competitive dynamics. As Google and Microsoft continue to push higher-value AI experiences into the enterprise market—through tools such as Google Cloud’s AI offerings and Microsoft’s Fabric and Copilot expansions—AWS is adapting by streamlining automation agents and governance workflows.
For technology leaders, the implication is clear: AWS isn’t just renting compute and storage anymore; it wants to be the home of your AI-driven decision-making processes. This is a call to evaluate your architecture—specifically, where AI governance and process automation will sit in your stack.
Agents Are the New Runtime
If there’s one overarching theme that stands out, it’s the rise of AI agents as the next big runtime. AWS unveiled its agent-focused model operations platform, pushing “agents” as fully autonomous decision-makers that move beyond copilots. Key innovations here include the introduction of AgentCore and Frontier Agents within Bedrock.
AgentCore builds upon Bedrock’s prior functionality as a model endpoint platform, adding capabilities for observability, identity management, gateways, policy controls, and both short-term and long-term memory. Frontier Agents, meanwhile, are specialized deployments, such as security agents, DevOps agents, and virtual team members, that can autonomously drive workflows over extended periods.
The real challenge lies in decision points around governance and human oversight. Questions such as “Who owns agent behavior? Should agents work within a human-in-the-loop framework or fully autonomously? How do we guard against risks when scaling automation?” will become critical as organizations deploy hundreds—or even thousands—of agents across processes. AWS is promising ROI with up to 5-to-10x productivity shifts, signaling that process automation driven by autonomous agents is worth serious investment heading into next year.
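One way to make the "human-in-the-loop versus fully autonomous" question concrete is a policy gate in front of every agent action: low-risk actions execute on their own, while everything else is held for a human reviewer by default. The sketch below is illustrative only; the action names and the deny-by-default policy are assumptions, not part of any AWS or Bedrock API.

```python
# Minimal human-in-the-loop gate for agent actions. Actions on the
# allow-list run autonomously; everything else is queued for review.
AUTO_APPROVED = {"read_ticket", "summarize_logs"}  # hypothetical low-risk actions

def route_action(action: str, pending_queue: list) -> str:
    if action in AUTO_APPROVED:
        return "executed"
    pending_queue.append(action)  # hold the action for a human reviewer
    return "pending_approval"

queue = []
print(route_action("summarize_logs", queue))   # runs autonomously
print(route_action("delete_records", queue))   # held for approval
```

Deny-by-default is the conservative choice here: when organizations scale to hundreds or thousands of agents, the allow-list becomes the auditable artifact that answers "who owns agent behavior."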
Unified Data Analytics Workspace: Sensing, Deciding, Acting
AWS is doubling down on its vision for a unified, comprehensive analytics workspace—one that integrates AI, data engineering, and decision automation seamlessly. The new updates to Studio, Notebooks, and QuickSight create a cohesive ecosystem for developers and analysts to sense, decide, and act all within a single environment.
Notable enhancements include the introduction of SageMaker data agents to address dynamic datasets, serverless notebooks (lightweight, scalable solutions ideal for deployment flexibility), and real-time catalog updates that enable metadata ingestion, notifications, and S3 integration.
The strategic goal here is clear: AWS is positioning itself as a competitor to Microsoft Fabric and Databricks, aiming to win by delivering a unified flow for semantics, governance, and decision-making—all while maintaining developer choice. As enterprises increasingly seek to integrate analytics and decision-making in a connected environment, this new offering holds promise for executives seeking to improve efficiency in managing complex workflows.
Cost-Efficient, Open Compute Remains Central
While AWS’s keynote pushed higher up the stack, foundational compute efficiency was far from neglected. Scaling cost-efficient AI workloads has become critical as organizations deploy increasingly complex and resource-intensive models, and AWS delivered announcements such as the Graviton5 processors, the Trainium3 and Trainium4 roadmaps, and enhanced GPU performance for ultra-scaled environments.
A particularly noteworthy focus was on inference cost management—a FinOps trend designed to help enterprises optimize token-level spending while running large-scale AI workloads. Questions such as “How do we generate tangible cost savings while scaling hundreds or thousands of agents?” highlight the real economics of deploying AI at scale.
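The token-level economics behind that FinOps question are easy to sketch. The rates and usage figures below are placeholders I chose for illustration, not actual AWS or model-provider pricing; the point is how quickly fleet-scale agent deployments multiply into real spend.

```python
# Back-of-the-envelope monthly inference cost for a fleet of agents.
# All inputs are hypothetical; substitute your own provider's rates.
def monthly_inference_cost(agents: int, calls_per_agent_per_day: int,
                           tokens_per_call: int, usd_per_million_tokens: float,
                           days: int = 30) -> float:
    total_tokens = agents * calls_per_agent_per_day * tokens_per_call * days
    return total_tokens / 1_000_000 * usd_per_million_tokens

# 1,000 agents x 200 calls/day x 2,000 tokens/call at $3 per million tokens
cost = monthly_inference_cost(1000, 200, 2000, 3.0)
print(f"${cost:,.0f} per month")  # $36,000 per month
```

Even at a modest $3 per million tokens, a thousand-agent fleet lands in the tens of thousands of dollars a month, which is why token-level cost controls show up alongside the scaling story.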
AWS’s investments here reinforce its value proposition for driving economic simplicity across the largest-scale deployments. Managing large-scale environments efficiently is central to AWS’s strategy to differentiate itself from competitors in the cloud AI space.
Multi-Cloud and Sovereign AI Strategies
AWS is embracing multi-cloud for the first time in a significant way—a notable shift from earlier years, when the narrative often centered on routing traffic exclusively to AWS regions. This year’s announcement of high-speed private connections between clouds, starting with Google Cloud, opens new possibilities for moving and sharing AI-driven workflows seamlessly across cloud boundaries.
Further investments into sovereign deployments, including specialized AI factories for regulated industries, highlight AWS’s understanding of evolving governance needs. In highly regulated sectors where data cannot reside outside specific geographical boundaries, solutions enabling decisions “where the data lives” (rather than where the cloud resides) will be crucial. As sovereign regulations tighten globally, expect these strategies to play an increasingly significant role in enterprise decision-making frameworks.
The Semantic Future
One of the major trends emerging—not just from AWS, but across the broader industry—is the rise of semantic layers as a bottleneck for trustworthy AI. Data catalogs are no longer just repositories for metadata; they’re becoming critical semantic and contextual layers that power AI-driven decisioning. Executives recognize the need to equip AI agents with the context required to ensure accuracy, governance, and trust across workflows.
A year ago, only a fraction of data leaders were talking about semantics. Today, over half of the executives I’ve spoken with cite context challenges for AI agents as a core roadblock to success.
Looking ahead, it’s clear that AWS is preparing to capitalize on this trend. Investments in areas such as S3 vector support, unified metadata systems, and Bedrock grounding updates for workflow context will likely expand into semantics next year. My bet for AWS re:Invent 2026? The company could establish itself as the go-to platform for semantic-driven enterprise workflows.
Closing Thoughts
AWS isn’t just building infrastructure anymore. It aims to solidify its position as the platform for running AI agents, automating governance, and driving reliable decisions across scalable, efficient, multi-cloud environments.
Whether your focus is boosting productivity with autonomous agents, unifying analytics environments, or managing inference costs at scale, these announcements align clearly with where the market is heading. In the year ahead, strategic investments in semantic-based governance, autonomous process automation, and efficient compute will become essential competitive differentiators.
Let’s continue the dialogue—I’d love to hear what resonated most for your organization. If you want a deeper analysis of the implications for your business, feel free to drop me a message. For now, I hope this breakdown helps frame the strategic shifts from AWS re:Invent and position your enterprise for the next wave of AI-driven transformation.
Anthropic, Block and OpenAI have contributed their agent-connectivity technologies and emerging standards to the Agentic AI Foundation, a project under The Linux Foundation.
The move highlights how one of the biggest issues with deploying AI agents is connecting and governing them. Anthropic, Block and OpenAI contributed Model Context Protocol (MCP), goose and Agents.md, respectively.
MCP is the best-known standard for connecting AI models to tools, data and applications, but goose, a local-first AI agent framework, and Agents.md, a standard for instructing coding agents, will also be key to connecting agents.
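To make the "connect models to tools" idea concrete: MCP is built on JSON-RPC 2.0, and a client invokes a server-exposed tool with a `tools/call` request. The sketch below constructs such a message with only the standard library; the tool name and arguments are hypothetical examples, not any real server's API.

```python
import json

# Sketch of an MCP-style JSON-RPC 2.0 request. A client sends this to an
# MCP server to invoke a tool the server previously advertised. The tool
# name ("get_forecast") and its arguments are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_forecast",          # a tool listed via tools/list
        "arguments": {"city": "Paris"},  # validated against the tool's schema
    },
}

# Serialize for transport (MCP supports stdio and HTTP-based transports).
wire_message = json.dumps(request)
print(wire_message)
```

The point of the standard is that any MCP-aware client can discover and call tools this way without custom per-vendor integration code.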
The goal of the Agentic AI Foundation (AAIF) is to provide "a neutral, open foundation to ensure agentic AI evolves transparently and collaboratively."
At first glance, AAIF has most of the big AI agent players lined up. Platinum members include Amazon Web Services, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft and OpenAI.
AAIF is launching about a year after Anthropic released MCP, which has become a key standard. Multiple technology vendors have released MCP servers.
The AAIF noted that it has multiple members including Cisco, Datadog, Docker, IBM, Okta, Oracle, Salesforce, Snowflake and a host of others.
Accenture and Anthropic launched a partnership that revolves around driving Claude deployments in the enterprise.
The news lands a few days after Accenture announced a similar partnership with OpenAI.
Here's the reality underpinning Accenture's deals with both Anthropic and OpenAI: implementing AI isn't as easy as portrayed, and both LLM providers will need partners and integrators to scale. The hyperscale cloud providers, AWS, Google Cloud and Microsoft Azure, all have extensive partner ecosystems, and Google Cloud and AWS have their own models.
Under the Anthropic deal, Accenture will train 30,000 consultants to drive Claude adoption and become a premier AI partner for using Claude Code.
The two companies will also deliver joint offerings for CIOs and for deploying AI in financial services, life sciences, health and the public sector.
For Anthropic, the Accenture deal is about distribution.
As for Accenture's OpenAI partnership, the consulting firm agreed to leverage ChatGPT Enterprise internally and be a primary AI partner.
The companies also launched an AI program aimed at multiple industries, including financial services, life sciences, health and the public sector.
CVS Health is betting it can leverage technology and AI to create an "engagement as a service" strategy and an integrated platform that ties together its brands that include Aetna, CVS Caremark, CVS Pharmacy and Health Care Delivery.
The company is outlining its plans at its investor day. CVS Health CEO David Joyner said the company is focused on "building a simpler, more connected and more affordable health care experience for consumers, health care professionals, and payors."
In late 2023, CVS laid out a strategy that revolved around a data flywheel across its customer base that would improve healthcare delivery across multiple touch points. At the time, CEO Karen Lynch touted a plan to "provide panoramic care" for its members. Lynch was replaced by Joyner in October 2024.
Although leadership changed, CVS Health's data assets and touch points remain. CVS Health is still sprawling with 300,000 employees, 185 million consumers and 1.5 million relationships with health care providers. In addition, CVS Health has multiple touch points via its pharmacies, health care delivery facilities and stores.
The working theory for CVS Health, and other large enterprises, is that AI can be used to break down data silos and provide better experiences.
CVS projected 2025 revenue of at least $400 billion with operating income of $4.37 billion to $4.54 billion. For 2026, CVS sees revenue of at least $400 billion with operating income of $13.26 billion to $13.60 billion. CVS Health is projecting margin improvements at Aetna and CVS Caremark with continued earnings at CVS Pharmacy, which the company calls the "front door to healthcare."
The challenge and promise for CVS Health is integrating its various units so there's one platform to engage consumers and deliver healthcare. CVS Health's 2023 plan revolved around a data flywheel and since then AI has emerged and may make the company's plans easier to implement.
Here's the high-level plan.
For this next chapter to come together, CVS Health will need to deliver more healthcare value via "Engagement as a Service," with one platform that combines its units in ways useful to consumers.
CVS Health argues that its credibility as a technology provider rests on AI already being used across Aetna, CVS Caremark, CVS Pharmacy and its health care delivery facilities. That knowhow will feed the engagement system and enable the company to be proactive, reach consumers and create an ecosystem.
There's no question that healthcare delivery can be more digital and customer friendly. The big question is whether CVS Health is the company to finally make headway on the problem.
Internal efficiencies
In a presentation, CVS Health argued that it can engage consumers better because it has already leveraged AI internally to become more efficient.
CVS Health said it has saved more than $1 billion to invest in new growth and AI.
Some examples of operating efficiencies include:
In the Aetna unit, the company has saved 90 minutes a day per nurse by consolidating clinical documentation, reduced call center volume by 30% and created one care management system from four disparate ones.
Aetna will accelerate interoperability of clinical data.
Oak Street Health, which provides health care services, has ambient AI scribe tools at 90% of its facilities. CVS Health's delivery brands include Oak Street Health, Signify Health and MinuteClinic.
CVS Caremark is now able to process more than 300 claims per second at peak periods with a 99% first call resolution.
CVS Pharmacy will continue to invest in procurement and supply chain to drive down prices.
Does that ability to drive internal efficiencies mean CVS Health can become an engagement platform provider? We'll find out.
Meet the health care consumer engagement platform
The big plan from CVS Health is to create a platform that will reimagine health care. Tilak Mandadi, Chief Experience and Technology Officer at CVS Health, laid out the plan.
With better engagement, CVS Health can enhance access, simplify navigation and improve affordability. CVS Health is aiming to become a trusted source and consolidate a fragmented experience.
CVS Health certainly has the reach. CVS Health said more than 185 million consumers engage with the company each year and 85% of Americans live within 10 miles of a CVS Health location. With its omnichannel reach, CVS Health can ultimately commercialize its platform.
For CVS Health, one integrated engagement platform can drive growth for all of its units.
What's unclear is whether CVS Health can become a technology provider. The plan for CVS Health is to launch a series of technologies and services in 2026. The play is not only to engage its own customer base but to create an ecosystem revolving around its platform.