
TCS Acquires Coastal Cloud: Filling a Critical Gap for Salesforce’s Agentic Future

TCS has announced the acquisition of Coastal Cloud, a leading US-based Salesforce Summit Partner, in a $700 million all-cash transaction. The deal moves TCS into the top tier of Salesforce advisory and consulting firms globally and strengthens its ability to deliver AI-first, agent-driven transformation programs. Coastal Cloud brings Salesforce-native advisory depth, strong mid-market relationships, and close alignment with Salesforce product leadership through its role on the Salesforce Partner Advisory Board.

What Salesforce buyers are increasingly looking for

In conversations with enterprise buyers, the focus has shifted beyond implementation capacity. Salesforce customers are looking for partners that can connect platform decisions to business outcomes, design operating models around AI and agents, and scale these programs across regions and business units. As Salesforce advances Agentforce 360, buyers consistently point to the need for help with data readiness, governance, integration, and continuous optimization. This has widened the gap between boutique Salesforce specialists with deep platform expertise and large GSIs that bring scale but have often lacked senior Salesforce advisory leadership.

How this acquisition fills a gap for TCS customers

This is where the Coastal Cloud acquisition matters for TCS. In buyer discussions, TCS has been viewed as strong in enterprise scale, industry context, and global delivery, but Salesforce programs often started deeper in execution rather than advisory. Coastal Cloud adds that missing front-end capability. For TCS customers, Salesforce engagements can now begin with Salesforce-native business and industry advisory and then scale globally with consistent delivery, AI engineering, and governance. This becomes increasingly important as Salesforce programs shift from CRM optimization to agent-driven, cross-functional transformation.

Why this matters for Coastal Cloud customers

From a buyer perspective, Coastal Cloud customers have historically valued deep Salesforce expertise and close partnership. However, in conversations about scaling, global rollout, and integration with enterprise platforms, limitations often emerged. With TCS, these customers gain access to global delivery, vertical accelerators, and enterprise-grade AI capabilities, while retaining Salesforce depth and continuity. This is particularly relevant as Agentforce programs extend across sales, service, marketing, and revenue operations.

[Source: Salesforce]

Agentforce 360, GSIs, and the competitive landscape

Agentforce 360 signals a shift toward agents operating across business workflows, not just automating tasks. In buyer conversations, it is clear that delivering these programs requires process redesign, data unification, security, governance, and operational ownership at scale. This favors GSIs. Accenture and Deloitte have long paired Salesforce depth with strong business consulting. Cognizant and Infosys have invested heavily in Salesforce delivery and platform skills but are often perceived as more execution-led. Coastal Cloud gives TCS a clearer path to compete across this spectrum by strengthening Salesforce-native advisory leadership alongside its global delivery engine. The differentiator, as buyers note, will be who can operationalize agents reliably across the enterprise, not who can deploy them fastest.

What buyers should ask now

  • Does my Salesforce partner combine Salesforce-native advisory depth with global delivery scale for agent-driven programs?
  • How will Agentforce agents be governed, monitored, and evolved across regions and business units?
  • Can Salesforce agents be integrated with enterprise data, security, and non-Salesforce systems?
  • What industry-specific use cases and accelerators exist beyond generic Agentforce demonstrations?
     

Closing perspective

This acquisition reflects a clear market signal emerging in buyer conversations. Enterprises want fewer handoffs, stronger advisory up front, and partners that can carry agentic programs from design through sustained execution. With Coastal Cloud, TCS is closing a meaningful capability gap and positioning itself more directly for the next phase of Salesforce-led, agent-driven enterprise transformation.


Broadcom CEO comments highlight build vs. buy AI debate

Companies looking to leverage artificial intelligence for competitive advantage are increasingly choosing to go custom. It's build over buy at massive scale.

Broadcom's fourth quarter earnings results were an eye-opener for the industry after CEO Hock Tan laid out a few interesting tidbits. Broadcom is benefiting from XPUs, its term for custom AI accelerators. Google's TPUs, which have emerged as a threat to Nvidia, account for a big chunk of Broadcom's revenue.

Tan also revealed that Anthropic is buying Google Ironwood TPUs, the latest generation. Some choice quotes:

  • "Our custom accelerated business more than doubled year-over-year, as we see our customers increase adoption of XPUs, as we call those custom accelerators in training their LLM and monetizing their platforms through inferencing APIs and applications."
  • "These XPUs, I may add, are not only being used to train and inference internal workloads by our customers, the same XPUs in some situations have been extended externally to other LLM peers, best exemplified at Google, where the TPUs use in creating Gemini, have also been used for AI cloud computing by Apple, Coherent and SSI as an example."
  • "Last quarter, Q3 '25, we received a $10 billion order to sell the latest TPU Ironwood racks to Anthropic. And this was our fourth customer that we mentioned. And in this quarter Q4, we received an additional $11 billion order from the same customer for delivery in late 2026."
  • "That does not mean our other two customers are using TPUs. In fact, they prefer to control their own destiny by continuing to drive their multiyear journey to create their own custom AI accelerators or XPU racks, as we call them. And I'm pleased today to report that during this quarter, we acquired a fifth XPU customer through a $1 billion order placed for delivery in late 2026."

The big takeaway is that custom is the thing right now. For AI workloads at scale, this build over buy conclusion isn't that surprising. Google's TPUs are gaining favor. AWS launched its Trainium 3 processor and outlined Trainium 4. These hyperscalers are going custom to optimize for costs and monetize as soon as they stand up data centers.

Tan said customers are choosing to go custom for multiple reasons, but price-performance is the big one. Rivian also noted that agility is a big factor. Rivian's custom AI processor enables it to get started on software well before the chip lands.

The move toward custom components for AI systems is notable, but the market is immature. When markets are young, you tend to build your own stuff. Ask Amazon and Google. The big question is whether this custom-all-the-time approach lasts. Tan provided a bit of history when asked about the future of the XPU.

He said:

"Don't follow what you hear out there as gospel. It's a trajectory. It's a multiyear journey. And many of the players, and not too many players, doing LLM wants to do their own custom AI accelerator for very good reasons. You can put in hardware what, if you use a general purpose GPU, you can only do in software and kernels. You can achieve performance-wise so much better in the custom purpose-designed, hardware-driven XPU."

Will that mean custom approaches will be dominant over time? Not at all. Tan said:

"Will that mean that over time, they all want to go do it themselves? Not necessarily. And in fact, technology in silicon keeps updating, keeps evolving. And if you are an LLM player, where do you put your resources in order to compete in this space, especially when you have to compete at the end of the day against merchant GPUs who are not slowing down in the rate of evolution. I see that as this concept of customer tooling is an overblown hypothesis, which frankly, I don't think will happen."

These comments are notable if you expand them to broader enterprises. My take:

  • Build over buy makes a lot of sense right now for enterprises, though not necessarily at the hardware layer. If you can use AI to code and transform, it's possible that you don't need to pay your SaaS tax. As for hardware, you'll consume custom compute from cloud providers.
  • Agentic AI interfaces could relegate a lot of your applications to plumbing. See: The enterprise LLM questions you should be asking | Agentic AI: Is it really just about UX disruption for now?
  • OpenAI and Anthropic see this trend and are increasingly tapping into enterprise processes. See: AI agents, automation, process mining starting to converge
  • Vendors will tell you repeatedly that building your own systems is a fool's errand, but if the focus is on process the strategy makes sense. However, Tan noted repeatedly that the custom route is a multiyear journey. The same multiyear approach matters for software too.
  • In the end, enterprises want to control their own destinies and be agile. Locking in to any one vendor means you have no leverage. This fact applies to your data layer too and vendors like Databricks and Snowflake. See: AI strategies and projects: The hope, the fear and everything in between
  • Enterprises are likely to think about custom apps because they're tired of SaaS costs rising as much as health care costs. Perhaps the suite always wins, but that phase in the AI app market may not arrive for years.



Veeam and Securiti: Data Trust Redefines Security Strategy

Veeam completed the acquisition of Securiti today, a move that reflects how customer expectations are changing as AI becomes embedded across enterprise workflows.

For a long time, enterprises approached data protection and security through an operational lens. Backups focused on recovery. Security tools focused on infrastructure and access. Governance lived in a separate world, often driven by compliance teams. Those boundaries are now breaking down, and data itself is moving to the center of security decision-making.

AI is the catalyst.

AI changed how data behaves, and that changed what security teams need

AI has turned previously dormant data into active fuel. Unstructured documents, logs, recordings, and historical files are now being indexed, summarized, embedded, and reused across copilots and agent-driven workflows. Data is no longer static or slow moving. It is accessed, transformed, and recombined at machine speed.

That shift exposes a problem many organizations have lived with for years but could afford to ignore. Most enterprises do not have a consistent, up-to-date understanding of what data they have, where it lives, who can access it, and what risk it carries.

In AI-driven environments, that gap moves beyond governance and becomes a delivery issue. Security, privacy, and risk teams increasingly slow or pause AI initiatives because they cannot establish trust in the data supply chain quickly enough.

[Source: Veeam]

Why data awareness is moving closer to the core platform

This is where capabilities such as data discovery, classification, and risk context start to matter more. Often described as data security posture management (DSPM), these capabilities help organizations continuously understand sensitive data across structured and unstructured environments and apply policy-driven controls.
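
For readers unfamiliar with what DSPM-style classification involves, here is a minimal sketch. It is not Securiti's product; the regex detectors and risk labels are simplified assumptions, and real platforms use far richer detection models plus connectors into each data store:

```python
import re

# Hypothetical detectors; real DSPM tools use ML classifiers and
# hundreds of detectors, not three regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(record: str) -> dict:
    """Return the sensitive-data types found and a coarse risk label."""
    found = [name for name, pat in PATTERNS.items() if pat.search(record)]
    risk = ("high" if {"ssn", "credit_card"} & set(found)
            else "medium" if found
            else "low")
    return {"types": found, "risk": risk}
```

The point of the sketch is the policy hook: once every record carries a risk label, downstream systems (backup, restore, AI pipelines) can apply consistent controls based on that label rather than on where the data happens to live.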

What is changing is the role these capabilities play. Data awareness is becoming foundational to how security, governance, and AI programs operate, rather than something added later.

Securiti’s role in this shift reflects what buyers are looking for: persistent visibility into data, contextual understanding of sensitivity and risk, and the ability to apply consistent policies as data moves and is reused. As AI usage expands, that visibility becomes essential.

From “can we recover” to “can we recover and trust what we restored”

Another shift I see in buyer conversations is a change in how recovery success is defined.

Restoring systems quickly is no longer sufficient. Teams want confidence that restored data is clean, compliant, and safe to reuse. In AI-driven environments, restored data is often reintroduced into analytics, search, or downstream AI workflows, which amplifies any underlying data issues.

Deeper data understanding increasingly influences operational outcomes. Knowing what data is sensitive, what data was impacted, and what data should be prioritized or restricted now carries as much weight as the mechanics of recovery.

What this means for enterprise buyers

The broader takeaway from this acquisition goes beyond one vendor’s roadmap and points to how enterprise buying criteria are evolving.

Buyers are increasingly looking for platforms that:

  • Provide continuous visibility into sensitive data across environments
  • Apply consistent policies to data, regardless of where it resides or how it is accessed
  • Support AI use cases without introducing unmanaged data risk
  • Connect data understanding to real operational actions, including recovery and reuse

This does not imply that every organization needs a single, monolithic platform. It does suggest that fragmented approaches, where data insight, security controls, and operational processes remain siloed, are becoming harder to sustain.

Where this leaves security leaders

Veeam’s acquisition of Securiti reflects a broader market reality. AI has shifted the center of gravity in security from systems to data. As data becomes more fluid, more valuable, and more exposed, enterprises need stronger, more integrated ways to understand and control it.

Data discovery and classification may not be the most visible parts of an AI strategy, but they are quickly becoming some of the most consequential. Security, governance, and recovery all now converge on a single prerequisite.

Do you actually trust your data?


Rivian’s AI strategy: Four takeaways

Rivian's vertically integrated approach to autonomous driving and AI is enabled by a data flywheel the company uses to train and optimize its models.

The automaker held its first Autonomy & AI Day and perhaps the biggest lesson is that Rivian is an example of a company using its first-party data to develop new opportunities.

Rivian CEO RJ Scaringe highlighted the company's strategy, which revolves around owning its AI stack. That stack includes purpose-built silicon and a platform used to ingest data and train models. Rivian is looking to get into the AI and autonomy game, competing with the likes of Tesla as well as Alphabet unit Waymo.

"Directly controlling our network architecture and our software platforms in our vehicles has, of course, created an opportunity for us to deliver amazingly rich software. But perhaps even more importantly, this is the foundation of enabling AI across our vehicles and our business," said Scaringe.

Key news items from Rivian's investor meeting:

  • Rivian unveiled its Rivian Autonomy Processor (RAP1), a custom 5nm processor that integrates processing and memory on a single multi-chip module.
  • RAP1 features RivLink, which is a low latency interconnect technology that networks chips for more processing power.
  • The company outlined its third-gen Autonomy computer, or Autonomy Compute Module 3 (ACM3). ACM3 can process 5 billion pixels per second.
  • Rivian has an in-house developed AI compiler and platform. The platform, the Rivian Autonomy Platform, features an end-to-end data loop and its Large Driving Model (LDM), which is an LLM for driving. The LDM will distill strategies from Rivian's datasets.

Going forward, Rivian plans to integrate LiDAR into its upcoming R2 models at the end of 2026. LiDAR augments Rivian's multi-sensor strategy. Rivian also said it will add Universal Hands-Free driving features to its second-gen R1 vehicles. The system will be available on 3.5 million miles of roads in the US and Canada.

Rivian's AI strategy beyond autonomy includes Rivian Unified Intelligence, a foundation of multimodal models, multiple LLMs, and data. The platform is designed to enable Rivian to roll out features, improve service, and offer predictive maintenance. Rivian is also launching a next-gen voice interface in early 2026 that uses its edge models, third-party integrations, and reasoning LLMs.

Beyond the news barrage from Rivian, there are multiple takeaways from the company's strategy meeting. Here are a few:

Rate of change only increasing. Rivian has created an architecture that can adapt to the pace of change. Enterprises will need to work under the assumption that the rate of change over the next five years will be much faster than the last five years.

"If we look forward 3 or 4 years into the future, the rate of change is an order of magnitude greater than what we've experienced in the last 3 or 4 years," said Scaringe.

Also: Uber outlines its autonomous vehicle plan | GM to integrate Google Gemini, delivers unified software-defined vehicle architecture

First-party data is everything. "Our approach to building self-driving is really designed around this data flywheel. Our deployed fleet has a carefully designed data policy that allows us to identify important and interesting events that we can use to train our large model offline, before distilling the model back down into the vehicle," said Scaringe.

AI will touch every process. Rivian is leveraging its AI backbone for its vehicles and autonomous efforts. But Rivian's AI backbone also runs through the enterprise. Scaringe said its AI strategy will impact its sales and service model, supply chain and manufacturing infrastructure.

You may need to build your own. Vidya Rajagopalan, Senior Vice President of Electrical Hardware at Rivian, explained why the company had to develop its own processors. She said:

"It's important to address why we chose to build in-house silicon. The reason for doing it is velocity, performance and cost.

With our in-house silicon development, we're able to start our software development almost a year ahead of what we can do with supplier silicon. We actually had software running on our in-house hardware prototyping platform well ahead of getting first silicon. Our hardware and software teams are actually co-located and they're able to develop at a rapid pace that is just simply not possible with supplier silicon."

Rajagopalan said the ability to customize is also critical for designing for current use cases and the future. In addition, Rivian can optimize to save money.

Think multiple models. Wassym Bensaid, Chief Software Officer at Rivian, said the company has developed its own model for driving, but has a "suite of specialized agents."

"Every Rivian system from manufacturing, diagnostics, EVR planning, navigation becomes an intelligent node through MCP. And the beauty here is we can integrate third-party agents. And this is completely redefining how apps in the future will integrate in our cars," said Bensaid. "We orchestrate multiple foundation models in real time, choosing the right model for each task. And we support memory and context, allowing us to offer advanced levels of personalized experience."

Bensaid said the use of multiple models and Rivian's architecture is designed to move workloads from the cloud to the edge. Rivian Unified Intelligence is the connective tissue.
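
Bensaid's description of orchestrating multiple foundation models in real time, choosing the right model for each task and preferring on-vehicle compute, can be sketched in miniature. The model names, task categories, and routing rule below are illustrative assumptions, not Rivian's actual architecture:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    location: str   # "edge" (on-vehicle) or "cloud"
    handles: set    # task categories this model is suited for

# Illustrative registry; a real orchestrator would also weigh latency,
# cost, context length, and connectivity.
MODELS = [
    Model("edge-voice", "edge", {"voice", "navigation"}),
    Model("cloud-reasoning", "cloud", {"diagnostics", "planning"}),
]

def route(task: str, prefer_edge: bool = True) -> Model:
    """Pick a capable model, preferring on-vehicle (edge) compute."""
    candidates = [m for m in MODELS if task in m.handles]
    if not candidates:
        raise ValueError(f"no model handles task: {task}")
    if prefer_edge:
        # False sorts before True, so edge models come first.
        candidates.sort(key=lambda m: m.location != "edge")
    return candidates[0]
```

In this sketch, `route("voice")` selects the edge model so the interaction stays local, while `route("diagnostics")` falls through to the cloud reasoning model.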


Custom AI processors mean Broadcom printed money in Q4

Broadcom reported better-than-expected fourth quarter results as it continued to see a revenue surge due to custom AI chips.

The company reported fourth quarter net income of $8.52 billion, or $1.74 a share, on revenue of $18.01 billion, up 28% from a year ago. Non-GAAP earnings for the fourth quarter were $1.95 a share.

Wall Street was expecting non-GAAP earnings in the fourth quarter of $1.86 a share on revenue of $17.49 billion.

As for the outlook, Broadcom projected first quarter revenue of $19.1 billion, up 28% from a year ago.

Broadcom, despite acquiring VMware to beef up its software business, is still a hardware story. CEO Hock Tan said revenue growth was "driven primarily by AI semiconductor revenue increasing 74% year-over-year." Broadcom makes chips for Google and inked a deal for custom processors for OpenAI. 

Tan added that Broadcom expects momentum to continue in the first quarter, driven by demand for custom AI accelerators and Ethernet AI switches.

In the fourth quarter, Broadcom's semiconductor business was 61% of sales and infrastructure software was 39%. Chip revenue was up 35% in the quarter and software was up 19%.

As a result, Broadcom is just printing money. Cash flow from operations in the fourth quarter was $7.7 billion, up 37% from a year ago. Free cash flow was up 36%. Broadcom's cash and cash equivalents checked in at $16.18 billion, up from $10.72 billion in the previous quarter.

For fiscal 2025, Broadcom reported net income of $23.13 billion, or $4.77 a share, on revenue of $63.89 billion, up 24% from fiscal 2024. 

Tan said on the earnings call:

  • "Our custom accelerated business more than doubled year-over-year, as we see our customers increase adoption of XPUs, as we call those custom accelerators in training their LLM and monetizing their platforms through inferencing APIs and applications."
  • "These XPUs, I may add, are not only being used to train and inference internal workloads by our customers, the same XPUs in some situations have been extended externally to other LLM peers, best exemplified at Google, where the TPUs use in creating Gemini, have also been used for AI cloud computing by Apple, Coherent and SSI as an example."
  • "Last quarter, Q3 '25, we received a $10 billion order to sell the latest TPU Ironwood racks to Anthropic. And this was our fourth customer that we mentioned. And in this quarter Q4, we received an additional $11 billion order from the same customer for delivery in late 2026."
  • "That does not mean our other two customers are using TPUs. In fact, they prefer to control their own destiny by continuing to drive their multiyear journey to create their own custom AI accelerators or XPU racks, as we call them. And I'm pleased today to report that during this quarter, we acquired a fifth XPU customer through a $1 billion order placed for delivery in late 2026."


OpenAI calls GPT-5.2 its most advanced model for work

OpenAI launched GPT-5.2 in what appears to be its answer to Google's Gemini 3.0. According to OpenAI, GPT-5.2 is its most advanced model for work and long-running agents.

The company leaned into the productivity case for GPT-5.2. In a blog post, OpenAI said:

"We designed GPT‑5.2 to unlock even more economic value for people; it’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long contexts, using tools, and handling complex, multi-step projects."

OpenAI touted the usual benchmarks for GPT-5.2's improvements, but it's notable that the company is also using its GDPval benchmark. GDPval looks at how models perform on knowledge work across 44 occupations.

With the positioning of GPT-5.2, OpenAI is clearly making the return on investment case for its latest foundational model as it competes with Google and Anthropic. Microsoft said it has added GPT-5.2 to Microsoft 365 Copilot, Copilot Studio, Microsoft Foundry and GitHub Copilot.

"GPT‑5.2 Thinking beats or ties top industry professionals on 70.9% of comparisons on GDPval knowledge work tasks, according to expert human judges. These tasks include making presentations, spreadsheets, and other artifacts. GPT‑5.2 Thinking produced outputs for GDPval tasks at >11x the speed and <1% the cost of expert professionals," said OpenAI.

OpenAI's comparison of the GPT-5.2 and GPT-5.1 models is also worth noting.

The upshot here is that OpenAI is pivoting to real-world tasks for judging models. Perhaps OpenAI is tired of ceding corporate use cases to Anthropic's Claude.

As for the rollout, OpenAI said:

"In ChatGPT, we’ll begin rolling out GPT‑5.2 (Instant, Thinking, and Pro) today, starting with paid plans (Plus, Pro, Go, Business, Enterprise). We deploy GPT‑5.2 gradually to keep ChatGPT as smooth and reliable as we can; if you don’t see it at first, please try again later. In ChatGPT, GPT‑5.1 will still be available to paid users for three months under legacy models, after which we will sunset GPT‑5.1."


OpenAI, Disney deal foreshadows where media is headed

Disney will license its stable of characters to OpenAI in a three-year deal that will enable Sora users to create social videos.

Terms of the agreement include:

  • Sora will have access to more than 200 Disney, Marvel, Pixar and Star Wars characters.
  • Sora users can create videos that will be available to stream on Disney+.
  • Disney will use OpenAI models throughout its enterprise and become a major customer. Disney employees will use ChatGPT.
  • OpenAI APIs will be used to build new products.
  • Disney will invest $1 billion in OpenAI and have warrants to buy more shares.

On its own, the OpenAI-Disney partnership is standard issue. However, Disney is opening the door for other media companies to license IP and characters to models. After this deal, it's not a stretch to see Google Gemini do something similar. This OpenAI-Disney deal is the equivalent of putting Mickey Mouse on the Apple Watch.

Short term, media giants will license IP to AI players just like they do streaming companies like Netflix.

But the real thing to watch is whether media companies use LLMs to leverage their own IP. Media companies have historically been behind on new technology and AI isn't much different.

Here's what media companies should be doing:

  • Develop their own models powered by their own data just like enterprises do.
  • Create new experiences so customers can spin up their own episodes. The Simpsons may be the best training set ever for a model. Why not be able to spin up my own Bart adventure? AI can monetize vast libraries of content.
  • With an AI-driven approach, there's no reason why media companies couldn't create what essentially is the next streaming market.
  • Given that backdrop, it's not surprising that Netflix and Paramount Skydance are dueling to buy Warner Bros. Discovery. Rest assured the Ellison family, which controls Paramount Skydance, knows where this game is going. We're at the IP and data gathering phase of this game. The media company with the best first-party data (characters, franchises and audience) can win the AI era.

At a panel at AWS re:Invent 2025, Albert Cheng, VP of AI for Prime Video, said AI is becoming the next streaming moment. Cheng said:

"I feel the same way today about AI as I did when I first started pushing streaming at Disney. It's the start of another transformation. Streaming transformed distribution and I think AI is going to transform the way content is created."

This mashup of AI and media is just starting. The deal between OpenAI and Disney is just the first volley.


Elevating the Value of Speaking Up with Voice

AI is like a pebble (or boulder) dropped into a calm glassy pool we call experience. Once it hits the surface, AI creates ripples that can shift and change that still calm in weird and wonderful ways. Arguably, the first ripple was AI’s capacity to amplify intelligence and change where and how we could turn conversations-into-data, data-into-intelligence and intelligence-into-decisions. The second ripple was generative AI’s capacity to ingest and generate content from text prompts, delivering everything from bold new images to stunningly accurate summaries.

Now, we prepare for the third ripple: AI’s capacity to deliver voice-first engagements.

What is a voice-first engagement?

  • A voice-first experience is one that leverages AI to power spoken language as the primary form of engagement
  • In a voice-first experience, the customer or employee simply asks a question to launch a bi-directional conversation
  • What started as robotic voices and limited responses has become full conversations between AI and people, with human-like voices, inflections and emotions
  • The smartest voice-first experiences thread conversations across channels of choice, and can connect to fully self-service journeys or to live, human agent engagement

It is hard to deny that AI-powered voice generation is a hot conversation. Foundation models like Amazon's Nova Sonic make building voice-first experiences far more accessible. Partnerships like the one between ServiceNow and 3CLogic continue to expand, making it more seamless to deploy and reshape experiences with voice.

In announcing a new layer to a long-standing partnership, ServiceNow and 3CLogic intend to connect conversations more directly to relationships by bringing AI Voice Agents into enterprise workflows. In a press release, VP of CRM and Industry Workflows at ServiceNow, Terence Chesire, emphasized that these voice-first agents and experiences could “automate service at scale, improve efficiencies and deliver experiences that feel more human.”

It’s this connection to humanity that will make a voice-first AI experience strategy critical as we head into the next phase of the AI maturity curve that transitions AI agents into proactive experience-empowering AI assistants.

So why voice-first? Because customers expect it. Full stop. Today’s customer service and support landscape is not the only place voice is leading the way. From home (“Hey Alexa, order that new snack everyone is talking about!”) to the car (“Siri, I’m lost and need to charge…get me home!”) to out and about shopping (“Find three other stores where this shirt is available and check if it’s on sale,”) voice has become the interface of choice for a growing list of engagements.

Hands-free self-service where an utterance is the prompt is quickly becoming the standard to streamline the customer’s journey to their resolution. What’s so different about this moment is the expectation for free flowing, more human connection. Customers have seen the generative capacity for voice and video and expect that same back and forth, not the cold monotony of a GPS mispronouncing street names. When they opt into an AI powered experience, they still expect humanity to shine through while not changing their own patterns of speech and behavior. They just want to talk.

Customers aren’t the only stakeholders with high expectations: experience leaders have their own list of demands. They expect quality of voice with total control and governance. This paves a path well beyond a “bot” or “assistant” rolling a recorded voice message. Instead, these new voice-first assistants come with the power of generative and agentic AI that can listen, question, engage, reason, and even show appropriate emotions, from humor, sympathy, and formality to, most of all, empathy. This new voice adopts the tone, tenor, language, and lingo of a brand, turning a passive moment into a truly branded engagement.

As organizations forge a path into this voice-first era, there are new questions that must be answered and embraced enterprisewide.

What are the “obvious” moments for voice AI?

  • Service-led environments are a clear starting point.
  • The contact center can develop more self-service engagements that quickly deliver branded service through voice AI agents, while human agents gain the time and bandwidth to tackle more complex, higher-value service cases.
  • Internal employee experiences are also an opportunity: “internal customers” can have voice-first service experiences, making it that much easier to ask for an explanation of health benefits and then simply ask to be re-enrolled.
  • IT service desks can also deploy voice AI agents to handle routine requests, making the mundane feel more personal across the enterprise.

Where can voice push beyond obvious to deliver something more unexpected?

  • Marketing-led environments are ripe for voice AI agents.
  • Content transforms when a customer can ask questions of a website, taking the marketing drive for content personalization to a new, far more interactive level.
  • Sales and e-commerce motions can also be transformed by voice AI agents once the underlying AI models have access to real-time product and pricing data.

Creativity is the only real limit on where and how voice becomes tomorrow’s interface. The questions experience leaders can now ask start with: where could we just have a conversation? Can a field service technician simply call in and describe the work just completed without typing a single word or tying up a human dispatcher’s time? Could a customer ask about the latest deals and promotions before their shopping gets underway? Could employees just ask for time off?

But there is another list of questions to ask when deciding which conversations to deploy. Does the technology being embedded have the guardrails and controls to ensure safe, operationally observable conversations? Can you customize and control the voices now engaging directly with the people who matter most to the business, namely customers and employees? And while teams can try to script empathy, can the voice AI being deployed be trained to be funny?

Thanks to voice AI, organizations have an opportunity to speak up. A brand’s voice can greet a customer in the channels and interfaces of the customer’s choosing. Passive recordings become a thing of the past as voice AI drives new conversations that fill the experience gap without sacrificing policies or cost. So, welcome to the new age of experience, where modern AI-empowered flows effectively guide a customer from curiosity to cart without lifting a finger. Thanks to voice AI, it’s time to speak up without straining our voices.

 

Image AI generated using Adobe Firefly Image 4. No real raccoons were asked to wait on hold.


Live from AWS re:Invent: Partner Award Interviews

Larry Dignan sat down with this year's AWS Partner Award winners, each offering a unique perspective on how AWS partnerships are transforming cloud and AI and driving customer outcomes on a global scale.

Here’s what our guests had to say:

  • Julia Chen (AWS Partner Core) – Shared what makes the AWS partner ecosystem thrive, emphasizing innovation, customer obsession, and the launch of new AI competencies and managed service offerings.
  • Jennifer Jackson (Accenture, Global Consulting Partner of the Year) – Reflected on Accenture’s 15-year journey with AWS, co-innovating to deliver fraud detection and data marketplace solutions that significantly improve client efficiency and accuracy using Gen AI.
  • Maureen Little (WRITER, GenAI Innovator of the Year) – Explained how Writer has focused on enterprise AI from day one, building with AWS to deliver secure, flexible platforms that empower creative end users while giving IT full control and robust governance.
  • Olivier Zieleniecki (MongoDB, Technology Partner of the Year) – Highlighted MongoDB Atlas’s deep integration with AWS, enabling customers to accelerate AI and modernization projects with impressive, real-world business results.
  • Chris Stewart (CrowdStrike, Marketplace Partner of the Year) – Talked about crossing $1B in AWS Marketplace transactions and how putting customers at the center—plus securing AI and agentic workflows—drives CrowdStrike’s approach and success.
Watch on ConstellationTV: https://www.youtube.com/embed/bMaDmBwVLLQ

Adobe ups outlook for fiscal 2026

Adobe reported better-than-expected fourth quarter results as the company saw strong adoption of its AI-driven products.

The company reported fourth quarter earnings of $1.86 billion, or $4.45 a share, on revenue of $6.19 billion, up 10% from a year ago. Non-GAAP fourth quarter earnings were $5.50 a share.

Wall Street was looking for Adobe to report non-GAAP earnings of $5.40 a share on revenue of $6.11 billion.

CEO Shantanu Narayen said the company is advancing its generative and agentic AI platforms and targeting double-digit annual recurring revenue growth in the fiscal year ahead.

Adobe’s recent acquisition of Semrush will bolster the digital experience platform, said Anil Chakravarthy, President of Adobe’s Digital Experience unit. “The pending acquisition of Semrush, which we announced a few weeks ago, brings complementary assets to help us address marketers’ growing need for sustained brand relevance in AI search,” he said.

Narayen said the vision for business professionals and consumers is to deliver "new conversational and agentic interfaces in Adobe Reader, Acrobat and Express to provide a freemium integrated experience for billions of users."

The vision for creators is to "deliver the most comprehensive power and precision applications from ideation and creation to production and delivery," said Narayen. He said the goal for marketing pros is to deliver the tools to "create a brand or address the expanding needs of the content supply chain in the era of AI to deliver customer experience orchestration solutions."

For fiscal 2025, Adobe reported earnings of $16.70 a share on revenue of $23.77 billion, up 11% from a year ago.

Adobe saw strong demand in all of its customer groups, according to CFO Dan Durn. Subscription revenue for Adobe was $5.96 billion, up 12% from a year ago. Business professional and consumer subscription revenue was up 15% from a year ago, and creative and marketing professional subscription revenue was up 11%.

As for the outlook, Adobe projected first quarter non-GAAP earnings of $5.85 a share to $5.90 a share on revenue of $6.25 billion to $6.30 billion. For fiscal 2026, Adobe projected non-GAAP earnings of $23.30 to $23.50 per share on revenue of $25.9 billion to $26.1 billion.
