Editor in Chief of Constellation Insights
Constellation Research
Larry Dignan is Editor in Chief of Constellation Insights at Constellation Research, where he leads editorial coverage focused on enterprise technology, digital transformation, and emerging trends shaping the future of business. He oversees research-driven news, analysis, interviews, and event coverage designed to help technology buyers and vendors navigate complex markets with clarity and context. ...
Fidji Simo, Instacart CEO and incoming CEO of Applications at OpenAI, penned her first missive and laid out an optimistic vision for AI.
Describing herself as a "pragmatic technologist," Simo said OpenAI has to get AI right so that it benefits as many people as possible. "Every major technology shift can expand access to power—the power to make better decisions, shape the world around us, and control our own destiny in new ways. But it can also further concentrate wealth and power in the hands of a few—usually people who already have money, credentials, and connections," said Simo.
She added that OpenAI has to be intentional about how it builds and shares its AI. Here are the takeaways from Simo's introductory memo.
Simo believes in OpenAI's potential. Yes, there's an AI hiring spree going on, but the most interesting item in Simo's hiring is this: Simo is the CEO of a publicly listed company with a lot of runway ahead and has decided to leave to ride shotgun alongside an entrenched CEO. There aren't many executives who would make that move. Simo technically is CEO of Applications, but Sam Altman leads the charge.
The vision aligns with OpenAI's verticals. AI can change healthcare outcomes. AI can democratize education. It's not surprising that OpenAI is playing in those same areas.
Simo is likely to be a good buffer between OpenAI, Altman and sometimes tone-deaf nerds. Simo's memo noted that she hired a tutor for biology and genetics and has a business coach. Humility and AI tech bros typically aren't mentioned in the same sentence. Before becoming CEO of Instacart, Simo was Vice President and Head of the Facebook app and also founded the Metrodora Institute, a multidisciplinary medical clinic and research foundation dedicated to the care and cure of neuroimmune axis disorders.
OpenAI is aiming to be a mass market AI coach that can be used for knowledge, health, creative expression, economic freedom, time and support.
A roadmap for applications will emerge to address those broad themes outlined in Simo's memo. As LLM capabilities all converge to be good enough, it'll be critical to create applications and ecosystems for staying power. Simo's job will be to develop OpenAI apps that can compete with frenemies Microsoft, Google and a bevy of others across the consumer and enterprise markets.
Verizon has rolled out AI customer experiences and is betting that the move will win accounts in a hotly contested wireless services market. The company announced a partnership with Google Cloud in April to deliver AI experiences with Gemini models and Verizon went live June 24.
Hans Vestberg, CEO of Verizon, said the company was leveraging AI to make customer experiences "simpler, faster and more rewarding." The June rollout includes a personalized expert for complex issues using Google Cloud and Gemini. A new Customer Champion will make sure issues are resolved.
Verizon also launched 24/7 live support and infused the new My Verizon app with an AI-powered Verizon Assistant and Savings Boost.
When Verizon announced better-than-expected second quarter earnings, Vestberg and CFO Anthony Skiadas were asked about returns on those Google Cloud investments.
"The wireless market remains competitive, and we continue to take a strategic and segmented approach, maintaining our financial discipline," said Vestberg. "As expected, postpaid phone churn remained elevated this quarter, reflecting the lingering effects of our pricing actions and ongoing pressure from federal government accounts. We're actively focused on improving retention by strengthening our value propositions, and leveraging our AI-powered customer experience innovations."
In the second quarter, consumer wireless retail postpaid churn was 1.12% and wireless churn overall was 0.90%. A year ago, retail postpaid phone churn was 0.85%. A year ago, Verizon didn't split out its metrics for business and consumer. Verizon reported adjusted earnings of $1.22 a share on revenue of $34.5 billion, up 5.2%, in the second quarter.
Vestberg said Verizon has seen upgrades decline in year-over-year comparisons in 8 of the last 9 quarters. Verizon did see an uptick in the second quarter, and it is combining customer wireless plans with fixed wireless access and fiber broadband. Fixed wireless access accounts have now topped the 5 million subscriber mark.
Skiadas said Verizon is focused on keeping customers and courting new ones while preserving profits. "Volume growth is only valuable when aligned with our disciplined financial framework," he said. "We have taken a series of actions to address our elevated churn. On June 24, we launched initiatives designed to improve the customer experience, including leveraging AI for more personalized support. In addition, we continue to enhance our value proposition and build customer loyalty through the best value guarantee. We provide exclusive access to the best events and experiences and our refresh app helps customers maximize the value of their plans."
Vestberg emphasized that Verizon isn't going to sacrifice financials to acquire customers. He added that Verizon will turn up promotions when there are opportunities to acquire high-quality customers and let others go. With churn, Vestberg said he is optimistic about AI-driven experiences and "very encouraged about what the team is doing and how they are working on the loyalty and retention of our customers."
Verizon was also clear that retaining customers was paramount. Vestberg noted the following:
AI tools are critical, but Verizon has also retooled customer care processes. "When it comes to the process, we now will have a customer care employee following any request or a complaint from our customers all the way. So we actually finish it out with the same person starting and ending and also having uptake," said Vestberg.
24/7 customer service is also enabled by AI. "We're giving our customer care employees an AI tool, so they can treat our customer better and know their problems better because this could be stressful," he said.
Stores matter. Vestberg said 93% of the population in the US has a Verizon store within 30 minutes.
Large language models (LLMs) have reached the phase where advances are incremental as they quickly become commodities. Simply put, it's the age of good enough LLMs where the innovation will come from orchestrating them and customizing them for use cases.
This development is great for enterprises, which will be able to buy perfectly serviceable private label, rightsized and optimized models without worrying about falling behind in 10 minutes. Foundation models are reaching the point where for some use cases there won't be much improvement with upgrades. Do you really care if a new LLM is 0.06% better in math or reasoning relative to another you plan to use in the call center on the cheap?
Use cases for documentation, summarization, extracting information and spinning up content aren't going to see big improvements with model advances. In other words, thousands of enterprise use cases can leverage generic LLMs. The model giants are almost confirming that they’ve hit the wall since they’re competing on personality, snark and faux empathy in LLMs.
The enterprise value is going to be delivered by the orchestration, frameworks and architecture that mix and match LLMs based on specialties.
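That mix-and-match orchestration can be sketched as a simple router that picks a model by specialty and cost. Everything below (the model names, specialties and per-token prices) is invented for illustration; it shows the pattern, not any vendor's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ModelSpec:
    name: str
    specialties: set[str] = field(default_factory=set)
    cost_per_1k_tokens: float = 0.0

# A registry of interchangeable "good enough" models; in practice these
# would wrap API clients for different LLM providers.
REGISTRY = [
    ModelSpec("summarizer-small", {"summarization", "extraction"}, 0.10),
    ModelSpec("coder-medium", {"code"}, 0.50),
    ModelSpec("generalist-large", {"summarization", "extraction", "code", "reasoning"}, 2.00),
]

def route(task: str) -> ModelSpec:
    """Pick the cheapest registered model that covers the task."""
    candidates = [m for m in REGISTRY if task in m.specialties]
    if not candidates:
        raise ValueError(f"No model registered for task: {task}")
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("summarization").name)  # → summarizer-small (cheapest that qualifies)
print(route("reasoning").name)      # → generalist-large (only model that qualifies)
```

The point of the pattern is that the registry, not the application code, absorbs model churn: swapping in next quarter's LLM is a one-line change.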
Here's a look at some developments that point to the age of incremental for LLMs.
Zoho launches Zia LLM, which is essentially a disruptive private label LLM family. Zoho is a company that makes you really question your SaaS bill. And now Zoho is infusing its platform with its Zia LLM, AI agents and orchestration tools. Zoho said by controlling its own LLM layer it can optimize use cases, control costs and pass along savings to customers.
Raju Vegesna, Chief Evangelist at Zoho, said the company isn't initially charging for its LLM or agents until it has a better view of usage and operational costs. "If there's a big operational resource needed for intensive tasks we may price it, but for now we don't know what it looks like so we're not charging for anything," he said.
Just like consumers are going to private label brands to better control costs, enterprises are going to do the same. The path of least resistance for Zoho customers will be to leverage Zia LLMs.
Amazon Web Services launches customization tools for its Nova models. Nova is a family of AWS in-house LLMs. I overheard one analyst nearly taunt AWS for having models that don't make headlines. That snark misses the point if Nova offers good enough performance, commoditizes that LLM layer and controls costs.
The Nova news was part of AWS Summit New York that focused on fundamentals, architecture and meeting enterprise customers where they are. As most of us know, the enterprise adoption curve is significantly slower than the vendor hype cycle.
AWS said Nova has 10,000 customers and the cloud provider plans to land more with optimization recipes, model distillation and customization tools to balance price and performance. Nova will also get customization on-demand pricing for inference.
With those two private label moves out of the way, there are developments out of China and Japan worth noting.
Moonshot AI, a Chinese AI startup backed by Alibaba, launched Kimi K2, a mixture-of-experts model with 1 trillion total parameters and 32 billion activated parameters. Moonshot AI is offering a foundation model for researchers and developers and a tuned version optimized for chat and AI agents.
Kimi K2 is outperforming Anthropic and OpenAI models in some benchmarks and beating DeepSeek-V3. The real kicker is that Moonshot is delivering models that cost a fraction of what US proprietary models cost for training and inference.
Sakana AI in a research paper outlined a method called Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search) that uses a collection of LLMs to cooperate, perform trial-and-error and leverage strengths to solve complex problems.
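At a very high level, that approach looks like a search loop that alternates between sampling fresh candidates from different models ("going wider") and refining the current best one ("going deeper"). The toy below replaces LLMs with stand-in functions on a numeric task; it illustrates the shape of the idea only, not Sakana AI's actual AB-MCTS algorithm, which adapts the branching decision statistically rather than by coin flip.

```python
import random

def score(answer, target=42.0):
    """Task-specific evaluator: higher is better (negative distance)."""
    return -abs(answer - target)

# Stand-in "models": each proposes candidates from a different region,
# mimicking LLMs with different strengths.
MODELS = {
    "model_a": lambda rng: rng.uniform(0, 50),
    "model_b": lambda rng: rng.uniform(25, 100),
}

def refine(answer, rng):
    """'Go deeper': perturb an existing candidate locally."""
    return answer + rng.uniform(-5, 5)

def search(rounds=50, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        if best is None or rng.random() < 0.5:
            # Go wider: ask one of the models for a fresh proposal.
            name = rng.choice(list(MODELS))
            candidate = MODELS[name](rng)
        else:
            # Go deeper: refine the best candidate found so far.
            candidate = refine(best, rng)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

best, best_score = search()
print(f"best guess {best:.1f} (score {best_score:.1f})")
```

The trial-and-error loop plus a scorer is the key ingredient; the research contribution is in deciding wider-versus-deeper adaptively and picking which model to ask at each node.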
Add it up and the new age of incremental for LLMs is why I’m wary of the gap between AI euphoria and enterprise value. It’s not a slam dunk that buying the latest Nvidia GPUs, paying $100 million to AI researchers to create super teams and building massive data centers is going to yield the breakthroughs being pitched.
It appears that LLM giants know where the game is headed as they build ecosystems and complementary applications around their foundation models. For instance, Anthropic is best known for its Claude large language model (LLM), but its enterprise software ambitions are clear as the company builds out its go-to-market team. OpenAI is focusing on ChatGPT advances for headlines, but the big picture is more about AI agents.
Systems integrators and services companies are launching AI agents, releasing frameworks and trying to help enterprises build multi-agent systems. The big question is whether AI agents turn out to be a boon or a bust for systems integrators in the long run.
In recent days, we've heard multiple systems integrators, with more on tap, talk about agentic AI. The extension of integrators into agentic AI makes sense given that they have the expertise to work across systems and processes. Consider:
Kyndryl, a services provider focused more on infrastructure, released the Kyndryl Agentic AI Framework, which orchestrates and dispatches AI agents that respond to shifting conditions. The framework is a way for Kyndryl to move up the stack to higher level offerings because it moves the integrator beyond infrastructure to workflows and processes.
According to Kyndryl, its Agentic AI Framework leverages algorithms, self-learning, optimizations and AI agents to run applications and processes.
Wipro said on its first quarter earnings call that enterprises are shifting discretionary funds to data and AI modernization. "AI is no longer a niche. It's becoming essential to how businesses operate at scale," said Wipro CEO Srinivas Pallia.
He added:
"Our AI capabilities are integrated into both industry and cross-industry solutions. By combining domain expertise with AI, we are able to deliver value through solutions such as hyper-personalized wealth management and predictive industrial insights. We have deployed over 200 AI-powered agents using advanced technologies from leading hyperscalers. These agents enable smarter lending, intelligent claims processing and autonomous network management."
At AWS Summit New York, there were multiple partners talking about the foundation needed for AI and agentic AI adoption. Deloitte's Chris Jangareddy, Managing Director of the company's AI, GenAI and Data Engineering, said the company will have nearly 180 agents on AWS Agent Marketplace.
According to Jangareddy, these agents are aimed at business problems, processes and specific tasks. Deloitte's AI agents are designed to be reusable Lego blocks that will ultimately make up multi-agent systems. One offering is AI Advantage for CFOs, which serves as a digital twin for the CFO, he said. The agents are built on Deloitte's institutional knowledge base of queries that are now prompts.
"These are not licensed, but are for clients," said Jangareddy, who noted that Deloitte is looking to transform its model from traditional billing to an outcome-based approach.
In a demo, Deloitte outlined Zora AI, which is part of an effort to produce AI agents that are product offerings. Deloitte views AI agents as digital labor that focuses on executing on processes. Zora AI is also integrated with SAP Joule.
AWS’ Brian Bohan, Director, Global Lead, Consulting Partner Center of Excellence, said during a talk that companies automating multiple business processes with agentic AI are seeing 30% to 40% productivity gains. He expects more efficiency to be unlocked.
Why? The cost of models is falling as are training and inference expenses. However, many AI projects aren't scaling due to a lack of architecture, data infrastructure and expertise. "There's just the complexity of integration," said Bohan.
Bohan added that change management, workflow optimization and the pace of innovation are all challenges. Enterprises will get to multi-agent systems across functions like finance, procurement and supply chain.
It's clear that systems integrators see AI agents as a booming business as well as a way to transform their businesses. The flip side of this transformation is that AI agents may ultimately hamper the systems integrator model.
Constellation Research analyst Holger Mueller said:
"As with any new technology, enterprises are looking at system integrators for help adopting them, and AI is no different here. The question is whether AI is so strategic that enterprises need AI skills in-house, or can they rely on the integrator model. The experience depth is low for anyone as no one has more than two years of genAI experience. Or more than 10 projects. It is likely going to be strategic for enterprises to have their own AI capacity and competency, especially once we move to inter-enterprise agents and the uptime and capability of frontline and backend agents determine success."
Delta Air Lines is pricing about 3% of its domestic fares with an artificial intelligence system and plans to get to 20% by the end of 2025.
Speaking on Delta's second quarter earnings conference call, Delta President Glen Hauenstein gave an update on the company's plan to leverage AI-driven dynamic pricing.
"We're optimizing revenue through our partnership with Fetcherr, leveraging AI-enhanced pricing solutions. While we are still in the test phase, results are encouraging. You have to train these models and give them multiple opportunities to provide different results. We like what we see and we're continuing to roll it out, but we're going to take our time and make sure that the rollout is successful."
Hauenstein added that the more data and cases Delta feeds to Fetcherr, the more it learns and optimizes offers. If Delta gets to the 20% mark, it should be able to scale dynamic pricing at a faster clip.
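Fetcherr's models are proprietary, but the learn-as-you-go dynamic Hauenstein describes can be illustrated with a toy epsilon-greedy price test: offer fares, record bookings, and shift toward the fare with the best revenue per offer as data accumulates. The fares and demand curve below are invented for the example and reflect nothing about Delta's or Fetcherr's actual systems.

```python
import random

PRICES = [199, 249, 299, 349]  # hypothetical fare points

def purchase_probability(price):
    """Hidden demand curve the learner never sees directly."""
    return max(0.0, 1.0 - (price - 150) / 250)

def simulate(rounds=100_000, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    revenue = {p: 0.0 for p in PRICES}
    offers = {p: 0 for p in PRICES}
    for _ in range(rounds):
        if min(offers.values()) == 0 or rng.random() < epsilon:
            price = rng.choice(PRICES)  # explore: try any fare
        else:
            # Exploit: offer the fare with the best revenue per offer so far.
            price = max(PRICES, key=lambda p: revenue[p] / offers[p])
        offers[price] += 1
        if rng.random() < purchase_probability(price):
            revenue[price] += price  # a booking at this fare
    return max(PRICES, key=lambda p: revenue[p] / offers[p])

print(simulate())
```

The toy makes Hauenstein's point concrete: the estimate of revenue per offer sharpens only as more cases flow through, which is why feeding the system more of the fare base should speed up optimization.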
On its conference call, Delta emphasized that it continues to roll out its technology including Delta Concierge, a virtual personal assistant built into the Fly Delta app launching later this year.
Delta is also using AI to optimize maintenance and resource availability.
But the biggest wins appear to be revenue optimization via its partnership with Fetcherr. In the second quarter, Delta operating revenue was up about 1% from a year ago to $15.5 billion. Hauenstein said demand stabilized late in the second quarter and business travel was solid. "During the quarter, demand trends stabilized at levels that are flat to last year. Our teams did a great job optimizing revenue performance in this environment by leveraging Delta's structural advantages and engaging customers beyond flight to generate a revenue premium to the rest of the industry," he said. "Diverse, high-margin revenue streams continue to show resilience, growing mid-single digits year-over-year and driving double-digit operating margins. Premium revenue grew 5% over the prior year, outpacing main cabin."
The plan for Delta is to expand profit margins on multiple fronts. The company restored the financial guidance it cut in the first quarter, projecting earnings of $5.25 a share to $6.25 a share and free cash flow of $3 billion to $4 billion.
CxOs are being barraged with constant change where AI time frames are compressed to days before there's a new development. The breakneck pace can freeze enterprise technology buyers since they can't spend on every new development, need to show returns and don’t want AI tech debt.
At AWS Summit New York, the focus was on putting fundamental approaches in place to give enterprises the structure to adopt AI agents.
Angie Ruan, CTO of the Capital Access Platforms division at Nasdaq, summed up the current AI situation. "Technology used to operate over a decade. If you weren't upgrading something in five years you were behind. Later it became 18 months. Last year it was six months. Today my mindset is you have five days before you don't know what's going on and you're behind," said Ruan. "I have never seen a pace as fast as AI."
Ruan added that there's a balancing act. "Stay calm, be strategic and be agile so you can be ready to pivot and take very practical delivery steps," she said.
Practical real-world returns for AI projects--generative, agentic and everything in between--were a recurring theme at AWS Summit New York. AWS rolled out a bevy of updates and features including Amazon Bedrock Agent Core, customizable Nova models and lots of talk about frameworks for reliability, security, observability and agility.
In the end, Swami Sivasubramanian, AWS VP of Agentic AI, used his keynote to return to enterprise fundamentals. Sivasubramanian's talk in New York centered on balancing innovation with foundational approaches that hold up as models and underlying technologies change. To AWS, a strong foundation and approach enable and accelerate innovation rather than constrain it.
For enterprises, a focus on fundamentals was long overdue. Do you really expect a business to swap out an LLM every time there’s a latest and greatest model that scores 0.06% better on math, coding or reasoning?
“AI agents are a tectonic change. They are a shift in how software is deployed and operated, and how software interacts with the world. Making that possible involves building on the foundations of today. In fact, in a world with agents, the foundation has become more important than ever. Things like security, right sizing, access, control and permissions and data foundations enable the right data to be used at the right time with the infrastructure that offers the right price performance,” said Sivasubramanian.
It's a message that enterprises were receptive to. “We're realistic about what AI can and cannot do. This isn't the silver bullet, but that's true of all AI systems. The value comes when you pair it with strong engineering practices,” said Matt Dimich, VP Platform Enablement at Thomson Reuters.
AWS is setting itself up for AI agent production systems where stability matters and models for most use cases are good enough to last a while. As agentic AI becomes more enterprise ready, basics such as identity, authentication and stability matter.
Constellation Research analyst Holger Mueller said:
“Amazon is making its step into the agent platform business with Agent Core. The good news for Amazon and its customers is that the traditional small, atomic services approach that comes from the AWS DNA, may be exactly the right thing for enterprises to build their first agents. AWS is enabling AI agents in a modular, individual and use case driven way - picking from Agent Core what they need. Adoption in the next few months will be interesting to watch. On the infrastructure side, the S3 vectors announcement is huge, as it makes digital assets stored in S3 available for AI.”
The AI ROI mismatch
Rohit Prasad, SVP and Head Scientist for AGI at Amazon, said enterprises have been struggling with an expectations gap between AI deployments and real returns.
"As exciting as AI is today, ultimately the real world is the real benchmark," said Prasad. "You hear about these models that come out every day. If you're an enterprise CIO you're thinking about the practical applications. How do I make real world applications happen at scale?"
Prasad said the focus on AGI, a topic that borders on obsession in the AI industry, is often misdirected. "I want to level set on AGI. I think the whole conversation about who gets to AGI first or whether you can get to it is meaningless," said Prasad, noting that Amazon is chasing AGI and building out a full layer stack. "I don't think there will be a switch when we are AGI. Let's focus on whether we can make AI useful in real life. And can we make the complex simple?"
AWS announced the ability to customize its Nova models for enterprise use cases. AWS will provide optimization recipes, model distillation and customization to balance cost and performance.
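Model distillation, one of the techniques AWS cites, trains a small "student" model to match a large "teacher" model's softened output distribution. Below is a minimal sketch of the core loss only: temperature-scaled softmax plus KL divergence, with made-up logits standing in for real model outputs. It is illustrative, not AWS's recipe.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s) if ti > 0)

teacher = [3.0, 1.0, 0.2]   # hypothetical logits from the large model
aligned = [2.9, 1.1, 0.3]   # student closely matching the teacher
diverged = [0.1, 2.5, 1.0]  # student disagreeing with the teacher

# The loss is near zero when the student mimics the teacher and grows as
# the student's distribution drifts away.
print(distillation_loss(teacher, aligned) < distillation_loss(teacher, diverged))  # → True
```

Minimizing this loss over a training set is what lets a cheaper, smaller model approximate a larger one for a narrow use case, which is the price-performance trade the Nova tooling is aimed at.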
Prasad noted that every enterprise needs to think about AI returns in terms of workflows and processes. "It comes down to measurement. You can only improve on things you measure. Look at the success criteria for every workflow."
He added that metrics can't be stationary because your organization constantly changes. "Just go with very open eyes that in a lot of applications at scale, what you measure on a daily basis also needs to evolve over a certain time period," said Prasad.
In terms of AI agent value, measurement will be critical. Prasad said:
"The bar to evaluate the agent should be the same as the bar that is used to evaluate a human from a perspective of safety and reliability. I think it's the same thing you want in a reliable human being. I want you to be reliable, which means it's a function of accuracy and consistency and robustness to the environment. If you want to be safe, you should have the values that you want your brand to be about, what you want your values to humanity and society to be. So AI agents should be held to the same bar."
Measuring AI value
Erin Kraemer, Senior Principal Technical Product Manager at AWS Agentic AI, said that AI has the potential to fundamentally change how value is delivered.
The problem? Most companies don't properly measure AI impact. "One of the missteps that I'm seeing is how we're measuring AI impact right now and how we're talking about it," said Kraemer. "I'm not sure we're doing it the right way. Organizations that figure out how to thoughtfully apply AI to meaningful problems and measure success, are the ones that are going to adapt quickly and position themselves for the future."
Amazon's approach is to focus on controlled inputs and continual improvement to solve problems whether it's scaling infrastructure, managing product catalogs or admin tasks.
Key takeaways:
Focus on business outcome metrics over volume when it comes to AI, she said. Too much of the conversation about AI impact revolves around volume-based metrics, especially when it comes to code.
Kraemer said business outcomes trump volume. "I'm going to argue that, rather than volume, value should ultimately be our metric of success. So in my mind, volume, it's an output focus, and it's not even probably the right outcome."
Indeed, Kraemer said the stat that irks her is the commonplace claim that 30% of code is written by AI. "The 30% number. I hate this number so very much. It's a fundamentally flawed number. It tells us very little about what's going on, our systems, our customers," said Kraemer.
Focus on the bottlenecks. She said enterprises need to see AI through business outcomes. Specifically, AWS looks to AI to address bottlenecks in processes. "If bottlenecks tend to be around human reasoning, there's a reasonably good chance that AI is a well-placed solution to that," said Kraemer.
Specifically, human bottlenecks have been an issue for Amazon throughout its history. She said:
"We love automation a lot. We like streamlined processes. We have some pretty massive, complex systems to handle those processes, but for a lot of our work, where we ultimately get stuck is in humans. Human reasoning capability has persistently been our bottleneck. It's not the worst bottleneck to have, but whether it's software upgrades, cleaning up catalog content defects in our shipping network, we either had to build very complicated and sometimes fragile systems, or we literally could not build systems that could scale through bottlenecks. What we're seeing with AI is a technology that's starting to blow by some of these bottlenecks."
Problem-specific metrics demonstrate real value. For code-related AI, Kraemer asked: "Are we fixing defects faster? Are we improving the security posture? Are we able to build things to delight our customers at a rate that we were never able to do before?"
Amazon is looking at AI through a customer experience lens too. Here's a look at specific metrics AWS is using to gauge AI returns.
Software development:
Defect resolution speed.
Development velocity.
Infrastructure cost savings. AWS saved "roughly $260 million in AI-assisted Java upgrades," said Kraemer.
Developer time savings. AWS saved an estimated 4,500 developer years of effort on Java upgrades.
Customer experience:
Catalog quality improvements.
Contact per order decreases.
Customer satisfaction.
Knowledge work:
Time saved using AI to answer more than 1 million internal developer questions.
Research time and data to decision time.
Amazon's approach to AI internally
A panel representing technology leaders from various Amazon units--Amazon Ads, Alexa, technology infrastructure and other areas--talked about AI being integrated into their products and metrics for success.
Here are a few examples:
Amazon Connect uses genAI to enhance customer engagement and automation with data context as well as entity resolution.
AI is generating images and video for Amazon Ads and its AI services.
Amazon Business is using AI to automate business verification, improve accuracy and reduce manual review time. Search relevance and bulk buying reviews are also designed to improve procurement experience for Amazon Business customers.
AWS Marketplace is using AI for seller onboarding and funding approvals and offering a comparison engine for product insights.
Alexa is getting a rebuild for more natural interaction and agentic AI actions.
The metrics for these projects revolve around cost, friction elimination and customer experience. As you deploy these key performance indicators and metrics, keep an experiment-based mindset focusing on customer needs and iterate.
Lak Palani, Senior Manager, Product Management Tech at Amazon Business, said:
"My recommendation is straightforward. Don't use AI just for the sake of using it. Find the right business cases where AI will really add value. Start small, measure results and remember it's an iterative process. Then you can scale success. Stay super focused on the business value and customer experience."
There’s a method to AWS' meat-and-potatoes focus on agentic AI and fundamentals: Enterprise adoption of AI agents will trail the technology advances and vendor marketing speak. AWS is meeting customers where they are right now.
Zoho has launched its own large language model called Zia LLM, 40 pre-built Zia Agents, a no-code agent builder with Zia Agent Studio and a Model Context Protocol (MCP) server that will connect its AI actions with third-party agents. The combination means Zoho is looking to democratize and differentiate with an AI strategy that revolves around developing its own right-sized models, optimizing them and passing the savings on to customers.
For Zoho, the series of launches fleshes out its agentic AI strategy with the aim of democratizing various use cases, workflows and automation for enterprises of all sizes.
CEO Mani Vembu said Zoho's goal was to build foundational AI internally to better provide value and an integrated approach that "allows us to bring customers around the world cutting edge toolsets at a lower cost."
Zoho's AI strategy is to prioritize privacy and value. Its generic AI models across the Zoho platform aren't trained on consumer data and don't retain customer information. The goal is to use right-sized models that don't break the bank.
Zia LLM was trained and built entirely in India using Nvidia's platform. The foundational model was trained with Zoho product use cases in mind and can handle structured data extraction, summarization, RAG and code generation.
In addition, Zia LLM is a family of three models, with 1.3 billion, 2.6 billion and 7 billion parameters, that Zoho says performs competitively against comparable open source models. Zoho plans to mix and match models for the right context and power-to-performance balance.
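Zoho's mix-and-match idea, routing each task to the smallest model that can handle it, can be sketched as follows. The task-to-size mapping is an assumption for illustration and not Zoho's published routing logic.

```python
# Hypothetical router over the three announced Zia LLM sizes
# (1.3B, 2.6B, 7B parameters). Model IDs and task floors are
# illustrative, not Zoho's actual names.
ZIA_MODELS = {1.3: "zia-1_3b", 2.6: "zia-2_6b", 7.0: "zia-7b"}

# Assumed minimum model size (in billions of parameters) per task,
# covering the workloads the article lists for Zia LLM.
TASK_MIN_SIZE = {
    "structured_extraction": 1.3,
    "summarization": 2.6,
    "rag": 2.6,
    "code_generation": 7.0,
}

def pick_model(task: str) -> str:
    """Return the smallest model that meets the task's size floor,
    the power-to-performance balance described above."""
    floor = TASK_MIN_SIZE.get(task, 7.0)  # unknown tasks get the largest
    size = min(s for s in ZIA_MODELS if s >= floor)
    return ZIA_MODELS[size]
```

The design point is that routing happens per task, not per customer, so cheap extraction calls never pay for 7B-parameter inference.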
Zoho also announced two Automatic Speech Recognition (ASR) models, for English and Hindi, that are optimized for low compute resources. Zoho plans to support more languages in the future.
Zoho said it will still support multiple LLM integrations on its platform, including OpenAI's ChatGPT, Llama and DeepSeek, but reckons Zia LLM will offer a better privacy profile since customer data will remain on Zoho servers. Part of the cost equation for Zoho customers will be leveraging Zia LLM and AI agents without sending data to cloud providers.
Raju Vegesna, Chief Evangelist at Zoho, said the company isn't initially charging for its LLM or agents until it has a better view of usage and operational costs. "If there's a big operational resource needed for intensive tasks, we may price it, but for now we don't know what it looks like so we're not charging for anything," he said.
So far, Zia LLM has been deployed in Zoho data centers in the US, India and Europe. The model is being tested for internal use cases across Zoho's app and service portfolio. Zoho said Zia LLM will be available in the months ahead and feature regular updates to increase parameter sizes by the end of 2025.
Zoho said it is also planning to launch a reasoning language model (RLM).
“It’s good to see Zoho charting its unique course into the AI era and now adding its in-house Zia models,” said Constellation Research analyst Holger Mueller. “With its focus on privacy and cost effectiveness, in-house built LLMs are the right strategy for Zoho. Now Zoho has to show that it can keep up with the LLM competition.”
Why build your own LLM? B2B models are different
Zoho decided to build its own LLM from scratch for multiple reasons:
Investing in its own LLM gives Zoho downstream effects that improve its platform and enable new features.
Zoho already was having success with dozens of AI models that weren't LLM-based.
The company wanted control of the LLM layer since it would be a core part of the platform and the company would need to continually tweak. "We don't like black boxes," said Vegesna.
Cost to performance is critical for enterprises and B2B software providers. By developing its own LLM, Zoho doesn't have to pass on additional costs to customers.
Although Zoho started LLM development within the last two years, two developments accelerated the pace. First, Zoho partnered with Nvidia. "B2B models are different than B2C and part of the technical partnership was about knowledge sharing," said Vegesna, who said Nvidia was more experienced with B2C. "With B2B, you don't worry about broader concepts as much. You narrow things down because you don't need the biggest model for every single task."
Vegesna said open source models such as Llama and DeepSeek, both supported by Zoho, also provided insights that improved Zia LLM after development had started; work on Zia LLM began before those open source options appeared.
Zoho also had insights on how to develop Zia LLM from its own visibility into how APIs were used. Narrow models and non-LLMs were often used. Vegesna said Zoho is focused on using the right model for the right costs and optimizing for workflows.
"For the majority of use cases, narrow to smaller models will do the job," said Vegesna.
Having observability into its own platform enabled Zoho to optimize Zia LLM for the most common scenarios. That optimization should keep costs low. Vegesna said Zoho will continue to enable customers to use third party LLMs and the company hosts top open source models such as Llama, DeepSeek and Alibaba's Qwen.
"The customer will decide on the models used and Zoho LLMs will be an option," said Vegesna. "We have customers that don't want to rely on third party LLMs and we saw many of them taking open source models and optimizing them for their environments. Now we have that core technology, we can play the long game."
The agentic AI play
Zoho's strategy for AI agents is to offer dozens of prebuilt agents that can perform actions based on enterprise roles such as sales development, customer support and account management. The company's 40 prebuilt agents will be native in Zoho Marketplace and available for quick deployment in Zoho apps.
Zia Agents can be used within a Zoho app, across the company's stack of 55 applications or customized to specific use cases.
A few of the prebuilt Zia Agents include:
A new version of Ask Zia, which is a conversational assistant for data engineers, analysts and data scientists, but can democratize information for business users. Ask Zia is set up to address pain points faced by each persona.
Customer Service Agent, which processes incoming customer requests, understands context and answers directly or offloads to a human. This agent will be integrated into Zoho Desk.
Deal Analyzer, which provides insights on win probability and next-best actions.
Revenue Growth Specialist, which looks for opportunities to upsell and cross-sell existing customers.
Candidate Screener, which identifies candidates for job openings based on role, skills, experience and other attributes.
Ask Zia agents for finance teams and customer support teams will be added.
Building and connecting agents
A big part of Zoho's AI agent plan is Zia Agent Studio, which was announced earlier this year, but has been revamped to be fully prompt-based with an option for low code.
Zia Agent Studio can build agents that can be deployed autonomously, triggered with rule-based automation or called into customer conversations.
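The three deployment modes above (autonomous, rule-triggered, called into conversations) can be illustrated with a minimal sketch. Every class, field and tool name here is a hypothetical stand-in, not Zoho's actual Agent Studio API.

```python
from dataclasses import dataclass, field

# Illustrative stand-in for a prompt-defined agent, mirroring the
# prompt-based building described above. Names are assumptions.
@dataclass
class Agent:
    name: str
    prompt: str
    tools: list = field(default_factory=list)

# One of the deployment modes: a rule-based automation trigger that
# invokes an agent when an event matches.
@dataclass
class RuleTrigger:
    field_name: str
    equals: str
    agent: Agent

    def fire(self, event: dict):
        # Invoke the agent only when the rule matches; otherwise no-op.
        if event.get(self.field_name) == self.equals:
            return f"{self.agent.name} invoked"
        return None

screener = Agent(
    name="candidate-screener",
    prompt="Screen candidates for the role based on skills and experience.",
    tools=["ats_search"],  # hypothetical tool name
)
trigger = RuleTrigger("event_type", "new_application", screener)
```

The same `Agent` object could equally be run autonomously on a schedule or pulled into a live customer conversation; only the invocation wrapper changes.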
Zoho is betting that its ecosystem of 130 million users, 55 apps and its own developers can fuel the Agent Marketplace to cover multiple use cases. Agent Marketplace now has a dedicated section for AI agents.
The company said its MCP server is designed to work across multiple applications and runs natively in Zia Agent Studio. In early access, Zoho is exposing a library of actions from more than 15 Zoho applications.
Zoho said each Zia Agent will be assigned a unique ID and mapped as a digital employee so enterprises can analyze and audit performance and workflows with guardrails.
According to Zoho, Agent2Agent (A2A) protocol support will be added to enable collaboration with agents on other platforms.
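The MCP exchange implied above, where an agent discovers a server's exposed actions and then calls one, follows MCP's JSON-RPC 2.0 message shape. The action name and arguments below are hypothetical examples, not actions Zoho has published.

```python
import json

# Minimal sketch of an MCP-style exchange: a client lists a server's
# exposed tools, then calls one. Real MCP carries these JSON-RPC 2.0
# messages over stdio or HTTP; only the message bodies are shown here.
def make_request(req_id: int, method: str, params: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# Step 1: discover what actions the server exposes.
list_req = make_request(1, "tools/list", {})

# Step 2: invoke one action with arguments.
call_req = make_request(2, "tools/call", {
    "name": "crm.create_lead",              # hypothetical exposed action
    "arguments": {"email": "lead@example.com"},
})
```

Because discovery happens at runtime via `tools/list`, a third-party agent can use newly exposed Zoho actions without code changes on the client side, which is the interoperability point of the protocol.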
General availability for Zia LLM will be at the end of 2025. Zia Agents, Zia Agent Studio, Agent Marketplace and Zoho MCP Server are being rolled out to early access customers with general availability at the end of the year.
Going forward, Zoho outlined the following roadmap:
Scale Zia LLM model sizes with parameter increases through 2025.
Expand available languages used by the speech-to-text models.
Introduce a reasoning language model (RLM).
Add skills to Ask Zia with a focus on finance teams and customer support teams.
Vice President & Principal Analyst
Constellation Research
About Liz Miller:
Liz Miller is Vice President and Principal Analyst at Constellation, focused on the org-wide team sport known as customer experience. While covering CX as an enterprise strategy, Miller spends time zeroing in on the functional demands of Marketing and Service and the evolving role of the Chief Marketing Officer, the rise of the Chief Experience Officer, the evolution of customer engagement, and the rising requirement for a new security posture that accounts for the threat to brand trust in this age of AI. With over 30 years of marketing experience, Miller offers strategic guidance on the leadership, business transformation, and technology requirements to deliver on today’s CX strategies. She has worked with global marketing organizations to transform everything from…...
Vice President and Principal Analyst
Constellation Research
Holger Mueller is VP and Principal Analyst for Constellation Research for the fundamental enablers of the cloud, IaaS, PaaS and next generation Applications, with forays up the tech stack into BigData and Analytics, HR Tech, and sometimes SaaS. Holger provides strategy and counsel to key clients, including Chief Information Officers, Chief Technology Officers, Chief Product Officers, Chief HR Officers, investment analysts, venture capitalists, sell-side firms, and technology buyers.
Coverage Areas:
Future of Work
Tech Optimization & Innovation
Background:
Before joining Constellation Research, Mueller was VP of Products for NorthgateArinso, a KKR company. There, he led the transformation of products to the cloud and laid the foundation for new Business Process as a…...
New ConstellationTV drop! 👀 In episode 109, co-hosts Liz Miller and Holger Mueller unpack summer's tech news landscape, including HPE's evolution and the intersection of #AI, networking, and #cloud technologies...
Next, Holger explains the emerging AI protocol standards reshaping inter-agent communication. Learn how these frameworks prevent vendor lock-in and create more interoperable AI ecosystems. 🤔
Wrap it up with a CR #CX Convo with Adobe's Shelly Chiang about AI transforming Digital Asset Management (DAM) from a storage tool to an intelligent, strategic #content engine. Discover how modern DAM supports creativity, brand consistency, and global scalability. 🌎
Watch the full episode & subscribe to never miss a technology update!
00:00 - Meet the Hosts
01:17 - Enterprise Tech News
11:44 - AI Standards Discussion
16:55 - CR CX Convo with Shelly Chiang
30:23 - Bloopers!
Intuit Chief Data Officer Ashok Srivastava, Ph.D., said the company is now deploying AI agents across its platform, GenOS and products.
Srivastava walked through AI agents deployed on Intuit, which is built on the AWS stack. "Two weeks ago, we formally launched our agent experiences," said Srivastava, speaking during the AWS Summit New York keynote.
We've detailed Intuit's data and generative AI journey. The company has ridden inflection points by getting its data architecture right and then leveraging AI. Now Intuit is leaning into AI agents.
Srivastava kept with the practical AI theme at AWS Summit New York and said that enterprises shouldn't "become enamored with technology" and should instead focus on business goals and outcomes.
He said:
"Don't get lost in the technology. Use AI only where it's necessary and use rules. Measure your ROI (return on investment). Make progress and empower small teams to invest."
A few takeaways from Srivastava:
Intuit is focused on providing human experts because human expertise complements its AI offerings.
GenOS, Intuit's generative AI operating system, includes a runtime designed to orchestrate agents and models.
The interface with software will be conversational.
Agents are already driving returns and cash flow improvements for small business customers on Intuit.
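The orchestration idea behind a runtime like GenOS, routing a conversational request to a specialized agent when one fits and falling back to a base model otherwise, can be sketched roughly as below. All names are illustrative assumptions; Intuit has not published this interface.

```python
# Hedged sketch of conversational orchestration: keyword routing is a
# deliberately naive stand-in for whatever intent matching a real
# runtime would use.
def orchestrate(message: str, agents: dict, fallback) -> str:
    """Route a message to the first agent whose topic keyword matches,
    otherwise answer with the base model."""
    for keyword, agent in agents.items():
        if keyword in message.lower():
            return agent(message)
    return fallback(message)

# Illustrative handlers standing in for a cash-flow agent and a model.
cashflow_agent = lambda m: "cashflow-agent handled: " + m
base_model = lambda m: "base-model answered: " + m

reply = orchestrate(
    "Forecast my cashflow for Q3",
    {"cashflow": cashflow_agent},
    base_model,
)
```

The design point is that the conversational interface stays constant for the user while the runtime decides, per request, whether an agent or a plain model responds.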
Amazon Web Services launched Amazon Bedrock Agent Core, a set of tools designed to deploy and operate AI agents at scale. Agent Core includes a secure serverless runtime, access to tools and support for open-source frameworks.
In the big picture, AWS is aiming to be the best place to build and run AI agents that can carry out tasks with minimal human involvement. AWS is also looking to give enterprise customers tools that can give them stability in a rapidly changing AI environment.
During a keynote at AWS Summit New York, Swami Sivasubramanian, AWS VP of Agentic AI, laid out the cloud provider's approach to agentic AI. The four pillars of AWS' agentic AI strategy are embracing agility; ensuring security and trust; reliability and scalability; and observability.
Those pillars will be critical for enterprises given that AI agents are software systems built on foundation models that complete tasks, take actions, plan, remember context and learn with minimal oversight. AWS' argument is that the fundamentals of building AI systems are as critical as the near-weekly advances in model capabilities.
According to Sivasubramanian, the fundamental frameworks and approaches will matter even more as AI agents scale. There will be billions of AI agents working alongside humans in multiple settings and that scale will bring excitement, complexity and a bevy of concerns.
He said:
"We are focused on making our agentic AI data set accessible to every organization by combining rapid innovation with a strong foundation of security, reliability, and operational excellence. Our approach accelerates progress by building on proven principles."
In the end, Sivasubramanian's talk in New York was largely about balancing innovation with fundamentals, and about foundational approaches that can outlast changes in models and underlying technologies. To AWS, a strong foundation and approach enable and accelerate innovation rather than constrain it.
There's also a reality check behind AWS' rather practical approach: Enterprise adoption of AI agents will trail the technology advances and vendor marketing speak.
AWS is setting itself up for AI agent production systems where stability matters and models for most use cases are good enough to last a while. As agentic AI becomes more enterprise ready, basics such as identity, authentication and stability matter.
The goal for AWS is to leverage Amazon Bedrock Agent Core and its partner ecosystem to enable enterprises to go from experiments to production with AI agents designed to run mission critical business processes. That progression is what has enterprises nervous, Sivasubramanian said.
Here's a look at Amazon Bedrock Agent Core:
Agent Core features a secure serverless runtime with session isolation. The agent runtime provides dedicated compute environments for AI agents with session, service and memory isolation leveraging AWS' Nitro abstraction layer.
Access to tools and capabilities so AI agents can execute workflows with the right permissions, context and controls.
The use of any model or open source framework.
Identity services to manage permissions of an AI agent and authenticate them.
Built-in checkpointing and recovery for interruptions.
Observability that's built in for internal and third party AI agents.
Agent Core Gateway for integration with other agents and various systems.
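Two of the ideas above, per-session isolation and built-in checkpointing with recovery, can be illustrated with a short sketch. This is not the AWS SDK; a real integration would use AWS-published clients, and every name here is an illustrative assumption.

```python
import uuid

# Illustrative sketch of session isolation plus checkpoint/recovery.
# Each session carries its own identity and state, and state is
# checkpointed before every step so an interruption can roll back.
class AgentSession:
    def __init__(self):
        self.session_id = str(uuid.uuid4())  # unique identity per session
        self.state: dict = {}
        self._checkpoints: list = []

    def run_step(self, key: str, value):
        """Checkpoint current state, then apply one step's result."""
        self._checkpoints.append(dict(self.state))
        self.state[key] = value

    def recover(self):
        """After an interruption, restore the last checkpoint."""
        if self._checkpoints:
            self.state = self._checkpoints.pop()

# Isolation: two sessions share nothing, not even an ID.
a, b = AgentSession(), AgentSession()
```

In Agent Core the isolation guarantee sits below this level, in dedicated compute environments on the Nitro layer, but the programming-model consequence is the same: no state bleeds between sessions, and interrupted work resumes from a known checkpoint.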
Early customers in private beta for Amazon Bedrock Agent Core include Autodesk, Cisco and Workday.
Other items from AWS Summit New York include:
Customization for Amazon's Nova models. AWS announced the ability to customize its Nova models for enterprise use cases in SageMaker AI. AWS will provide optimization recipes, model distillation and customization to balance cost and performance. Nova has launched eight models in six months.
"Over 10,000 customers are already using Amazon Nova. What really matters is that these models have real world impacts," said Rohit Prasad, SVP and Head Scientist for AGI at Amazon.
Nova will also get customization on-demand pricing for inference.
AI agent availability on AWS Marketplace. Customers will be able to buy AI agents and tools within AWS Marketplace. These agents can be acquired with standardized central billing and license management via AWS.
Sivasubramanian said the aim is to make agents easy to deploy. “Now you can test and run AI agent solutions from a range of vendors, then quickly push to production and scale,” he said.
Updates to Amazon Connect and AWS Transform. Both will get specialized AI agents.