Results

Abrigo: How a ‘lift and shine’ migration to AWS set software vendor up for AI


When Abrigo, which provides compliance, credit risk, and lending software for financial institutions, launched a series of new capabilities for its Abrigo AI suite in September this year, the effort was more than a product launch. The additions were the product-velocity payoff of a broader cloud and data transformation.

For Ravi Nemalikanti, chief product and technology officer at Abrigo, the launch of AskAbrigo, Abrigo Lending Assistant, Abrigo's anti-money laundering assistant, and Abrigo Allowance Narrative Generator highlighted how Abrigo could move much faster than it could two years ago.

"There's no way we could have been in a position to launch those products back in our previous data center-centric world," Nemalikanti said. "It was definitely a transformation around cloud, data, customer experiences, and resilience that made this possible."

Abrigo's new artificial intelligence (AI) products went from concept to production in six months, three times faster than the process previously would have taken.


And the stakes are high. Nemalikanti said Abrigo is a critical software provider to banks with less than $20 billion in assets as well as credit unions. "If you're looking at a $15 billion bank or a $1 billion credit union, they don't have a way to stay ahead of what's happening or even keep pace," Nemalikanti explained. "They look to us as an innovation partner. We needed to transform ourselves to be able to help power the transformation for our banks and credit unions."

In keeping with the trajectory of enterprises such as Intuit and Rocket, taking advantage of the latest in AI required Abrigo to complete foundational steps in the years prior. First, there's a move to the cloud. Then there's the data transformation. And if those two foundational elements are lined up, adopting AI at scale is more feasible.

Abrigo decided to move to the cloud in 2022 and then held a bake-off between the big three hyperscalers. Abrigo, a Microsoft shop, decided to go with Amazon Web Services (AWS), in part because the company didn't want to be tied to software licenses and wanted to use open source technologies, Nemalikanti said. Abrigo, which caters to a heavily regulated industry, has noted that AWS' approach to building security into development and deployment processes was also a big factor.

Nemalikanti noted that the move to AWS was partly about cost savings, but the real win was velocity and the cultural transformation involved with operating in the cloud. He said that previously product teams would develop software and throw it over to the data center ops team. With the cloud, the approach to software development is more holistic. "Shifting to the SRE [site reliability engineering] mindset across the organization was critical to cultural change," Nemalikanti said. "Now, if you build it, you own it and run it."

Using AWS partner Cornerstone Consulting Group, Abrigo moved 100% of its workloads to AWS in 13 months.

"Lift and Shine"

Speaking at AWS re:Invent 2024, Abrigo led a session walking through its cloud transformation. The pre-AWS environment was built around colocated data centers that came with $7.5 million a year in capital costs.

Here's a look at Abrigo's pre-AWS environment:

  • All software-as-a-service (SaaS) servers were hosted out of two geographically diverse colocated data centers. One data center served as the primary site, with the other used for disaster recovery and internal development.
  • Abrigo had about 1,500 virtual servers with 5PB of storage. About 90% of Abrigo's infrastructure was built on Microsoft's stack including Windows, SQL Server Standard and Enterprise, IIS App Server, .NET Framework, and .NET Core.
  • The vendor had more than 50 unique hosted SaaS applications.

Jason Perlewitz, VP of Cloud Operations at Abrigo, said the company was looking to migrate to AWS quickly so it could innovate faster with AI in the future. Speaking at AWS re:Invent 2024, Perlewitz said the goal was to create a foundation for infrastructure, product, and database modernization at lower costs.


The challenge was delivering the cloud migration in fewer than 16 months when dealing with 50 unique applications, a lack of data hygiene, tech debt, and strict downtime requirements to minimize customer impact.

"We thought in the long run we could save money by operating in the cloud," Perlewitz said. "Our infrastructure costs reductions we wanted to be at least 20%, and we thought more than that was possible once we started to operate efficiently. We also wanted to tie our cloud spend to the growth of our business."

Other goals for the cloud migration included:

  • Reducing incident resolution time by at least 20%
  • Reducing product deployment time by 25%, with a 30% increase in deployment frequency
  • Planning ahead for enduring impact. Abrigo spent the first three months setting up architecture, defining data-tagging strategy, and upskilling teams.

"We wanted to free up our smart people to do smart things. We want to innovate," Perlewitz said. "That's where we get value. We want to see time spent on growth activities."

To meet those cloud migration priorities, Perlewitz said, Abrigo deployed AWS Professional Services to build fit-for-purpose landing zones and security architecture and invested in training.

Overall, Perlewitz said Abrigo didn't want to simply migrate but wanted instead to take a "lift and shine" approach that included copying existing virtual machines with the AWS Application Migration Service (MGN), making small changes with outsized benefits, and cutting unnecessary environments and data. AWS Managed Services was used for additional operational support.

Abrigo said lift and shine included the following moves:

  • Consolidating Windows versions before migration
  • Eliminating environments and data that weren't needed
  • Syncing data stores
  • Standardizing engineering tasks
  • Consolidating disaster recovery instances
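
The copy step itself leaned on the AWS Application Migration Service (MGN) mentioned above. For readers who want a feel for what tracking an MGN migration wave looks like, here is a minimal sketch, assuming Python and boto3; this is not Abrigo's tooling, and the field names should be checked against current MGN documentation.

```python
# Minimal sketch (not Abrigo's actual tooling): polling AWS Application Migration
# Service (MGN) for replication status during a "lift and shine" wave.
# Assumes boto3 credentials and region are already configured.
import boto3

def list_migration_wave_status(region: str = "us-east-1") -> None:
    mgn = boto3.client("mgn", region_name=region)
    token = None
    while True:
        kwargs = {"filters": {}}  # field names per the MGN API; verify in current boto3 docs
        if token:
            kwargs["nextToken"] = token
        page = mgn.describe_source_servers(**kwargs)
        for server in page.get("items", []):
            server_id = server.get("sourceServerID")
            state = server.get("lifeCycle", {}).get("state")  # e.g. READY_FOR_TEST, CUTOVER
            print(f"{server_id}: {state}")
        token = page.get("nextToken")
        if not token:
            break

if __name__ == "__main__":
    list_migration_wave_status()
```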

Perlewitz said training was a big part of the migration mix. "We wanted to equip our teams to be functionally literate in the cloud and improve our own internal capabilities," he said. "We want to innovate and adopt new technologies more quickly."

Abrigo hit its goals for the migration and then some. Here's a look (all figures compared with the year prior):

  • The migration was completed in 13 months, three months ahead of schedule.
  • Mean time to recover has decreased 63%.
  • Customer instance incidents have fallen 72%.
  • Cost of infrastructure operates at 3.65% of Abrigo's recurring revenue, down from 5% when the company operated its own data centers.
  • Application performance improved 15% to 30% on average.
  • Time to market for Abrigo's cloud applications is 70% faster than before.
  • Technical debt was reduced by 50%.


Ongoing Optimization

Phil Schoon, senior software architect at Abrigo, said the cloud migration provided many more options for application development as well as optimization challenges.

Schoon said Abrigo developers were excited about the various services from AWS that were now at their disposal. The catch is that those services can add up. "It's very easy to move a monolithic architecture and deploy it, but as it grows it starts to get expensive," Schoon said.

For starters, Schoon explained, Abrigo prioritized working on areas that weren't directly tied to features that affected customers. In addition, Abrigo's team needed to figure out how to use AWS services and then get better at using them.

Schoon said a big focus for Abrigo is container efficiency, where applications were simplified with partner Cornerstone and AWS.

Nayan Karumuri, senior solutions architect at AWS, said at re:Invent that it's common for customers to need to optimize after a migration. "The initial challenge is that there's a bubble cost in the beginning, and that's mainly due to resource inefficiencies," Karumuri said. "When you're looking at 1,500 applications migrating to the cloud, some instances were over-provisioned to avoid performance degradations and provide a good user experience."

Karumuri said Abrigo switched to autoscaling instances and reserved capacity models. The ability to right-size services also required a learning curve.

Here's a look at some of the optimization changes:

  • .NET applications were moved from Windows to Linux environments.
  • Red Hat Enterprise Linux was transitioned to Amazon Linux for native integration with cloud-native services and the ability to use spot instances wherever possible.
  • Compute instances were right-sized, with more instances moved to AWS' custom Graviton chip.
  • Amazon CloudWatch was used to monitor and trigger AWS Lambda functions (a minimal sketch of this pattern follows the list).
  • AWS Cost Optimizer was also used to manage ongoing costs.
  • Abrigo moved commercial databases to AWS where possible.
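
The CloudWatch-to-Lambda pattern above is a common way to automate clean-up and right-sizing actions after a migration. Here is a minimal sketch, assuming boto3 and a pre-existing Lambda function; the alarm name, thresholds, instance ID and ARN are illustrative, not Abrigo's.

```python
# Minimal sketch: a CloudWatch alarm that flags a chronically under-utilized EC2
# instance and invokes a pre-existing (hypothetical) right-sizing Lambda as its
# alarm action. Names, thresholds, and ARNs are illustrative only.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="underutilized-ec2-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=3600,              # one-hour samples
    EvaluationPeriods=24,     # a full day below threshold before acting
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    # CloudWatch supports Lambda functions as alarm actions; routing through an
    # SNS topic that triggers the Lambda works just as well.
    AlarmActions=["arn:aws:lambda:us-east-1:123456789012:function:rightsize-advisor"],
)
```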


By the time Abrigo outlined the project at re:Invent, the company's optimization efforts yielded the following:

  • $1 million in disaster recovery savings due to a reduced EC2 footprint
  • $1.3 million in savings from modernizing databases to Aurora PostgreSQL
  • 80% development cost savings from using Babelfish
  • $140,000 in cost savings from right-sizing EC2 instances
  • $250,000 in savings for optimizing storage
  • 30% processor performance uplift

That list isn't everything, but it gave Abrigo a good base to move forward. Nemalikanti noted that the optimization continues on an ongoing basis.

What's Next?

Nemalikanti said everything from application performance (up 20% to 30% on average) to product release cadence and reporting has been sped up with AWS.

According to Nemalikanti, Abrigo's AI strategy is to bring agentic AI features to customers and give them secure access to the latest models.

"Most of our customers don't have access to multiple foundational models, and there's some trepidation," Nemalikanti said. "What we've done is extend the trust our customers have in us to AI."

Abrigo is also looking to solve for the most critical use cases within smaller banks. For instance, AskAbrigo can pull from multiple policy documents to give tellers the ability to make decisions quickly on questions about cashing a check with a temporary ID or another issue. "We can show them the source so there are no hallucinations," Nemalikanti said.

Using AWS, Abrigo has set customer banks up with their own instances and data store. As for the models, Abrigo picks a variety of models that are best for a specific use case, including Amazon Nova and Anthropic's Claude. "Our AI strategy is simple: Take the five most critical things that matter to customers and launch solutions at a high velocity. We know where the productivity for our customers is lost, and we're embedding AI in exactly those areas," Nemalikanti said.
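
Nemalikanti didn't describe the implementation, but the pattern of picking a model per use case maps naturally onto Amazon Bedrock's Converse API. A minimal sketch follows, assuming boto3; the model IDs and the routing table are illustrative, not Abrigo's actual configuration.

```python
# Minimal sketch: routing each use case to a preferred foundation model on
# Amazon Bedrock. The model IDs and routing table are illustrative assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_BY_USE_CASE = {
    "policy_qna": "anthropic.claude-3-5-sonnet-20240620-v1:0",  # e.g. AskAbrigo-style Q&A
    "narrative_drafting": "amazon.nova-lite-v1:0",              # e.g. allowance narratives
}

def answer(use_case: str, prompt: str) -> str:
    model_id = MODEL_BY_USE_CASE[use_case]
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(answer("policy_qna", "Can a teller cash a check presented with a temporary ID?"))
```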

Nemalikanti's team does thousands of interviews with customers each year, and those interviews will determine where Abrigo uses AI. He said Abrigo plans to leverage agentic AI, but that it doesn't work for every use case, especially when there's a deterministic workflow. "We do think there are real opportunities, but we're just not going to follow the hype," Nemalikanti said. "We will look at real business processes holistically, such as loan origination, documentation reviews, and underwriting."


Snowflake launches Cortex AI for Financial Services, MCP Server


Snowflake launched Snowflake Cortex AI for Financial Services, a suite designed to connect AI models to financial data and apps via the Model Context Protocol (MCP).

The move highlights how industries are increasingly building enterprise AI plans around proprietary and industry-specific data.

Snowflake said the linchpin of the financial services offering is the company's new MCP Server, which connects data from the likes of MSCI, Nasdaq, AP and eVestment with agents built on Anthropic, CrewAI, Cursor, Cognition and Windsurf.

The company said Snowflake MCP Server is in public preview. Snowflake MCP Server can connect to platform tools such as Cortex Analyst and Cortex Search as well as third party external tools and data.

Snowflake said MCP Server can connect to Anthropic, Augment Code, Amazon Bedrock AgentCore, CrewAI, Cursor, Devin by Cognition, Glean, Mistral, UiPath, Windsurf, Workday, and Writer.
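
Snowflake hasn't published wire-level details in this announcement, but MCP itself is an open JSON-RPC 2.0 protocol, so a client talking to any MCP server, Snowflake's included, looks roughly like the minimal sketch below; the endpoint URL, bearer token, and protocol version string are placeholders, not Snowflake values.

```python
# Minimal, generic MCP sketch: list the tools an MCP server exposes.
# MCP messages are JSON-RPC 2.0; the URL and token below are placeholders.
import requests

MCP_ENDPOINT = "https://example-account.snowflakecomputing.example/mcp"  # placeholder
HEADERS = {
    "Authorization": "Bearer <token>",                                   # placeholder
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

def rpc(method: str, params: dict, msg_id: int) -> dict:
    payload = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    return requests.post(MCP_ENDPOINT, json=payload, headers=HEADERS, timeout=30).json()

# A session starts with an initialize handshake (the full flow also sends an
# "initialized" notification, omitted here), then the client can ask which
# tools -- e.g. Cortex Analyst or Cortex Search wrappers -- are available.
init = rpc("initialize", {
    "protocolVersion": "2025-03-26",   # illustrative protocol revision
    "capabilities": {},
    "clientInfo": {"name": "demo-client", "version": "0.1"},
}, msg_id=1)

tools = rpc("tools/list", {}, msg_id=2)
for tool in tools.get("result", {}).get("tools", []):
    print(tool["name"], "-", tool.get("description", ""))
```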

The data platform has been focusing on financial services data for years. In 2021, BlackRock and Snowflake partnered on Aladdin Data Cloud.

Key points about Cortex AI for Financial Services:

  • Cortex AI for Financial Services connects a bevy of data sources via Cortex Knowledge Extensions to round out market analysis, research, business content and news.
  • Machine learning workflows for risk modeling, forecasting, analytics and compliance are available in Cortex AI for Financial Services. Snowflake Data Science Agent can clean data, engineer features, and prototype and validate models.
  • Unstructured data analysis is available via Cortex AISQL to pull insights from documents and images.

Conga will buy PROS B2B unit


Conga said it will acquire the B2B business of PROS Holdings, which is being acquired by Thoma Bravo. Conga is also a Thoma Bravo portfolio company.

Thoma Bravo said last month it will buy PROS for $1.4 billion.

The move for Conga to acquire PROS B2B business resolves one of the big questions of the deal. Thoma Bravo owns Conga and PROS and the two companies compete in certain areas. Terms of the PROS B2B deal weren't disclosed.

When Thoma Bravo closes the PROS deal, the B2B business will move to Conga. The acquisitions should be complete in the first quarter of 2026. With the combination of PROS B2B and Conga, the plan is to offer a complete suite for revenue management and configure, price, quote (CPQ) as well as contract lifecycle management.

Conga CEO Dave Osborne said the addition of PROS B2B will mean enterprises won't have to "stitch together multiple point solutions across their revenue lifecycle." Osborne will remain as CEO of Conga after the deal closes.

The combined company plans to use AI to automate pricing, quoting and contracting to optimize revenue processes, drive insights and execute post-quote.

 

 


UiPath adds agentic AI features to automation platform, expands partnerships


UiPath expanded its UiPath Platform, which is aimed at agentic AI automation and orchestration, and lined up a bevy of partners including OpenAI, Google, Microsoft, Nvidia and Snowflake as it solidified its integration strategy.

The moves by UiPath highlight how AI agents and process automation are starting to converge.

At UiPath's Fusion conference, the company outlined a series of additions to its platform. UiPath announced the following:

  • UiPath Maestro Case Management with pre-built orchestration for claims, loans and disputes for modeling, management and optimization.
  • New UiPath Maestro Process Apps designed for new processes for multiple industries.
  • UiPath Solutions, which combines agents, workflow automation and orchestration. UiPath Solutions include end-to-end processes for financial services, healthcare, customer service and retail.
  • UiPath Studio gets UiPath Agents, which integrates agents across development. UiPath's AI Agent Builder has a new visual UI for debugging, optimization and reusable templates. UiPath's conversational agents extend into multiple collaboration apps.
  • The company also added new features to UiPath IXP document processing and additions to UiPath Test Cloud.

UiPath's big news revolved around its partnerships and integration with data platforms and models. UiPath said it will integrate its platform with OpenAI ChatGPT via a connector that integrates frontier models with workflows.

Key points about the UiPath-OpenAI partnership:

  • OpenAI models and APIs will be integrated into UiPath's enterprise orchestration tools.
  • The companies will create a benchmark for using models in agentic automation to evaluate multiple offerings.
  • UiPath Maestro will orchestrate UiPath, OpenAI and third party AI agents in business processes via large action models.
  • UiPath will be integrated with ChatGPT via model context protocol (MCP).

UiPath also said its Conversational Agent with voice interaction will be powered by Google Gemini models. The move puts Gemini into business processes without coding and manual efforts.

According to UiPath, customers will be able to leverage Google Cloud Vertex AI to trigger, build and manage automation through natural language.

The company also announced a partnership with Nvidia. Key details include:

  • UiPath will include Nvidia Nemotron models and Nvidia NIM into its platform and integrate via connectors.
  • The companies will look to broaden agent orchestration and usage for Nvidia Nemotron models.

The partnership with Snowflake will combine UiPath's automation platform with Snowflake Cortex AI. The combination puts together AI agent orchestration and UiPath Maestro with Snowflake's data platform.

Snowflake's Cortex Agents will be integrated into UiPath so enterprises can leverage data and build agents for workflows.

UiPath also announced a deal to integrate into Microsoft AI Foundry in a move that will bring its orchestration platform to Microsoft customers across multiple industries. Via MCP, UiPath agents will have bi-directional integrations with Microsoft Copilot and Copilot Studio and be able to interact with Microsoft agents and models.

 

 


Soul of the Machine: AI proof of concepts a waste of time


Sunil Karkera, founder of Soul of the Machine, is leveraging agentic AI to outpace much larger companies. "We solve boring problems and it's exciting," said Karkera.

Soul of the Machine has migrated SAP in 90 days and implemented a voice-based, LLM-augmented factory and production planning system in days. Karkera's services are entirely agent-based, with engineers doing the work up front.

"Everybody is in the US and we are forward deployed engineers. We work directly with the customers. Engineers, strategists and designers are totally vertically integrated," said Karkera, speaking at Constellation Research’s AI Forum in Washington DC.

Karkera also said AI has flattened the services model. He doesn't believe in proofs of concept and pilots. Prototypes can be created in that first customer meeting and can rapidly go to production. "We are using an entirely end to end AI toolchain," said Karkera. "Vibe coding is about 10% to 20% in the prototyping phase. Then it's basically deep architecture. Engineering AI is really hard because most of the work is context engineering and it's not straightforward."


In other words, it's hard to keep it simple. Karkera said Soul of the Machine tries to avoid multi-agent orchestration to keep tools limited. "Once you use more than three tools, it goes all over the place. Ideally, it's one tool per agent," said Karkera. "If we do multi agent orchestration we do it handmade. There's no choice at this point."

According to Karkera, enterprises are going down the wrong route with proof of concepts.

"We have a rule that we don't do any POCs. We have left money on the table by saying no to POCs, because we want to embrace the problem and do it all the way, rather than explain how hard it is. One cultural thing is to go after a full problem, segment that problem, solve it all the way and put it in production. Don't dwell on fancy problems to solve instead of the real problems with ROI."

 


Google DeepMind’s Danenberg on emerging LLM trends to watch


Peter Danenberg, a senior software engineer at Google DeepMind, leads rapid prototyping for Gemini and has to think through more than a few big ideas.

Speaking at Constellation Research's AI Forum, Danenberg spoke with R "Ray" Wang about emerging trends in AI and looming questions ahead. Here's a look at the high level topics in a space that evolves almost hourly.

More from AI Forum DC: For AI agents to work, focus on business outcomes, ROI not technology

Ambient LLMs. To use LLMs today, you break out your phone or laptop and often break your flow. The future could be an ambient companion that sits there and sees what you see and hear. Danenberg said he wasn't sure where he sits on the ambient LLM spectrum, noting that it could be creepy, but there are advantages to an assistant that wouldn't break your creative flow. "It's an interesting question," he said. "There's an idea of a companion that's there and you're not aware of it until you need it."

Use cases. Danenberg said companies have shifted from reluctance to adoption in how they use foundational models. Companies are focusing on low-hanging fruit use cases, but these add up. "Anything where you need to extract structured data from unstructured data is beautiful low hanging fruit you can get started with," he said.
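
To make that concrete, here is a minimal sketch of the structured-extraction pattern in plain Python; the call_llm function is a stand-in stub for whichever foundation model API you use, and the invoice schema is invented for illustration.

```python
# Minimal sketch of LLM-based structured extraction: ask for JSON that matches a
# known schema, then parse and validate it. call_llm() is a stand-in stub for a
# real model call and simply returns a canned response here.
import json

SCHEMA_HINT = '{"vendor": str, "invoice_number": str, "amount_usd": float, "due_date": "YYYY-MM-DD"}'

def call_llm(prompt: str) -> str:
    # Stand-in for an actual foundation-model call.
    return '{"vendor": "Acme Corp", "invoice_number": "INV-1042", "amount_usd": 1250.0, "due_date": "2025-11-30"}'

def extract_invoice_fields(document_text: str) -> dict:
    prompt = (
        "Extract the following fields from the document and respond with JSON only, "
        f"matching this shape: {SCHEMA_HINT}\n\nDocument:\n{document_text}"
    )
    raw = call_llm(prompt)
    data = json.loads(raw)  # fails loudly if the model drifted from JSON
    for field in ("vendor", "invoice_number", "amount_usd", "due_date"):
        if field not in data:
            raise ValueError(f"missing field: {field}")
    return data

if __name__ == "__main__":
    print(extract_invoice_fields("Invoice INV-1042 from Acme Corp for $1,250.00, due Nov 30, 2025."))
```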

Constellations of smaller models emerge. Danenberg said one trend to note is that there are startups focused on smaller models that do one thing well and then become parts of constellations of LLMs that solve problems.

Don't forget the classics. Danenberg said there's a renaissance in thinking in AI that's "going back to classic ML (machine learning)." The trend is still developing, but researchers are rediscovering 1960s AI, symbolic reasoning and ontologies. In this world, "LLMs are just becoming a universal interface over small models and classic ML," said Danenberg. "I wonder if, to a certain extent, that the LLM sweet spot is really as a user interface of these classical models that can achieve something with 100% accuracy with its own specific event. That's going to be an interesting idea."

The importance of 10,000 hours. The effect of LLMs on human intelligence is an ongoing debate and concern. Danenberg said one impact to ponder is the 10,000 hours rule. Humans put 10,000 hours into something, gain domain knowledge and expertise and then develop a bullshit detector to distinguish between fact and fiction. "The big question is that in the age of LLMs are we still going to be able to put in the 10,000 hours to develop these reality detection systems?" said Danenberg. "Going forward, that's going to be an interesting question in terms of the generation coming of age."

Virality of Nano Banana, Google's AI image editor. Danenberg said the combination of Gemini 2.5 and Nano Banana led to a viral moment for Google that was unpredictable. "With this virality thing, you can't force it, but I am just glad we had a moment," said Danenberg.


For AI agents to work, focus on business outcomes, ROI not technology


The IT department is where agentic AI goes to die, or at least never makes it out of proof of concept. Agentic AI needs to be driven by outcomes, returns and benefits to the business instead of technology.

That's a big takeaway from Chris Hallenbeck, Boomi's SVP and GM of AI & Platform. Speaking at Constellation Research's AI Forum in Washington DC, Hallenbeck said that enterprise agentic AI is a work in progress and enterprises are still wrestling with defining the technology.

"Less than 5% of companies know what agentic AI is," said Hallenbeck, who said an AI agent is one that can perceive, reason and act. "Agents are more than conversations and within a corporate sense, they need access to my data, CRM, financial systems, HR and databases to proceed. To reason you have to give it oerating procedures."

In other words, process matters, as do frameworks for governance, observability and security as well as audit trails. "Without those systems and guardrails nothing gets out of POC to production," said Hallenbeck.
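
In code, the guardrail-plus-audit-trail requirement can start as simply as wrapping every tool call an agent makes. Here is a minimal, generic sketch, not Boomi's implementation; the allow-list and the single tool are invented for illustration.

```python
# Minimal sketch: every action an agent takes passes a policy check and leaves an
# audit record. The allow-list and the tool itself are illustrative only.
import json, time

AUDIT_LOG = "agent_audit.jsonl"
ALLOWED_TOOLS = {"lookup_order_status"}   # explicit allow-list = a simple guardrail

def lookup_order_status(order_id: str) -> str:
    return f"Order {order_id}: shipped"   # stand-in for a real system-of-record call

def run_tool(tool_name: str, arguments: dict, actor: str) -> str:
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not permitted for this agent")
    result = {"lookup_order_status": lookup_order_status}[tool_name](**arguments)
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:   # audit trail
        log.write(json.dumps({
            "ts": time.time(), "actor": actor,
            "tool": tool_name, "arguments": arguments, "result": result,
        }) + "\n")
    return result

if __name__ == "__main__":
    print(run_tool("lookup_order_status", {"order_id": "A-123"}, actor="support-agent-demo"))
```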

Too often, the business impact is getting lost in agentic AI use cases as enterprises focus on technology over outcomes.

"If AI is being pushed into IT with a focus on code and libraries, those systems don't go live," said Hallenbeck. "You can get to POC, but it's not going to scale up to enterprise class. Folks are having a lot of fun, but if they're not dead focused on business impact it's not going live. How can I deploy a project from idea to actual positive business impact is important. It's completely doable, but you just can push it down to a CoE (center of excellence)."

That CoE approach raises the costs and leads to scope creep where enterprises are trying a big bang approach to AI agents, said Hallenbeck. "If you keep going after bigger and bigger projects the business is going to say it doesn't want to pay for it because the hurdle rate is huge," he said.

What's the approach that works for AI agents?

Hallenbeck cited one customer that took two days and built an agent to save $50,000. That's $50,000 saved in two days' work. You scale and learn by solving one problem at a time.

"Enterprises are going after reducing costs and reducing risk and looking where they can have a higher impact on revenue and other places," said Hallenbeck, who cited one customer that used agents to insource a process and saved 27,000 hours.

Other takeaways from Hallenbeck:

  • Processes. Enterprises are automating processes with AI agents, but then need to think through the overall process. "What if you rethought the entire process, looking at it from an agentic perspective? Redesign it," said Hallenbeck.
  • Raising the AI IQ. To drive business impact enterprises need to focus on raising the AI IQ across the workforce.
  • The next phase for agentic AI in 2026 is going to be verticalized use cases.
  • Agentic AI is still in early innings due to tooling, the need for situational awareness and standards.



ServiceNow launches AIx as AI becomes the platform UI


ServiceNow launched AI Experience (AIx), a multimodal interface that will serve as a user interface for the company's platform. Enterprise users will be able to access ServiceNow apps, data and workflows with AI agents.

The company said AIx will be integrated throughout ServiceNow's platform. AIx will also be tailored to each user's permissions for employees, knowledge workers and developers.

ServiceNow's move comes as debate about the future of SaaS ramps. Workday recently acquired Sana to add a unified AI interface over its applications in the future. Meanwhile, enterprises are pondering using AI agents to traverse multiple applications. SaaS vendors could be disrupted if they don't become platforms or are relegated to data stores.

The other thread here is that ServiceNow is also looking to consolidate various AI agent efforts on multiple platforms. With ServiceNow's AIx, customers will be able to use voice agents to solve issues in sales, HR and IT. AIx is built on top of ServiceNow Now Assist. AIx can leverage AI agents to handle tasks in the background to resolve issues.

Amit Zavery, ServiceNow's Chief Product and Chief Operating Officer, said:

"We are at a critical crossroads in technology. Each major shift from command lines to GUI, desktops to mobile, the mouse to touch, gesture and voice, has fundamentally rewritten the rules of work. AI is definitely transforming not just what we do today, but how we do everything. And each previous shift enterprise technology also created legacy, siloed problems. That is why ServiceNow is introducing this new interface for the enterprise, which is intuitive, multimodal and action oriented."

ServiceNow executives said AIx is a shift in how people interact with technology and SaaS apps. The upshot here is that ServiceNow is looking to consolidate enterprises on its platform and break down data and dashboard silos.

Zavery added that ServiceNow AIx isn't just "another AI layer bolted onto the legacy tools" and "an experience which is really built for the enterprise."

A series of demos from ServiceNow highlighted the following:

  • A conversational voice interface that was able to pull data and workflows from multiple apps and data stores and carry out work.
  • The AI prompts guided workflows.
  • AI agents knew a user's role, history and preferences.
  • Adobe was featured as a customer example, highlighting how AI agents could carry out tasks.

Key points about AIx include:

  • Customers can deploy AIx via AI Control Tower across native AI and third-party tools.
  • AIx includes AI Voice Agents, which can retrieve information, update records and troubleshoot issues, AI web agents that can work across browsers and fill out forms, AI Data Explorer, which connects data sources via Workflow Data Fabric, and AI Lens, which can see screens, forms and dashboards and take action.
  • ServiceNow CRM will be among the headliner apps for AIx. Employees will be able to use AI agents to scan tickets, flag patterns and craft response plans.
  • ServiceNow executives noted that a big theme was optionality for how AI agents work across an enterprise with multiple models. AIx's backend also discovers agents that are available via multiple agent protocols. "We pioneered workflows, we automated them, and now we're making them autonomous," said Dorit Zilbershot, Group Vice President, Product Management, AI Experiences and Innovation. "Every interaction happens with clear guardrails. People stay in control, and we give them the optionality at every level, with full visibility into how AI works and what it's doing."
  • AI Lens is generally available. AI Voice Agents, AI Web Agents and AI Data Explorer will be available at the end of 2025.

Can ServiceNow AIx expand its markets?

Amy Lokey, Chief Experience Officer, said AIx is a differentiator and AI agents bring automation to indeterministic workflows. "We think that the agentic solutions are going to be really force multipliers for productivity," said Lokey.

Zavery added that ServiceNow's AI agents are built on decades of workflow data in the company's platform.

AIx is also a big theme in ServiceNow CRM.

Terence Chesire, Vice President Product Management for CRM & Industry Workflows, said AIx will have a big impact on workflows in CRM.

"We're replacing traditional CRM with an AI native system of action, one that breaks down silos, automates workflows and uses AI to free your teams, to thrill your customers and accelerate their growth. This is purpose built for sales fulfillment service and customer success," said Cheshire.

One BT150 member said that when starting a transformation from scratch, ServiceNow's platform works well. With a greenfield, ServiceNow CRM, ITSM and every other enterprise application works. The issue is most enterprises have multiple enterprise applications and platforms. What remains to be seen is whether AI experiences can collapse SaaS apps.

 


Anthropic, Microsoft, OpenAI build out AI agent use cases


Large language model providers are rapidly building in agentic AI tools to handle tasks autonomously over time. Anthropic released Claude Sonnet 4.5, OpenAI is open sourcing Agentic Commerce Protocol to give ChatGPT a commerce boost and Microsoft sees Office as a way to vibe work.

Add it up and the message from LLM giants is clear: Here are our foundational models at your service.

Let's take some LLM and AI agent inventory.

Anthropic

The company released Claude Sonnet 4.5 and the LLM seems to thrive at coding, but the bigger launch in the long run may be the addition of context editing and a memory tool in the Claude Developer platform.

Claude Sonnet 4.5 will have the ability to handle long-running tasks without hitting context limits or losing information. For an AI agent tasked with a project, the ability to work on something for a longer time means developers will have options for higher-level tasks.

Anthropic is pretty clear about its goals with Claude Sonnet 4.5. The company says the latest Claude is "the best model in the world for building agents."

According to Anthropic, Claude Sonnet 4.5 will be able to process codebases, analyze hundreds of documents and maintain tool interaction histories. The general idea is that the latest Claude can develop software and carry out other tasks on the fly.
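
For developers, the entry point is unchanged; what the release mainly changes is how much work a single long-running session can carry. Here is a minimal sketch of a basic call with the Anthropic Python SDK, assuming an ANTHROPIC_API_KEY is set; the model alias is illustrative, and context editing and the memory tool are separate beta features not shown.

```python
# Minimal sketch: a single Claude Sonnet 4.5 call via the Anthropic Python SDK.
# The model alias is illustrative; context editing and the memory tool are beta
# features configured separately and omitted here. Requires ANTHROPIC_API_KEY.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-5",   # illustrative alias; pin an exact version in production
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Summarize the open TODOs in this codebase and propose a refactoring plan.",
    }],
)

print(message.content[0].text)
```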


Microsoft

For its part, Microsoft is also weaving agentic AI tools into its applications. The company said it is bringing "vibe working" to Microsoft 365 Copilot with Agent Mode in Office apps and Office Agent in Copilot.

In a nutshell, Microsoft's agents will be able to create spreadsheets and docs in Excel and Word and whip up PowerPoints with just a conversation.

Agent Mode will tap into Excel natively and leverage OpenAI's latest reasoning tools.

Office Agent in Copilot chat will leverage Anthropic models to whip up Word docs and PowerPoint presentations.

Will vibe working become a thing? You know Microsoft will give it a go and the term may just stick.

OpenAI

OpenAI, which has become the more consumer-centric LLM play, launched ChatGPT Instant Checkout and the Agentic Commerce Protocol, which was developed with Stripe.

With Instant Checkout, OpenAI said ChatGPT Plus, Pro and Free users can buy from Etsy directly in a chat. The company will add Shopify merchants soon. You see where this is going. OpenAI will be a commerce engine and get a little cut as commerce via ChatGPT expands.

For the broader agentic commerce play, OpenAI said it is open-sourcing Agentic Commerce Protocol, which is the technology behind Instant Checkout. "For shoppers, it’s seamless: go from chat to checkout in just a few taps. For sellers, it’s a new way to reach hundreds of millions of people while keeping full control of their payments, systems, and customer relationships," said OpenAI in a post.

This Agentic Commerce Protocol launch will get interesting because it's one of the first times AI players have decided to compete. Google launched Agent Payments Protocol earlier this month.

Here's the workflow for OpenAI's Agentic Commerce Protocol.

 


Enterprise AI: It's all about the proprietary data


Two distinct AI markets have emerged. There's the AI infrastructure, superintelligence chase and funding fest with projections that border on fictional. And then there's enterprise AI that revolves around process, automation, specific use cases and compute that’ll be spread around.

The divergence between the two AI markets has widened throughout 2025 but is now inescapable. The AI infrastructure boom is all about a chase for consumer AI. In this week's saga, OpenAI, Softbank and Oracle launched Stargate's flagship AI factory and other sites, and OpenAI entered a deal with Nvidia for up to $100 billion in funding, under which the LLM player builds data centers on Nvidia GPUs and receives the investment as each gigawatt is deployed. Sam Altman hinted at big plans.

In a nutshell, consumer AI is more like covering a sports story. OpenAI pledges $300 billion to buy AI infrastructure from Oracle. Oracle buys GPUs. Nvidia backstops CoreWeave with purchase guarantees if it has extra capacity. OpenAI also appears to be a big future buyer of Broadcom XPUs (which probably led to the Nvidia deal). Meanwhile, hyperscalers such as AWS and Google have to buy Nvidia while building their own custom chips for AI workloads.

GPUs are essentially like Pokémon cards being traded among a handful of really rich kids. The trading game works until it doesn't. The reality is that the numbers may not work. According to a report by Bain & Co., $2 trillion in annual global revenue is needed to fund the computing power required to meet anticipated AI demand by 2030. Bain estimated that even with anticipated AI savings, the world falls $800 billion short of keeping pace with demand.


And then there's enterprise AI, which is slower moving but could deliver more value and result in on-prem and edge AI. Enterprise AI revolves around proprietary data sets and real returns (as opposed to the pie-in-the-sky possibly happening kind). Enterprises in multiple industries are starting to leverage their unique data and insights into new revenue streams driven by AI.

Frankly, enterprise AI can be more interesting and sustainable. Enterprise AI is what will be there when this AI bubble eventually bursts. Here's a look at a few scenes from the data and enterprise AI transformation front, where thinking three to five years ahead pays off big.

FedEx: The metadata about packages is more important than the actual package

Rajesh Subramaniam, CEO of FedEx, laid out the transformation that's either happening or needs to happen across every company and industry. Speaking on FedEx's first quarter earnings call, Subramaniam laid out the importance of the data ground game.

Subramaniam said that data and technology is the foundation of FedEx and "that information about the package is as important as the package itself."

"We moved 17 million packages through our network daily, generating 2 petabytes of data and 100 billion transactions across software applications," said Subramaniam. "But the real value is in the volume. It is in the unique nature of this data. Our position at the intersection of global commerce gives us an unmatched view of physical supply chain patterns, seasonal demand shifts and emerging trade corridors."

FedEx recently hired Vishal Talwar as chief digital and information officer and president of FedEx Dataworks. Talwar was formerly chief growth officer at Accenture Technology, and his remit at FedEx is to turn the company's physical and digital assets into "next-generation AI-led capabilities."

Subramaniam said FedEx is looking to scale AI across operations and create new revenue models. For the core business, FedEx is ramping an expanded partnership with Best Buy and new Amazon business. Those logistics services will only add to the data pool.

The big takeaway from Subramaniam was that FedEx is set to reap the rewards from building out its data platform in 2020. That data platform is now "the fuel for AI," he said. FedEx has already launched its commerce platform FDX, which is used as a workflow tool and supply chain orchestrator.

"Our mission and vision has evolved to make supply chain smarter for everyone. It begins with our data platform and the insights that we have on supply chain and the role of AI and the tools that we have," said Subramaniam, who noted that FedEx will outline the data and AI strategy more in February.

Exxon: Leveraging its project data

Exxon is using AI to leverage its knowledge management system, which takes all the lessons learned from every project, both large and small, and stores them for future use. Exxon built its project database well before AI.

However, generative AI was the big unlock.

Speaking at a recent energy investment conference, Exxon Senior Vice President Jack Williams said: "A lot of the advantage of AI is in how good your data set is, how good is the data that AI model is learning from. And we have some great data that's built on the world's largest project database. And so, we're very optimistic that's going to make a big difference long term. It just makes us that much more productive and that much better in terms of making sure that we're leveraging every single lesson, every single bit we've learned in the past and bring that to every single project we do."

Exxon also has an internal platform that takes historical data and makes it available for AI applications. This data complements a large ERP transformation that will provide a consistent data architecture.

Speaking on Exxon's second quarter earnings call, CEO Darren Woods said cost efficiency from AI is a second priority compared to driving effectiveness. At Exxon, saving a million here and there can quickly add up to billions of dollars saved.

"A bigger value lever is frankly on the effectiveness side of the equation. We're looking better at how we can take advantage of AI to make the products that we make at much a lower cost and with much better performance parameters, find oil cheaper," said Woods. "You can just kind of go through the list of things that we do to produce the products that we make."

Intuit: AI, data investments lead to "system of intelligence"

Intuit has been ahead of emerging technology curves for nearly a decade. By building out its data platform, Intuit made a transition to generative AI. Now that Intuit has its AI model game down it's starting to leverage AI agents. Now Intuit CEO Sasan Goodarzi is looking to AI to reinvent the company's SaaS business, deliver a unified platform for business and accelerate growth.

Goodarzi, CTO Alex Balazs and other executives outlined a bullish vision for the company. As noted previously, Intuit is gunning for the midmarket enterprise and sticking with customers as their businesses grow.

"We believe that every SaaS company, anybody that makes software is either going to get disrupted or they're going to be the disruptors. And that's because of what's possible with AI," said Goodarzi. "We believe that SaaS players must become the system of intelligence, which means you have to be great at data, data models, data ingestion and AI capabilities to ultimately architect learning systems that learn from customers and deliver the experiences that they are looking for, which means business logic, workflows and the app layer will completely get disrupted."

Six years ago, Intuit bet the company on data and AI. Now the pieces are in place for Intuit to become the "system of intelligence" it wants to be. In July, Intuit launched a bevy of AI agents to go along with its AI-enabled human experts. This intersection is what Intuit calls AI and HI (human intelligence).

Balazs said: "Data, data services and AI are durable advantage. And as we evolve from a system of record to a system of intelligence, our all-in-one platform and agentic capabilities fuel customer growth."

Goodarzi said that Intuit's platform can eliminate 80% of the apps customers use, collapse data silos and lower costs for them.

"We are building our systems in a way where we're disrupting business logic. Ultimately, the customer can engage with us and ask for, how do I grow my business, how do I sell wine that's over $50 versus other competitors? Which customers are most profitable? Let me give you my POS data. Can you tell me which areas I should focus on? Whatever you want to engage with the platform on, we, ultimately, our data, AI and HI capabilities deliver that experience. That is a system of intelligence. That is our strategy," said Goodarzi.

Intuit has three big bets: creating done-for-you experiences for everything from payroll, accounting and taxes to marketing, to name a few. Money and cash flow services are another big bet. And finally, the midmarket will give Intuit more growth.

The data assets are impressive. Intuit has:

  • 625,000 data points per business, with insights on specific businesses.
  • 70,000 data points for consumers.
  • 1,300 engineers actively building AI agents.
  • Insights on more than $1 trillion in money moved.
  • More than 15 large language models are orchestrated by Intuit's internal financial LLMs. Balazs added that Intuit takes a balanced approach in applying genAI, and traditional and classical AI techniques.
  • Intuit's GenOS enabled 9,800 model deployment events in the last fiscal year.

Intuit will outline how it's putting these parts together at Intuit Connect in October.

FICO and Equifax: From credit scoring to decisioning platform

Fair Isaac Corp. (FICO) launched its own LLMs for financial services designed to deliver accurate and auditable outcomes and trust scores.

FICO launched a series of foundational models for financial services including FICO Focused Language Model for Financial Services (FLM) and FICO Focused Sequence Model for Financial Services (FSM).

In a nutshell, FICO is taking its data and training foundational models to address specific tasks or business problems. FICO's models require up to 1,000x fewer resources than general models.

FICO CEO William Lansing said the company's investment in its data and AI platform is designed to enable enterprises to make decisions and apply intelligence across customer lifecycles. The company also inked a deal with AWS for greater adoption of its FICO Platform.

Equifax is another company that has completed a long transition to the cloud, leveraged its data sets and now is releasing new products at scale. That focus on cloud and data platforms has enabled Equifax to also leverage AI well.

CEO Mark Begor said at a recent investment conference: "The power of the AI allows us to ingest more data and we have more data than our competitors. We believe we can deliver products that are differentiated versus our competitors because I think everyone in the room knows when you use more data in this decision, you generally get a higher predictive solution. And higher predictability means ROI from our customers and it means either market share or price for Equifax."

Begor added that Equifax has proprietary data that only it can use. The differentiation in an emerging enterprise AI market is leveraging your own data. "The moat is the aggregation of the data that means only we can apply AI to it," said Begor.
