OpenAI, AMD ink big GPU deal: What it means for the rest of us

OpenAI just made AMD a viable counterweight to Nvidia for GPUs. The company said it has inked a 6-gigawatt deal under which AMD will supply Instinct GPUs for OpenAI's AI buildout.

Under the terms of the deal, OpenAI's first gigawatt deployment of AMD Instinct MI450 GPUs starts in the second half of 2026. OpenAI will also get a warrant to acquire up to 160 million shares of AMD that will vest as milestones are reached. The first tranche vests when the first gigawatt is deployed, with additional tranches vesting as OpenAI builds to 6 gigawatts.

OpenAI also said there is additional vesting tied to AMD stock price targets and the ability to deploy Instinct GPUs at scale.

The OpenAI-AMD deal is different from the OpenAI-Nvidia deal. Nvidia invested in OpenAI to help fund purchases of GPUs and AI infrastructure. The OpenAI-AMD deal doesn't provide cash up front, but it aligns the two companies' interests.

AMD CEO Dr. Lisa Su said the partnership will create "a true win-win enabling the world’s most ambitious AI buildout and advancing the entire AI ecosystem." OpenAI CEO Sam Altman said the AMD deal gives it the ability to accelerate its plans.

Constellation Research analyst Holger Mueller said:

"OpenAI is signing deals left and right. The AMD deal is different, as it is the first explicit and only inference deal, as well as equity deal the AI vendor has stuck. It's a big win for AMD that could not get to scale in the data center for AI yet, the big question is now which data center vendor will get the workload. The equity aspect is also interesting. AMD is giving up a lot here for an initial deal. It is also clear that the OpenAI leadership is scared from compute capacity challenges and wants to avoid them at all cost. The concern is that it's unclear how will OpenAI be able to pay for all the performance obligations. We'll worry about tomorrow when it's tomorrow."

Mueller isn't kidding about the questions about payment. OpenAI's recent deals, as tallied by Goldman Sachs, include:

That spending is against OpenAI's 2025 revenue target of $13 billion, according to The Information.

Clearly, the OpenAI deal is huge for AMD, which now has solidified itself as a viable second option for GPUs. The OpenAI-AMD deal is also big for anyone procuring AI compute.

Here's why:

  • Nvidia has had a lock on the AI infrastructure market and those nice profit margins are being funded by IT buyers. Enterprises have been waiting for two years to see Nvidia competition.
  • As more AI infrastructure is deployed on-premises and at the edge, AMD will be a natural option for enterprises.
  • AMD will land more tier-1 customers and cloud instances.
  • The constrained GPU market will be less constrained with AMD and cloud hyperscalers' custom chips adding competition.
  • AMD is likely to land more deals with AI-centric cloud providers.
  • AMD's ROCm platform will be more viable against the Nvidia software stack, which is where the lock-in will really occur.
  • Nvidia still has the installed base and dominance, but will arguably face its first real competition in the market.

 


AI Forum Washington, DC 2025: Everything we learned

Constellation Research’s AI Forum in Washington DC featured 19 sessions, AI thought leaders and practitioners, and a community drinking from a firehose.

Here’s a look at the takeaways.

AI as a 1960s-ish moonshot?

As you would expect at an AI Forum held in Washington DC, there was a good bit of talk about AI as a battle between the West and China and the need for more power and less regulation.

Key points from the sessions:

  • Data centers are becoming critical national infrastructure, with AI workloads expected to consume one-third of data center capacity. The global race for compute capacity requires 300 gigawatts in 4.5 years, with power demand doubling every 100 days. Countries need comprehensive digital transformation strategies to remain competitive.
  • AI is viewed as "a new industrial base for the United States" that will determine whether America maintains global leadership or surrenders it. Speakers emphasized that leadership in AI is "never permanent" and the pace has gone "supersonic," making this a defense perimeter issue rather than just an economic opportunity.
  • The fragmented regulatory approach in the US contrasts with Europe's more restrictive AI Act, which is driving companies to relocate operations. The need for balanced policies that encourage innovation while providing appropriate guardrails remains a central challenge.
  • Multiple panelists agreed that AI was going to affect jobs. These panelists also agreed that no government has an answer for the job losses.

Pragmatic use cases

David Giambruno, CEO of Nucleaus, has seen his share of technology transformations. We covered Giambruno's approach to cutting IT costs last year.

To roll out real AI use cases, you'll have to speak to value first, said Giambruno, who added that you also have to figure out how you're going to build and on what platform.

"How you build matters both in cost, speed to value, and how much glass you want to chew," he said. In other words, pick one platform and operating model and run.

Once that platform is picked, give developers a safe place to experiment and see what's possible.

Mukund Gopalan, Global Chief Data Officer at Ingram Micro, said every use case for AI needs to have "a clear line of sight to the top line or bottom line."

Gopalan said every use case is different, but the guiding principle is that they need to save costs, drive revenue or save time.

Scott Gnau, Vice President of Data Platforms at Intersystems, cited a use case that did all three: ambient listening AI that plugged into the workflow of electronic health records. "A physician could have a conversation, look a patient in the eye and have everything captured and get a list of recommendations from an AI agent," said Gnau. "This use case takes an existing process and makes it fully optimized yet human."

Use cases that turned up on a panel:

  • Data cleansing and finding out where sensitive data resides.
  • Data engineering.
  • Pulling logic out of stored procedures.
  • Transforming legacy applications with AI.
  • Knowledge management applications.

Nilanjan Sengupta, SVP / Industry Market Director, Public Sector and Healthcare, Americas at Thoughtworks, said the software development lifecycle is a clear use case for AI agents. "The main trend we're seeing is legacy modernization across the entire enterprise," he said.

Anand Iyer, Chief AI Officer at Welldoc, said his company has created a large sensor model that takes sensor data and uses it to predict glucose values in the hours ahead. "When we think about where healthcare is headed, a lot of us are trying to get to the prevention piece," said Iyer.

Peter Danenberg, a senior software engineer at Google's DeepMind who leads rapid prototyping for Gemini, said enterprises have been expanding the use case roster. Danenberg said there has been a shift in how companies use foundational models, from reluctance to adoption. Companies are focusing on low-hanging fruit for use cases, but these add up. "Anything where you need to extract structured data from unstructured data is beautiful low hanging fruit you can get started with," he said.
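
As a concrete example of that low-hanging fruit, here is a minimal structured-extraction sketch using the OpenAI Python SDK. The invoice text, schema and field names are illustrative assumptions, not something from the session.

```python
"""Structured extraction sketch: pull typed fields out of free text.

Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
in the environment. The invoice example and JSON keys are illustrative.
"""
import json

from openai import OpenAI

client = OpenAI()

UNSTRUCTURED = """
Invoice 8841 from Acme Corp, dated 2025-03-14, totals $12,400.50,
payable net-30 to the attention of J. Rivera.
"""

# Ask the model to emit only JSON that matches a small, explicit schema.
prompt = (
    "Extract the following fields from the text as JSON with keys "
    "invoice_id, vendor, date, total, terms. Text:\n" + UNSTRUCTURED
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any JSON-mode-capable model works here
    response_format={"type": "json_object"},  # forces parseable output
    messages=[{"role": "user", "content": prompt}],
)

record = json.loads(resp.choices[0].message.content)
print(record["invoice_id"], record["total"])
```

The same loop scales from invoices to contracts, claims or support tickets; only the source text and the schema change.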

Proofs of concept are widely panned

If there was a punching bag at AI Forum Washington DC it was the proof of concept.

CxOs repeatedly panned POCs because they were an excuse for not doing the work upfront, sucking in funds and creating rabbit holes. "Before you get to the POC we often trip over ourselves with understanding our data," said one federal government AI leader. "Rather than doing POCs, do a discovery sprint for AI and it will quickly unveil where the holes in your data are."

Sunil Karkera, Founder of Soul of the Machine, is leveraging agentic AI to outpace much larger companies. "We solve boring problems and it's exciting," said Karkera. He also doesn't believe in proofs of concept and pilots. Prototypes can be created in that first customer meeting and can rapidly go to production. "We are using an entirely end-to-end AI toolchain," said Karkera. "Vibe coding is about 10% to 20% in the prototyping phase. Then it's basically deep architecture. Engineering AI is really hard because most of the work is context engineering and it's not straightforward."

Are chief AI officers a thing?

Take a room with a few chief AI officers and ask them whether there's staying power in their titles and you're likely to get some interesting answers.

The takeaways from a panel:

  • The chief AI officer role is needed now but will be structured into the organization.
  • CAIOs revolve around a centralized approach, but will go away once AI is decentralized across an org.
  • CIOs will need to work with CAIOs for the foreseeable future on frameworks, tools, platforms and governance.
  • Enterprises with CAIOs need to balance business acumen and technical proficiency.
  • It's not a vanity title...yet.

Build vs. Buy

When it came to building AI applications, CxOs at the AI Forum were split on build vs. buy. AD Al-Ghourabi, a senior technology leader, said the right answer is to build and buy. "Buy for parity and build for differentiation," he said. "A lot of AI capabilities and LLMs are now commodities, but anything between your data and a decision is your differentiator and core."

In recent years, buying from a big vendor offered more predictability than build and best-of-breed approaches. AI has changed that equation. You can prototype, test, build and deliver in the time it takes a large tech vendor to go through the procurement cycle.

Others argue that enterprises should buy and push their vendors to innovate. There's a huge gap between prototype and production.

Tracey Cesen, Founder & CEO of Forever Human.ai, said the problem with building is that "software is a living, breathing organism so it requires care and feeding." That care and feeding also means you continuously question whether it should have been built.

In the end, the buy vs. build debate boils down to flexibility. Don't get locked into an "ERP data prison," a single cloud delivery model or one AI vendor. Enterprise buyers need to take a portfolio approach and acquire components that enable them to stay flexible and experiment. Also tier vendors based on their approaches and business priorities.

Nicolai Wadstrom, Partner, Co-Head of Ares Venture Capital and AI Innovation Groups at Ares Management, said the buy vs. build debate really comes down to doing both.

"Build vs buy is wrong because you need to build, partner and buy," said Wadstrom. "You want to buy things commoditized. You want to build things where you can pour in proprietary knowledge and build a moat. And when you need higher skills you partner. The complacent thinking of a traditional CIO approach where I buy someone else's technology roadmap is over. You're not going to be competitive. Understand the drivers and competitors, define your problem, opportunity landscape and drive a technology roadmap."

On-premises still matters

CxOs noted that the conversation around AI often includes an assumption of cloud computing. The reality is that on-premises may drive more returns as inference becomes the main AI workload.

"Don't discard on-premises. A lot of people are doing AI on-premises and focused on it," said one CxO.

Why you shouldn't discount on-premises AI:

  • It's more effective for small models.
  • Cloud costs can add up.
  • On-premises AI may make more sense in terms of operations, privacy and security.
  • Edge use cases are likely to become a bigger part of the AI landscape.
  • There's a continuum of AI deployment models.

Future of work

The future of work was hotly debated. Most CxOs agreed that corporations will use fewer employees and more AI agents and robots. The impact on education, income and society will be large.

A few moving parts to ponder:

  • New management structures will need to emerge to manage humans and digital workers.
  • Governments aren't addressing AI's impact on jobs but will need to shortly--like in the next 12 to 18 months.
  • Education will have to evolve and public institutions aren't prepared. Look for personalized education programs to emerge powered by AI.
  • Professional training will become more important than university education.
  • Some attendees noted that humans have always found new roles amid new technology trends.
  • If AI uplevels the workforce you’ll see two side effects: First, everyone will move to the median in terms of performance. Second, you’ll have a shortage of experts since few workers will actually have the 10,000 hours required to be an expert.

Looming questions

In the conversations between panels, a few looming questions surfaced. These are worth some rumination.

  • Have LLMs hit the wall? In a few talks, it was noted that LLMs have already ingested all of the publicly available human data. Synthetic data leads to degradation over time as it becomes further removed from the original.
  • Are we in an AI bubble? We covered this one before, but the worries aren’t going anywhere. See: Enterprise AI: It's all about the proprietary data | Watercooler debate: Are we in an AI bubble?
  • Is our brute force compute approach misguided? In the US, the running assumption is that trillions of dollars, data centers the size of Manhattan and millions of GPUs are the way to get to AI nirvana. However, there has to be more elegant engineering and innovation out there. Are we mired in AI factory groupthink?

 


Millennial Samurai, AI Futures, and Why Culture Still Wins | DisrupTV Ep. 413

This week on DisrupTV, we caught up with visionary leaders shaping the future:

  • George J. Chanos, Author, Speaker, and former Attorney General of Nevada
  • Brian Vellmure, Executive, Builder, Advisor, Board Member, and Investor
  • Laura Hamill, PhD, author of The Power of Culture: An Economist Edge Book

In this episode of DisrupTV, we explore the forces shaping our future — from personal empowerment and the “Millennial Samurai” mindset, to AI’s disruptive impact on labor, energy, and business models, to the critical role of culture in organizations. Our guests share visionary perspectives on where humanity is headed, the choices leaders must make, and why culture remains a defining factor for success in the 21st century.

Key Takeaways

From the discussion, here are the top actionable insights:

  • Adapting to Rapid Change: George Chanos emphasizes embracing uncertainty, learning from failure, and finding opportunity in adversity to navigate a fast-changing world.
  • AI and Labor Markets: Brian Vellmure explores how AI will reshape labor dynamics, potentially creating winner-takes-all scenarios. Organizations need to allocate resources strategically to remain competitive.
  • Intentional Culture: Laura Hamill highlights the gap between stated and actual culture, urging organizations to explicitly define values, behaviors, and expectations to create alignment and autonomy.
  • Energy and Investment: The episode also touches on investing in energy and AI sectors to address the constraints of computing power and sustainable growth.
  • Personal Empowerment: Chanos shares lessons from his career, including arguing before the U.S. Supreme Court, and emphasizes emotional intelligence and unity as critical to overcoming existential threats.
  • Future-Focused Strategies: Guests discuss tokenization, hybrid work, and the evolving enterprise software landscape, highlighting the need for adaptability and deliberate strategy.

Final Thoughts

The episode underscores that organizational culture, AI adaptation, and personal empowerment are inseparable pillars of success in the modern enterprise. Leaders must intentionally define cultural expectations, anticipate AI’s impact on labor and markets, and cultivate emotional intelligence to drive sustainable outcomes.

By embracing the Millennial Samurai mindset—strategic, adaptable, and values-driven—individuals and organizations can not only survive but thrive in a rapidly evolving technological landscape.


 


OpenAI's SaaSageddon fears need perspective

OpenAI is eyeing software as a service as it builds applications around ChatGPT; the company has to be a software-as-a-service disruptor to justify at least some part of its valuation. Enterprises should take note of OpenAI's potential role, but keep perspective.

In a blog post on Monday, OpenAI's Giancarlo Lionetti outlined how OpenAI is running on OpenAI. Anyone familiar with enterprise software knows this vendor-running-on-itself marketing pitch. The general theme is that a software vendor is its own first customer. Salesforce will highlight how Agentforce is running the company. ServiceNow has been doing ServiceNow on ServiceNow for years. Pick a vendor and there's some version of the "you are your first customer" narrative going on.

So OpenAI's move to look a bit more SaaS-y isn't surprising. The LLM players have been working on apps to surround quickly commoditizing foundational models for most of 2025. What's changed is that Wall Street has noticed, taking DocuSign and HubSpot out to the woodshed this week.

Here's what Lionetti outlined:

  • OpenAI has been able to move from pilot to production and feels your pain. "While our models improve in speed, cost, and capability, adoption rarely moves in a straight line. Deployments often outpace the change needed for organizations to leverage this technology," he said.
  • "Our GTM, product, and engineering teams study their everyday workflows, define what good looks like and deliver changes in weeks instead of quarters. We decided to focus on a few high-leverage systems with outsized impact," said Lionetti.
  • OpenAI highlighted GTM Assistant, DocuGPT, Research Assistant, Support Agent and Inbound Sales Assistant. These were viewed as Salesforce, Box, HubSpot, DocuSign and possibly ServiceNow killers. These tools aren't that different than what has been discussed in enterprise technology for months. In fact, many SaaS vendors have these ChatGPT-ish features already.
  • Enterprises will hear more about OpenAI's SaaS adventures on Oct. 6 at its developer day.

What was more notable in OpenAI's SaaS ambitions is that it has been gaining compliance certifications that enterprises will actually care about. OpenAI isn't competing with SaaS vendors for LLM interfaces as much as it is for compliant workflows and trust.

Some perspective

Remember history. Not that long ago, in the 1990s and early 2000s, a big whale would enter a market and it was widely assumed it would be successful. Who remembers SAP talking about chasing smaller enterprises? How about Microsoft and mobile?

Microsoft may be the best example. Every time Microsoft entered a market there was a storyline that the software giant was going to kill some vendor. The reality is that those smaller vendors survived and thrived most of the time.

Microsoft didn't kill Google and Android, Apple, MacOS and iOS, Amazon Web Services, Sony, Adobe, Oracle, Salesforce, SAP, Linux, Zoom, VMware or even IBM. I could go on but you get the idea.

OpenAI is an enterprise whale in valuation only. The SaaS vendors it is allegedly killing have more than just a fancy enterprise search tool. Many SaaS vendors serve as de facto workflow engines.

Let's roll a few slides:

DocuSign manages the agreement lifecycle. See: Docusign launches AI contract agents

Box manages the content lifecycle. See: Box launches Box Extract, Box Automate, Box Shield Pro

HubSpot is disruptive in its own right as it moves upmarket. See: HubSpot’s strategy: Use AI to deliver work, not software

It's quite possible you'll rip out your SaaS vendor for OpenAI, but it's not clear why you'd lock in before a few more layers land on that platform slide. You'd be better off looking at a disruptive force like Soul of the Machine that'll build you something, or ensuring that you're model agnostic. OpenAI could be an ingredient brand for SaaS vendors, but it's just one ingredient of a broader multi-model mix.

The debate isn't about why you'd leave one vendor to lock in with OpenAI. The debate is why you couldn't just build what OpenAI has done internally with a cheaper model.

Your play

Whether you believe OpenAI is your SaaS savior is up to you. But one thing is clear: You can use it.

OpenAI has given you an option for negotiations. OpenAI's timing with its SaaS play is pretty good. Why? Customers are beyond annoyed with their SaaS vendors. We hear it from CxOs all the time.

R "Ray" Wang, CEO of Constellation Research, frequently notes how CxOs tell him all the time that the two most inflationary things enterprises see are healthcare costs and their SaaS bill.

Here's how you can use this OpenAI kerfuffle to your advantage.

  • Threaten to "explore" using OpenAI as a layer to your enterprise operations. You can always use a second supplier.
  • Actually explore what OpenAI outlines in its operations and build it yourself with Anthropic, Cohere or any other LLM provider. A thin provider-agnostic layer, sketched after this list, keeps that swap cheap.
  • Evaluate your SaaS strategy as if you had a clean slate. Would you really bet on these SaaS silos--i.e. wannabe platforms--today?
  • Map your platform strategy. A member of Constellation Research's BT150 recently said that when you have a green field a vendor like ServiceNow is the choice over a series of SaaS vendors.
  • Evaluate your hyperscaler. In the end, the real impact of LLMs is going to be as a user interface and vehicle to access your data, processes and workflows quickly. AWS, Google Cloud and Microsoft Azure are all in the mix for building agents and models that traverse enterprise apps.
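
On the build-it-yourself option, the key discipline is keeping business logic behind a thin, provider-agnostic interface so the underlying model can be swapped in a line. Here is a minimal sketch, assuming the Anthropic and OpenAI Python SDKs; the interface and class names are ours, not a standard.

```python
"""Provider-agnostic LLM layer sketch. Interface names are illustrative."""
from typing import Protocol


class LLMClient(Protocol):
    def complete(self, prompt: str) -> str: ...


class AnthropicClient:
    """Adapter over the Anthropic SDK (`pip install anthropic`)."""

    def __init__(self, model: str = "claude-3-5-sonnet-20241022") -> None:
        import anthropic  # model IDs change; check the vendor's current list
        self._client = anthropic.Anthropic()
        self._model = model

    def complete(self, prompt: str) -> str:
        msg = self._client.messages.create(
            model=self._model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text


class OpenAIClient:
    """Same interface over the OpenAI SDK; swapping vendors is one line."""

    def __init__(self, model: str = "gpt-4o-mini") -> None:
        from openai import OpenAI
        self._client = OpenAI()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


def summarize(llm: LLMClient, text: str) -> str:
    # Business logic depends only on the interface, never on a vendor SDK.
    return llm.complete(f"Summarize for an executive brief:\n{text}")
```

That indirection is what turns "we could move to Anthropic or Cohere" from a negotiating bluff into a priced option.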

In the end, OpenAI hasn't even earned the benefit of delivering FUD because it hasn't done anything yet. Nevertheless, OpenAI just gave you a bit of leverage. Use it.


Abrigo: How a ‘lift and shine’ migration to AWS set software vendor up for AI

When Abrigo, which provides compliance, credit risk, and lending software for financial institutions, launched a series of new capabilities in its Abrigo AI suite in September this year, the effort was more than a product launch. The additions were the payoff in product velocity from a broader cloud and data transformation.

For Ravi Nemalikanti, chief product and technology officer at Abrigo, the launch of AskAbrigo, Abrigo Lending Assistant, Abrigo’s anti–money laundering assistant, and Abrigo Allowance Narrative Generator highlighted how Abrigo could move much faster than it could two years ago.

“There’s no way we could have been in a position to launch those products back in our previous data center–centric world,” Nemalikanti said. “It was definitely a transformation around cloud, data, customer experiences, and resilience that made this possible.”

Abrigo’s new artificial intelligence (AI) products went from concept to production in six months, three times faster than the process previously would have taken.


And the stakes are high. Nemalikanti said Abrigo is a critical software provider to banks with less than $20 billion in assets as well as credit unions. “If you’re looking at a $15 billion bank or a $1 billion credit union, they don’t have a way to stay ahead of what’s happening or even keep pace,” Nemalikanti explained. “They look to us as an innovation partner. We needed to transform ourselves to be able to help power the transformation for our banks and credit unions.”

In keeping with the trajectory of enterprises such as Intuit and Rocket, taking advantage of the latest in AI required Abrigo to complete foundational steps in the years prior. First, there’s a move to the cloud. Then there’s the data transformation. And if those two foundational elements are lined up, adopting AI at scale is more feasible.

Abrigo decided to move to the cloud in 2022 and then held a bake-off between the big three hyperscalers. Abrigo, a Microsoft shop, decided to go with Amazon Web Services (AWS), in part because the company didn’t want to be tied to software licenses and wanted to use open source technologies, Nemalikanti said. Abrigo, which caters to a heavily regulated industry, has noted that AWS’ approach to building security into development and deployment processes was also a big factor.

Nemalikanti noted that the move to AWS was partly about cost savings, but the real win was velocity and the cultural transformation involved with operating in the cloud. He said that previously product teams would develop software and throw it over to the data center ops team. With the cloud, the approach to software development is more holistic. “Shifting to the SRE [site reliability engineering] mindset across the organization was critical to cultural change,” Nemalikanti said. “Now, if you build it, you own it and run it.”

Using AWS partner Cornerstone Consulting Group, Abrigo moved 100% of its workloads to AWS in 13 months.

‘Lift and Shine’

Speaking at AWS re:Invent 2024, Abrigo led a session walking through its cloud transformation. The pre-AWS environment was built around colocated data centers that came with $7.5 million a year in capital costs.

Here’s a look at Abrigo’s pre-AWS environment:

  • All software-as-a-service (SaaS) servers were hosted out of two geographically diverse colocated data centers: one served as the primary site, with the other handling disaster recovery and internal development.
  • Abrigo had about 1,500 virtual servers with 5PB of storage. About 90% of Abrigo’s infrastructure was built on Microsoft’s stack including Windows, SQL Server Standard and Enterprise, IIS App Server, .NET Framework, and .NET Core.
  • The vendor had more than 50 unique hosted SaaS applications.

Jason Perlewitz, VP of Cloud Operations at Abrigo, said the company was looking to migrate to AWS quickly so it could innovate faster with AI in the future. Speaking at AWS re:Invent 2024, Perlewitz said the goal was to create a foundation for infrastructure, product, and database modernization at lower costs.


The challenge was delivering the cloud migration in fewer than 16 months when dealing with 50 unique applications, a lack of data hygiene, tech debt, and strict downtime requirements to minimize customer impact.

“We thought in the long run we could save money by operating in the cloud,” Perlewitz said. “We wanted our infrastructure cost reductions to be at least 20%, and we thought more than that was possible once we started to operate efficiently. We also wanted to tie our cloud spend to the growth of our business.”

Other goals for the cloud migration included:

  • Reducing incident resolution time by at least 20%
  • Reducing product deployment time by 25%, with a 30% increase in deployment frequency
  • Planning ahead for enduring impact. Abrigo spent the first three months setting up architecture, defining data-tagging strategy, and upskilling teams.

“We wanted to free up our smart people to do smart things. We want to innovate,” Perlewitz said. “That’s where we get value. We want to see time spent on growth activities.”

To meet those cloud migration priorities, Perlewitz said, Abrigo deployed AWS Professional Services to build fit-for-purpose landing zones and security architecture and invested in training.

Overall, Perlewitz said Abrigo didn’t want to simply migrate but wanted instead to take a “lift and shine” approach that included copying existing virtual machines with the AWS Application Migration Service (MGN), making small changes with outsized benefits, and cutting unnecessary environments and data. AWS Managed Services was used for additional operational support.

Abrigo said lift and shine included the following moves:

  • Consolidating Windows versions before migration
  • Eliminating environments and data that weren’t needed
  • Syncing data stores
  • Standardizing engineering tasks
  • Consolidating disaster recovery instances

Perlewitz said training was a big part of the migration mix. “We wanted to equip our teams to be functionally literate in the cloud and improve our own internal capabilities,” he said. “We want to innovate and adopt new technologies more quickly.”

Abrigo hit its goals for the migration and then some. Here’s a look (all figures compared with the year prior):

  • The migration was completed in 13 months—three months ahead of schedule.
  • Mean time to recover has decreased 63%.
  • Customer instance incidents have fallen 72%.
  • Infrastructure costs run at 3.65% of Abrigo’s recurring revenue, down from 5% when the company operated its own data centers.
  • Application performance improved 15% to 30% on average.
  • Time to market for Abrigo’s cloud applications is 70% faster than before.
  • Technical debt was reduced by 50%.


Ongoing Optimization

Phil Schoon, senior software architect at Abrigo, said the cloud migration provided many more options for application development as well as optimization challenges.

Schoon said Abrigo developers were excited about the various services from AWS that were now at their disposal. The catch is that those services can add up. “It’s very easy to move a monolithic architecture and deploy it, but as it grows it starts to get expensive,” Schoon said.

For starters, Schoon explained, Abrigo prioritized working on areas that weren’t directly tied to features that affected customers. In addition, Abrigo’s team needed to figure out how to use AWS services and then get better at using them.

Schoon said a big focus for Abrigo is container efficiency, where applications were simplified with partner Cornerstone and AWS.

Nayan Karumuri, senior solutions architect at AWS, said at re:Invent that it’s common for customers to need to optimize after a migration. “The initial challenge is that there’s a bubble cost in the beginning, and that’s mainly due to resource inefficiencies,” Karumuri said. “When you’re looking at 1,500 applications migrating to the cloud, some instances were over-provisioned to avoid performance degradations and provide a good user experience.”

Karumuri said Abrigo switched to autoscaling instances and reserved capacity models. The ability to right-size services also required a learning curve.

Here’s a look at some of the optimization changes:

  • .NET applications were moved from Windows to Linux environments.
  • Red Hat Enterprise Linux was transitioned to Amazon Linux for native integration with cloud-native services and the ability to use spot instances wherever possible.
  • Compute instances were right-sized, with more instances moved to AWS’ custom Graviton chip.
  • Amazon CloudWatch was used to monitor and trigger AWS Lambda functions.
  • AWS Cost Optimizer was also used to manage ongoing costs.
  • Abrigo moved commercial databases to AWS where possible.


By the time Abrigo outlined the project at re:Invent, the company’s optimization efforts yielded the following:

  • $1 million in disaster recovery savings due to a reduced EC2 footprint
  • $1.3 million in savings from modernizing databases to Aurora PostgreSQL
  • 80% Babelfish development cost savings
  • $140,000 in cost savings from right-sizing EC2 instances
  • $250,000 in savings for optimizing storage
  • 30% processor performance uplift

That list isn’t everything, but it gave Abrigo a good base to move forward. Nemalikanti noted that the optimization continues on an ongoing basis.

What’s Next?

Nemalikanti said everything from application performance (up 20% to 30% on average) to product release cadence and reporting has been sped up with AWS.

According to Nemalikanti, Abrigo’s AI strategy is to bring agentic AI features to customers and give them secure access to the latest models.

“Most of our customers don’t have access to multiple foundational models, and there’s some trepidation,” Nemalikanti said. “What we’ve done is extend the trust our customers have in us to AI.”

Abrigo is also looking to solve for the most critical use cases within smaller banks. For instance, AskAbrigo can pull from multiple policy documents to give tellers the ability to make decisions quickly on questions about cashing a check with a temporary ID or another issue. “We can show them the source so there are no hallucinations,” Nemalikanti said.

Using AWS, Abrigo has set customer banks up with their own instances and data stores. As for the models, Abrigo picks whichever model best fits a specific use case, including Amazon Nova and Anthropic’s Claude. “Our AI strategy is simple: Take the five most critical things that matter to customers and launch solutions at a high velocity. We know where the productivity for our customers is lost, and we’re embedding AI in exactly those areas,” Nemalikanti said.
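
That model-per-use-case approach is easy to picture when every model sits behind one API. Here is a minimal sketch of the routing idea using boto3's Converse API on Amazon Bedrock; the model IDs and the routing table are illustrative assumptions, not Abrigo's actual configuration.

```python
"""Per-use-case model routing sketch (Amazon Bedrock Converse API).

Assumes AWS credentials with Bedrock model access. The model IDs and the
routing table are illustrative, not Abrigo's configuration.
"""
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Route each use case to whichever model fits it best on cost and quality.
MODEL_BY_USE_CASE = {
    "teller_policy_qa": "amazon.nova-lite-v1:0",  # fast, cheap lookups
    "allowance_narrative": "anthropic.claude-3-5-sonnet-20240620-v1:0",
}


def ask(use_case: str, prompt: str) -> str:
    resp = bedrock.converse(
        modelId=MODEL_BY_USE_CASE[use_case],
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]


print(ask("teller_policy_qa", "Can a teller cash a check on a temporary ID?"))
```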

Nemalikanti’s team does thousands of interviews with customers each year, and those interviews will determine where Abrigo uses AI. He said Abrigo plans to leverage agentic AI, but that it doesn’t work for every use case—especially when there’s a deterministic workflow. “We do think there are real opportunities, but we’re just not going to follow the hype,” Nemalikanti said. “We will look at real business processes holistically, such as loan origination, documentation reviews, and underwriting.”

Data to Decisions Next-Generation Customer Experience Tech Optimization amazon Chief Executive Officer Chief Information Officer

Snowflake launches Cortex AI for Financial Services, MCP Server

Snowflake launched Snowflake Cortex AI for Financial Services, a suite designed to connect AI models to financial data and apps via model context protocol (MCP).

The move highlights how industries are increasingly building enterprise AI plans around proprietary and industry-specific data.

Snowflake said the linchpin of the financial services offering is the company's new MCP Server, which connects data from the likes of MSCI, Nasdaq, AP and eVestment with agents built on Anthropic, CrewAI, Cursor, Cognition and Windsurf.

The company said Snowflake MCP Server is in public preview. Snowflake MCP Server can connect to platform tools such as Cortex Analyst and Cortex Search as well as external third-party tools and data.

Snowflake said MCP Server can connect to Anthropic, Augment Code, Amazon Bedrock AgentCore, CrewAI, Cursor, Devin by Cognition, Glean, Mistral, UiPath, Windsurf, Workday, and Writer.
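
For a sense of what consuming such a server looks like from the agent side, here is a minimal sketch using the reference MCP Python SDK (`pip install mcp`). The launch command, tool name and arguments are placeholders of our own; Snowflake's preview documentation defines the real ones.

```python
"""Generic MCP client sketch. The server command, tool names and arguments
below are hypothetical placeholders, not Snowflake's actual interface."""
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Hypothetical command that launches a local MCP server process.
    server = StdioServerParameters(command="snowflake-mcp", args=["--profile", "dev"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover whatever tools the server exposes (Cortex Analyst,
            # Cortex Search, partner data feeds and so on).
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # Invoke one tool; the name and arguments are illustrative.
            result = await session.call_tool(
                "cortex_search", arguments={"query": "MSCI index methodology"}
            )
            print(result.content)


asyncio.run(main())
```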

The data platform has been focusing on financial services data for years. In 2021, BlackRock and Snowflake partnered on Aladdin Data Cloud.

Key points about Cortex AI for Financial Services:

  • Cortex AI for Financial Services connects a bevy of data sources via Cortex Knowledge Extensions to round out market analysis, research, business content and news.
  • Machine learning workflows for risk modeling, forecasting, analytics and compliance are available in Cortex AI for Financial Services. Snowflake Data Science Agent can clean data, engineer features, and prototype and validate models.
  • Unstructured data analysis is available via Cortex AISQL to pull insights from documents and images.

Conga will buy PROS B2B unit

Conga said it will acquire the B2B business of PROS Holdings, which is being acquired by Thoma Bravo. Conga is also a Thoma Bravo portfolio company.

Thoma Bravo said last month it will buy PROS for $1.4 billion.

The move for Conga to acquire the PROS B2B business resolves one of the big questions of the deal: Thoma Bravo will own both Conga and PROS, and the two companies compete in certain areas. Terms of the PROS B2B deal weren't disclosed.

When Thoma Bravo closes the PROS deal, the B2B business will move to Conga. The acquisitions should be completed in the first quarter of 2026. With the combination of PROS B2B and Conga, the plan is to offer a complete suite for revenue management and configure, price, quote (CPQ) as well as contract lifecycle management.

Conga CEO Dave Osborne said the addition of PROS B2B will mean enterprises won't have to "stitch together multiple point solutions across their revenue lifecycle." Osborne will remain CEO of Conga after the deal closes.

The combined company plans to use AI to automate pricing, quoting and contracting to optimize revenue processes, drive insights and execute post-quote.

 

 


UiPath adds agentic AI features to automation platform, expands partnerships

UiPath expanded its UiPath Platform, which is aimed at agentic AI automation and orchestration, and lined up a bevy of partners including OpenAI, Google, Microsoft, Nvidia and Snowflake as it solidified its integration strategy.

The moves by UiPath highlight how AI agents and process automation are starting to converge.

At UiPath's Fusion conference, the company outlined a series of additions to its platform. UiPath announced the following:

  • UiPath Maestro Case Management with pre-built orchestration for claims, loans and disputes for modeling, management and optimization.
  • New UiPath Maestro Process Apps designed for new processes across multiple industries.
  • UiPath Solutions, which combines agents, workflow automation and orchestration. UiPath Solutions include end-to-end processes for financial services, healthcare, customer service and retail.
  • UiPath Studio gets UiPath Agents, which integrates agents across development. UiPath's AI Agent Builder has a new visual UI for debugging, optimization and reusable templates. UiPath's conversational agents extend into multiple collaboration apps.
  • The company also added new features to UiPath IXP document processing and additions to UiPath Test Cloud.

UiPath's big news revolved around its partnerships and integrations with data platforms and models. UiPath said it will integrate its platform with OpenAI's ChatGPT via a connector that brings frontier models into workflows.

Key points about the UiPath-OpenAI partnership:

  • OpenAI models and APIs will be integrated into UiPath's enterprise orchestration tools.
  • The companies will create a benchmark for using models in agentic automation to evaluate multiple offerings.
  • UiPath Maestro will orchestrate UiPath, OpenAI and third-party AI agents in business processes via large action models.
  • UiPath will be integrated with ChatGPT via model context protocol (MCP).

UiPath also said its Conversational Agent with voice interaction will be powered by Google Gemini models. The move puts Gemini into business processes without coding and manual efforts.

According to UiPath, customers will be able to leverage Google Cloud Vertex AI to trigger, build and manage automation through natural language.

The company also announced a partnership with Nvidia. Key details include:

  • UiPath will integrate Nvidia Nemotron models and Nvidia NIM microservices into its platform via connectors.
  • The companies will look to broaden agent orchestration and usage for Nvidia Nemotron models.

The partnership with Snowflake will combine UiPath's automation platform with Snowflake Cortex AI. The combination puts together AI agent orchestration and UiPath Maestro with Snowflake's data platform.

Snowflake's Cortex Agents will be integrated into UiPath so enterprises can leverage data and build agents for workflows.

UiPath also announced a deal to integrate with Microsoft AI Foundry in a move that will bring its orchestration platform to Microsoft customers across multiple industries. Via MCP, UiPath agents will have bi-directional integrations with Microsoft Copilot and Copilot Studio and will be able to interact with Microsoft agents and models.

 

 


Soul of the Machine: AI proof of concepts a waste of time

Sunil Karkera, Founder of Soul of the Machine, is leveraging agentic AI to outpace much larger companies. "We solve boring problems and it's exciting," said Karkera.

Soul of the Machine has migrated SAP in 90 days and implemented a voice-based, LLM-augmented factory and production planning system in days. Karkera's services are completely agentic, with engineers doing the work up front.

"Everybody is in the US and we are forward deployed engineers. We work directly with the customers. Engineers, strategists and designers are totally vertically integrated," said Karkera, speaking at Constellation Research’s AI Forum in Washington DC.

Karkera said AI has flattened the services model. He also doesn't believe in proofs of concept and pilots. Prototypes can be created in that first customer meeting and can rapidly go to production. "We are using an entirely end-to-end AI toolchain," said Karkera. "Vibe coding is about 10% to 20% in the prototyping phase. Then it's basically deep architecture. Engineering AI is really hard because most of the work is context engineering and it's not straightforward."


In other words, it's hard to keep it simple. Karkera said Soul of the Machine tries to avoid multi-agent orchestration to keep tools limited. "Once you use more than three tools, it goes all over the place. Ideally, it's one tool per agent," said Karkera. "If we do multi agent orchestration we do it handmade. There's no choice at this point."

According to Karkera, enterprises are going down the wrong route with proof of concepts.

"We have a rule that we don't do any POCs. We have left money on the table by saying no to POCs, because we want to embrace the problem and do it all the way, rather than explain how hard it is. One cultural thing is to go after a full problem, segment that problem, solve it all the way and put it in production. Don't dwell on fancy problems to solve instead of the real problems with ROI."

 


Google DeepMind’s Danenberg on emerging LLM trends to watch

Peter Danenberg is a senior software engineer at Google's DeepMind, leads rapid prototyping for Gemini and has to think through more than a few big ideas.

Speaking at Constellation Research's AI Forum, Danenberg spoke with R "Ray" Wang about emerging trends in AI and the looming questions ahead. Here's a look at the high-level topics in a space that evolves almost hourly.

More from AI Forum DC: For AI agents to work, focus on business outcomes, ROI not technology

Ambient LLMs. To use LLMs today, you break out your phone or laptop and often break your flow. The future could be an ambient companion that sits there and sees what you see and hear. Danenberg said he wasn't sure where he sits on the ambient LLM spectrum, noting that it could be creepy, but there are advantages to an assistant that wouldn't break your creative flow. "It's an interesting question," he said. "There's an idea of a companion that's there and you're not aware of it until you need it."

Use cases. Danenberg said that there has been a shift in companies about how they are using foundational models from reluctance to adoption. Companies are focusing on low hanging fruit for use cases, but these add up. "Anything where you need to extract structured data from unstructured data is beautiful low hanging fruit you can get started with," he said.

Constellations of smaller models emerge. Danenberg said one trend to note is that there are startups focused on smaller models that do one thing well and then become parts of constellations of LLMs that solve problems.

Don't forget the classics. Danenberg said there's a renaissance in AI thinking that's "going back to classic ML (machine learning)." The trend is still developing, but researchers are rediscovering 1960s AI, symbolic reasoning and ontologies. In this world, "LLMs are just becoming a universal interface over small models and classic ML," said Danenberg. "I wonder if, to a certain extent, the LLM sweet spot is really as a user interface of these classical models that can achieve something with 100% accuracy with its own specific event. That's going to be an interesting idea."
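
To make that sweet spot concrete, here is a minimal sketch of the pattern: a deterministic scikit-learn model does the predicting, and an LLM agent's only job would be translating natural language into the function call. The iris example and tool name are ours, purely illustrative.

```python
"""Sketch of 'LLM as a user interface over classic ML'. The example task
and function name are illustrative, not from Danenberg's talk."""
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
clf = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)


def classify_iris(sepal_len: float, sepal_wid: float,
                  petal_len: float, petal_wid: float) -> str:
    """Deterministic tool an LLM agent could call; no generation involved."""
    pred = clf.predict([[sepal_len, sepal_wid, petal_len, petal_wid]])[0]
    return iris.target_names[pred]


# An agent framework would register classify_iris as a tool and let the LLM
# translate "what species has 5.1cm sepals..." into this exact call.
print(classify_iris(5.1, 3.5, 1.4, 0.2))
```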

The importance of 10,000 hours. The effect of LLMs on human intelligence is an ongoing debate and concern. Danenberg said one impact to ponder is the 10,000 hours rule. Humans put 10,000 hours into something, gain domain knowledge and expertise, and then develop a bullshit detector to distinguish between fact and fiction. "The big question is, in the age of LLMs, are we still going to be able to put in the 10,000 hours to develop these reality detection systems?" said Danenberg. "Going forward, that's going to be an interesting question in terms of the generation coming of age."

Virality of Nano Banana, Google's AI image editor. Danenberg said the combination of Gemini 2.5 and Nano Banana led to a viral moment for Google that was unpredictable. "With this virality thing, you can't force it, but I am just glad we had a moment," said Danenberg.
