Supernova 2025: How Jeitto uses alternative data in Brazil for credit decisions

According to Alex Franco, Chief Risk Officer and Chief Technology Officer at Jeitto, artificial intelligence and alternative data can democratize access to credit at a faster pace.

Franco, a Supernova Award 2025 finalist, recently outlined Jeitto's mission and approach to technology. Jeitto primarily operates in Brazil and provides credit to more than 11 million customers. Many of those customers are underserved by banks.

Jeitto has automated decision workflows through its mobile app and AI-led processes. Since deploying Provenir’s AI Decisioning Platform, Jeitto has seen a 20% reduction in defaults on the Jeitto Loan, grown its customer base and recorded a 10% increase in its credit approval rate. Jeitto's credit assessment cycle time has moved from 1.5 minutes per application to 30 seconds.

We caught up with Franco to talk shop. Here's a look at the key themes.

Target market. Franco said Jeitto is focused on Brazil's underserved population "that earns low income, that has a lot of difficulty to get access to credit in Brazil."

Alternative data. Jeitto leverages AI and alternative data for credit decisions. Franco said: "Since 2014 Jeitto has been working on AI and alternative data to provide credit to the population in Brazil." Key data points include:

  • Cell phone data, a key factor in credit decisions.
  • Fiscal data retrieved through the customer's fiscal ID in Brazil.
  • Customer behavior data.
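
The data points above can be sketched as inputs to a simple scoring function. This is an illustrative toy only: the feature names, weights, and scale are hypothetical, and Jeitto's actual models are proprietary.

```python
# Hypothetical sketch of blending alternative-data signals into one score.
# All feature names and weights are invented for illustration.

def credit_score(features: dict) -> float:
    """Combine alternative-data signals into a 0-1000 score."""
    weights = {
        "phone_tenure_months": 2.0,     # longer cell phone history -> lower risk
        "fiscal_id_clean": 300.0,       # no issues on the fiscal ID record
        "on_time_payment_ratio": 400.0, # behavior data from prior products
    }
    score = 0.0
    # Cap phone tenure at 60 months so one signal can't dominate the score.
    score += min(features.get("phone_tenure_months", 0), 60) * weights["phone_tenure_months"]
    score += weights["fiscal_id_clean"] * (1.0 if features.get("fiscal_id_clean") else 0.0)
    score += weights["on_time_payment_ratio"] * features.get("on_time_payment_ratio", 0.0)
    return min(score, 1000.0)

applicant = {"phone_tenure_months": 36, "fiscal_id_clean": True, "on_time_payment_ratio": 0.9}
print(credit_score(applicant))  # -> 732.0
```

In practice such signals would feed a trained model rather than hand-set weights, but the shape of the input is the same.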

Strategic expansion. Franco said Jeitto has 13 million consumers in its database and is now looking to add new services and loans to become a broader financial platform.

Technology strategy. Franco said Jeitto's stack revolves around combining best-in-class platforms to leverage AI and intelligence. Core functions are those technologies that affect the customer experience and long-term value.

Jeitto's platform connects fraud scores, identity checks and device validation, integrating multiple layers of fraud detection into decisioning workflows to mitigate threats at application screening, including synthetic fraud, impersonation and mule indicators. This eliminates siloed environments between credit and fraud risk teams, to ensure holistic, end-to-end decisioning with a complete view of customers across the entire lifecycle.
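
The layered screening described above can be sketched as a single decisioning pass. The check names and thresholds below are invented for illustration and are not Jeitto's or Provenir's actual API; the point is that every layer's flags land in one shared view rather than in separate credit and fraud silos.

```python
# Illustrative sketch of layered fraud screening in one decisioning pass.
# Thresholds and field names are hypothetical.

def screen_application(app: dict) -> tuple[bool, list]:
    """Run all fraud layers and collect every flag rather than
    short-circuiting, so credit and fraud teams share one complete view."""
    flags = []
    if app.get("fraud_score", 0) > 0.8:                      # fraud score layer
        flags.append("high_fraud_score")
    if not app.get("identity_verified", False):              # identity check layer
        flags.append("identity_check_failed")
    if app.get("device_seen_on_other_accounts", 0) > 3:      # device validation layer
        flags.append("possible_mule_device")
    return (len(flags) == 0, flags)

approved, flags = screen_application(
    {"fraud_score": 0.2, "identity_verified": True, "device_seen_on_other_accounts": 0}
)
print(approved, flags)  # -> True []
```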

Microsoft 365 launches role-based Copilots

Microsoft announced a set of Copilots for sales, service and finance in a move that brings role-based assistants to Microsoft 365 Copilot.

The role-based Copilots will be available in preview for Microsoft 365 Copilot customers in October via the Copilot Agent Store.

Microsoft's offering is the latest in a trend of copilots and AI agents aimed at specific roles and processes.

With Copilot for Sales, sellers can leverage AI within their usual productivity tools. Ditto for service pros and finance teams. Directions on Microsoft analyst Mary Jo Foley noted that the role-based Copilots are part of a rebrand.

In a blog post, Microsoft pitched "Frontier Firms" that put AI at the center of customer experience, productivity and processes.

Constellation Research CEO R "Ray" Wang argued in a research report that enterprises need to rethink old models to enable AI and shed tech debt. Wang also recently examined AI exponentials and their potential.

For these role-based Copilots to work, Microsoft is connecting them to outside systems. For sales, Copilot connects to Microsoft's Dynamics 365 as well as Salesforce and other CRM systems. Finance connects to Dynamics 365 as well as SAP and other ERP systems.

Constellation Research analyst Holger Mueller said:

"For the first wave of agents to keep a role focus needs to be on a valid True North. And efficiency gains from agents will benefit any role. The prize for enterprises though is not in efficiency - but effectiveness: Doing the right thing is the key design challenge in the agentic era. Agents are not bound to human roles and unleashing the next level of enterprise effectiveness is really the game enterprises must play and win. Role based agents can only be the start. Otherwise role-based agents can further cement up the 'efficiency trap' (which is missing the effectiveness exit)."

Hubspot Innovation, Graph Databases, Fall Conferences | ConstellationTV Episode 113

This week on ConstellationTV episode 113,  co-hosts Liz Miller and Holger Mueller kick things off with the latest enterprise technology news from SAP, HPE, and HubSpot, diving into how #AI is driving real business value across sectors—from marketing to customer service and beyond.

Next, guest analyst Mike Ni joined Holger Mueller live from New York at the Neo4J GraphSummit 2025 to unpack the growing importance of graph databases in AI and decision automation. They discussed new innovations, scalability breakthroughs, and real-world use cases that are shaping the future of explainable AI.

It’s that time of year! Holger & Liz wrap up by previewing the whirlwind of fall #tech events, analyst summits, and conferences—sharing our roadshow plans and what we expect to hear (beyond just “AI, AI, AI!”).

Catch the full episode for deep dives, expert insights, and a few laughs along the way! 

00:00 - Introduction
00:06 - Enterprise Technology News
13:05 - LIVE from GraphSummit2025
20:00 - Silly Season Update

ServiceNow launches Zurich release, inserts process, task mining into AI agent workflows

ServiceNow launched the Zurich release of its platform with tools to build AI apps and agents more easily, attach identities to digital workers and integrate process and task mining into agentic workflows.

With ServiceNow's previous releases--Yokohama and Xanadu--the company began rolling out AI agents throughout its platform. The Zurich release is aimed at providing more agentic AI tools to developers and scaling agent automation.

In addition, ServiceNow is bringing process mining and task mining to that agentic AI pipeline. As previously noted, process is often overlooked in agentic AI plans and that's a strategic mistake for enterprises.

ServiceNow acquired UltimateSuite in late 2023 to build out its native process and task mining capabilities.

Kush Panchbhai, Senior Vice President of AI Platform, said Zurich is an effort to give customers the tools to "pivot from legacy automations to proactive automations across all of the workspace" and across structured and unstructured data.

Process optimization is also critical. Zurich will include task mining to observe what humans are actually doing in an enterprise. Combined with process mining, Panchbhai said Zurich brings the ability to optimize "how work is flowing in an enterprise."

The ability to identify inefficiencies via process and task mining is critical because those insights "are then food for our AI agents" to automate with streamlined processes, he said.

"You want to make sure that all the inefficiencies the customers have today can be solved by agentic scenarios. That's why we are launching agentic playbooks," said Panchbhai.

To Panchbhai, process mining is "step zero" to building AI agents. ServiceNow has more than 20 years’ worth of workflow and process automation data that developers can use to create adaptive AI agents. He said:

"When you use these mining capabilities, you essentially find a lot of bottlenecks, and it can take days to get from one step to another step. Most of the time, process mining doesn't reveal an optimized map. It's like spaghetti when we run process mining across different use cases. We analyzed all that spaghetti and returned recommendations so we can give you tools to fix inefficiencies with a click of a button. That's why I think about this as step zero in building an agent."
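
The "spaghetti" Panchbhai describes can be made concrete with a toy example. At its core, process mining builds a directly-follows graph from an event log and surfaces the slowest hand-offs. The event data and step names below are invented; ServiceNow's implementation is far richer.

```python
# Minimal illustration of process mining mechanics: derive a
# directly-follows graph from an event log and find the bottleneck edge.
from collections import defaultdict

# (case_id, step, timestamp in hours) -- hypothetical ticket data
events = [
    (1, "open", 0), (1, "triage", 1), (1, "approve", 49), (1, "close", 50),
    (2, "open", 0), (2, "triage", 2), (2, "approve", 74), (2, "close", 75),
]

by_case = defaultdict(list)
for case, step, t in events:
    by_case[case].append((t, step))

# Record the wait time on every observed step-to-step transition.
transitions = defaultdict(list)
for steps in by_case.values():
    steps.sort()
    for (t1, a), (t2, b) in zip(steps, steps[1:]):
        transitions[(a, b)].append(t2 - t1)

# The edge with the largest average wait is the bottleneck an agent
# (or an agentic playbook) would target first.
bottleneck = max(transitions, key=lambda e: sum(transitions[e]) / len(transitions[e]))
print(bottleneck)  # -> ('triage', 'approve')
```

Real process mining adds conformance checking, variant analysis and visualization on top, but the bottleneck-finding step is essentially this.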

Developer tools

Jithin Bhasker, Group Vice President and GM of Creator Workflows and App Engine at ServiceNow, said the Zurich additions recognize that "in the next two years, a third of every application will be refactored and rewritten to be able to support data and AI readiness."

Bhasker said that once applications are revamped for agentic AI, workflows will be optimized and tweaked using no-code and low-code tools. "This will effectively drive a massive amount of customer AI applications and agents being built," said Bhasker, who added that software will be built with natural language and simple prompts.

Add it up and ServiceNow is aiming to be the platform that sits between building and buying enterprise software. You buy the platform and then build AI applications and agents with governance, integration and security.

The vision for ServiceNow rhymes with the strategy for Salesforce and a bevy of other SaaS players. Give every employee a digital coworker to boost productivity.

Key enhancements to Zurich include:

  • Build Agent, which includes conversational development tools so business experts can build apps with generative AI and sandboxes. "It's all about how you automate and accelerate the entire lifecycle from idea to an app in a minute," said Bhasker.
  • Built-in process mining. Bhasker said process and task mining is baked into Build Agent to recommend process flows and automatically provide optimization insights.
  • Developer Sandbox, a dedicated environment for agentic app development.
  • Adaptive Agents, which are self-optimizing applications to deliver outcomes.
  • Machine Identity Console with governance enhancements via AI to ensure compliance.

Amanda Grady, Vice President and GM of AI Platform Security, said the upcoming proliferation of AI agents will require a machine identity console.

Grady said AI agents are a new type of identity more akin to a service account. The Machine Identity Console in Zurich will assign identities to AI agents and identify high-risk integrations and security improvements.

The Machine Identity Console includes the following:

  • Centralized visibility of all inbound API integrations.
  • The ability to identify high-risk service account identities with preventative actions.
  • Improved security recommendations with clear steps.

Grady said ServiceNow is also adding features to Vault Console to know, protect and monitor data. ServiceNow Vault includes a guided experience for sensitive data auto-classification and protection, streamlined security tools and audit and compliance management.

Hitachi Digital Services launches HARC Agents, AI agent library, management system

Hitachi Digital Services launched a library of more than 200 pre-built AI agents across industries and use cases as well as an Agent Management System that aims to provide a single pane of glass to manage multiple agentic AI platforms.

The new library and agent management system landed as Hitachi Digital Services unveiled Hitachi Application Reliability Center (HARC) Agents.

Hitachi Digital Services' launch highlights a few notable trends.

For Hitachi Digital Services, the launch is a follow-up to its May analyst day where it broadly outlined its strategy, customer use cases and its AI platform and software. Hitachi Digital Services added its Agent Library and Agent Management System to its HARC Agents platform, which also includes R202.a, a framework for defining and developing enterprise AI deployments, and HARC for AI, a set of professional and managed services to operationalize AI systems.

According to Hitachi Digital Services, the HARC Agents stack can reduce time to value for AI agents by 30%. Roger Lvin, CEO of Hitachi Digital Services, said the company is trying to address a key pain point for enterprises. "Too many technology partners are content to run pilots, chase headlines, and talk theory," said Lvin, noting that enterprises need "operationalized AI" that drives returns.

Hitachi Digital Services said its Agent Library is focused on industrial AI with vertical-specific use cases and AI for operations, engineering, analytics, security and cloud. The company added that it will add new agents continuously.

The Agent Management System (AMS) from Hitachi Digital Services is another tool worth watching. With AMS, Hitachi Digital Services is looking to manage various agentic platforms such as Microsoft Azure Copilot Studio, Google Cloud's Agentspace, Ema, Lyzr and others holistically.

Bottom line: Enterprises are being pitched multiple AI agent platforms, but are likely going to look for one neutral point to operate what's going to be a sprawling heterogenous agentic AI landscape.

Arm launches Lumex, an AI platform for devices with data center implications

Arm launched Lumex, its latest platform for on-device AI and a big bet that inference will move from the cloud to the edge in hybrid deployments.

Lumex is designed for premium mobile devices, but will also play into AI data center workloads. The more AI inferencing can be moved to the devices in your pocket, the less enterprises and developers will have to invest in cloud compute and AI infrastructure.

Chris Bergey, VP and GM of Arm's client business, said advances in large language models (LLMs) and agentic AI are making mobile devices more of a companion with high expectations for experiences.

"We have moved from AI being a parlor trick to influencing how things get done. People of all ages are using these experiences every day, embedded seamlessly into apps, devices and systems they rely on, but we have only started to see how AI will shape our future expectations," said Bergey in a briefing. Bergey also outlined Lumex in a blog post.

He added that AI is "too essential, too interactive and too valuable" to be derailed by network glitches. When instant response is the expectation, local compute matters. "AI has to move to the device. Why? Because relying on Cloud to scale isn't sustainable. It's too expensive for developers and too slow for users and too concerning for privacy," said Bergey.

Arm is targeting smartphones and consumer devices with Lumex, but remember Arm has broad data center ambitions. Bergey added that the most advanced models will stay in the cloud, while workloads that can run locally will move down to the device.

Indeed, Geraint North, Fellow of AI and Developer Platforms at Arm, said AI costs are going to matter. North said:

"One of the things with developers is that everyone is in a user acquisition phase right now before they're in the 'we've got to make this profitable' phase. Developers are going to say, 'I can't just spend all this on cloud compute resource' and ask how much they can offload? (Offloading compute) will become increasingly important once people are under pressure for profitability, which is not necessarily the case for many of the AI app developers right now."

The evolution will be that small models will run on device and advanced models will stay in the cloud. Enterprises and developers will optimize for performance and costs.
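
North's profitability point can be illustrated with a toy break-even model. Every number below is an invented placeholder, not an Arm figure: the point is only that cloud inference cost scales with usage, while on-device inference takes tokens off that bill.

```python
# Toy cost model for the cloud-vs-device offloading argument.
# All figures are hypothetical placeholders.
cloud_cost_per_1k_tokens = 0.002    # assumed $ per 1,000 tokens of cloud inference
tokens_per_user_per_day = 50_000    # assumed per-user daily token volume
users = 1_000_000

daily_cloud_bill = users * tokens_per_user_per_day / 1000 * cloud_cost_per_1k_tokens
print(f"${daily_cloud_bill:,.0f}/day")  # every token moved on-device comes off this bill
```

Under these made-up assumptions the app developer is paying about $100,000 a day for cloud inference, which is exactly the line item that shrinks as small models move onto the handset.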

Lumex tech details

Lumex represents Arm's platform strategy with an "AI-first platform design" that's built from the ground up for AI. Lumex features better performance, tighter integration and a more scalable architecture.

"We're talking about tightly integrated compute, software and tools optimized for the next generation of mobile workloads, and it's built for AI with new architectural features and optimized implementations for the best performance," said James McNiven, who leads Arm's product management team for the company's client business.

The Lumex platform also gives Arm the ability to integrate the company's technology quickly.

Key items:

  • Lumex has SME2 integration throughout. SME2 (Scalable Matrix Extension Version 2) is a hardware feature and extension of the Arm v9-A architecture that accelerates advanced operations for AI inference, HPC and other intense workloads.
  • Arm's C1 Ultra CPU provides a 25% performance gain and its C1 Pro, which is optimized for efficiency, has a 12% improvement in energy usage.
  • The Mali Ultra GPU delivers a 20% gain in performance with 9% better energy efficiency as well as 20% faster AI inferencing.
  • SME2 integration provides 5x acceleration in AI performance and 3x efficiency for AI experiences.
  • Lumex is optimized for 3nm process technologies and can be manufactured in multiple foundries.

Oracle Q1 misses, but sees OCI revenue surging over next 4 years

Oracle's first quarter earnings and revenue fell short of expectations, but remaining performance obligations growth of 359% overshadowed the results.

The company delivered first quarter earnings of $1.01 a share on revenue of $14.9 billion, up 12% from a year ago. Non-GAAP earnings in the quarter were $1.47 a share.

Wall Street was expecting Oracle to report first quarter non-GAAP earnings of $1.48 a share on revenue of $15.04 billion.

Oracle said cloud revenue overall was $7.2 billion, up 28% from a year ago. Oracle's infrastructure as a service business was $3.3 billion in the first quarter, up 55% from a year ago. Cloud application revenue (SaaS) was $3.8 billion, up 11% from a year ago.

Although the most recent quarter was mixed, Oracle's future demand looks strong. Oracle said remaining performance obligations (RPO) in the first quarter were $455 billion, up 359% from a year ago. In its SEC filing for the quarter, Oracle said:

"Remaining performance obligations were $455.3 billion as of August 31, 2025, of which we expect to recognize approximately 10% as revenues over the next twelve months, 25% over the subsequent month 13 to month 36, 34% over the subsequent month 37 to month 60 and the remainder thereafter."
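
The schedule quoted above translates into rough dollar figures as follows (percentages and the $455.3 billion base are from the filing; the arithmetic is ours):

```python
# Back-of-the-envelope conversion of Oracle's disclosed RPO recognition
# schedule into dollar amounts.
rpo = 455.3  # $ billions as of August 31, 2025

schedule = {
    "next 12 months": 0.10,
    "months 13-36": 0.25,
    "months 37-60": 0.34,
}
for period, pct in schedule.items():
    print(f"{period}: ~${rpo * pct:.0f} billion")
remainder = rpo * (1 - sum(schedule.values()))
print(f"beyond month 60: ~${remainder:.0f} billion")
```

That works out to roughly $46 billion over the next twelve months, $114 billion in years two and three, $155 billion in years four and five, and about $141 billion thereafter.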

CEO Safra Catz said Oracle signed "four multi-billion-dollar contracts with three different customers in Q1." She added:

"It was an astonishing quarter—and demand for Oracle Cloud Infrastructure continues to build. Over the next few months, we expect to sign-up several additional multi-billion-dollar customers and RPO is likely to exceed half-a-trillion dollars. The scale of our recent RPO growth enables us to make a large upward revision to the Cloud Infrastructure portion of Oracle's overall financial plan which we will be presenting in detail next month at the Financial Analyst Meeting."

Catz said Oracle is expecting Oracle Cloud Infrastructure revenue to be $18 billion this fiscal year, up 77%, and then increase to $32 billion, $73 billion, $114 billion and $144 billion over the next four years. "Most of the revenue in this 5-year forecast is already booked in our reported RPO," said Catz.

Today, Oracle's cloud revenue run rate is pushing $29 billion. For comparison, AWS has an annual revenue run rate of $124 billion compared to $50 billion for Google Cloud and $75 billion in annual sales for Microsoft Azure.

CTO Larry Ellison said multicloud database revenue from Amazon, Google and Microsoft grew 1,529% in the first quarter compared to a year ago. "We expect multicloud revenue to grow substantially every quarter for several years as we deliver another 37 datacenters to our three hyperscaler partners, for a total of 71," said Ellison.

Ellison said the company will introduce Oracle AI Database at Oracle AI World. It will allow customers to use any LLM--including Google Gemini, OpenAI's ChatGPT, xAI's Grok and others--on top of Oracle Database.

While Oracle's future demand seems secure, it's worth noting that the company's free cash flow is taking a hit as it spends heavily to expand. In the first quarter, Oracle's operating cash flow of $21.53 billion was surpassed by $27.4 billion in capital expenditures. Oracle's free cash flow was negative for the last two quarters.

Constellation Research analyst Holger Mueller said:

"Oracle finds itself with the interesting challenge that it needs to invest to deliver on the revenue it has under contract for the future. Capex is up $20 billion year over year with negative free cash flow. As long as the data center build out does not hit any snags, Oracle gets the committed spend. There will be great quarters for Oracle investors to come. The interesting aspect is that none of the Oracle competitors have gone cash flow negative in a quarter. This fiscal year will be huge for Oracle and capacity gone live will be the quarterly KPI."

As for the outlook, Catz said second quarter revenue growth will be between 14% and 16% with non-GAAP earnings between $1.61 and $1.65 a share. Fiscal 2026 capital expenditures will be about $35 billion.

Catz added that Oracle doesn't own buildings or land, but the equipment. "It's much cheaper than our competitors. We only put that equipment in when it's time, and we're generating revenue right away," she said. "It's asset pretty light. Some of our competitors like to own buildings. That's not really our specialty."

Here's what Ellison said on the conference call:

  • "There's a huge amount of demand for inferencing. All this money we're spending on training is going to have to be translated into products that are sold, which is all inferencing. And the inferencing market, again, is much larger than the training market."
  • "A lot of companies are saying we're big into AI because we're writing agents. Well, guess what? We're writing a bunch of agents too."
  • "AI is going to generate the computer programs called AI agents that will automate your sales and marketing processes. Let me repeat that. AI is going to automatically write the computer program that will then automate your sales processes and your legal processes and everything else." 
  • "We have gotten the entire Oracle Cloud, the whole thing, every feature, every function of the Oracle Cloud, down to something we can put into a handful of racks. We call it butterfly. It costs $6 million. So we can give you a private version of the Oracle Cloud with every feature, every security feature, every function, everything we do, for $6 million."
  • "We're an application company and a cloud infrastructure company, and therefore we build applications, and we'd like to be more efficient. And the way to be more efficient is to build AI application generators. We have been doing that, and the latest applications that we are building -- we're not building them, they're being generated by AI."

Google Cloud: $106 billion in RPO, says Kurian

Google Cloud CEO Thomas Kurian said the company had remaining performance obligations of $106 billion with half that sum converting to revenue in the next two years. If that RPO converts, Google Cloud revenue will be $58 billion by 2027.

Kurian was speaking at the Goldman Sachs Communacopia + Technology conference. He said Google Cloud is capturing customer wins at a faster clip with multiple ways of monetizing services. There's consumption, services and subscriptions and increasingly value-based models.

"We also monetize some of our products through value-based pricing. For example, some people use our customer service system and say, 'I want to pay for it by deflection rates that you deliver.' Some people use our creative tools to create content and say, 'I want to pay based on what conversion I'm seeing in my advertising system,'" he said.

Kurian added that Google Cloud is also successful upselling customers to consume new models and higher consumption quotas. He said:

"65% of our customers are already using our AI tools in a meaningful way. Those customers that use our AI tools typically end up using more of our products. For example, they use our data platform or our security tools. And on average, those that use our AI products use 1.5x as many products than those that are not yet using our AI tools. And that leads then customers who sign a commitment or a contract to over-attain it, meaning they spend more than they contracted for, which drives more revenue growth."

According to Kurian, Google Cloud is also focused on operating discipline to boost margins. Kurian said Google Cloud is being "super-efficient from the point of view of using our fleet and our machines so that we get capital efficiency."

Going forward, Google Cloud will continue to build out its suite of products, go-to-market team and infrastructure to become more efficient, said Kurian.

"To give you a sense of the scale, if you compare us to other hyperscalers, we are the only hyperscaler that offers our own systems and our own models, and we're not just reselling other people's stuff. The volume of tokens we process is twice other providers' in half the time. So roughly 4x the volume. We have a lot of different companies using these AI models, from companies creating digital products to using AI within their organization."

Why Microsoft's AI infrastructure deal with Nebius is savvy

AI cloud provider Nebius will provide GPU infrastructure capacity to Microsoft in a deal valued at $17.4 billion with an option to spend up to $19.4 billion.

The deal means that Microsoft will be among the largest customers of both Nebius and CoreWeave, two AI cloud specialists. In 2024, about two-thirds of CoreWeave's revenue came from Microsoft.

In an SEC filing, Nebius said GPU capacity will be provided to Microsoft through its Vineland, NJ data center over five years. Nebius will raise more capital to provide the GPU services to Microsoft as well as use cash flow from the deal. Nebius said it would raise $2 billion in debt and float more shares to fund the Microsoft deal.

Microsoft is certainly building its own data centers, but is also leasing capacity from so-called neo cloud providers. Microsoft's approach to data center capacity appears to be more hybrid between build and lease. Meta, Google and Amazon Web Services are more in the build mode. Oracle is building AI data centers at a rapid clip, but has also delivered services from within data centers run by the big three cloud hyperscale providers.

As long as demand for AI compute capacity is strong, how cloud providers are delivering GPU access won't matter. If demand dries up, Microsoft appears to be in a better position to cut capacity without being stuck with physical facilities. Of course, we all know demand for AI compute will never dry up (kidding, sort of).

In any case, Microsoft is managing its data center capacity in a way where it has options and won't be stuck holding the infrastructure bag. Microsoft in the fourth quarter said it has 400 data centers across 70 regions. In April, CEO Satya Nadella explained the company's approach:

“The reality is we've always been making adjustments to build, lease, what pace we build all through the last 10, 15 years, it's just that you all pay a lot more attention to what we do quarter-over-quarter nowadays.

Having said that, the key thing for us is to have our builds and leases be positioned for what the workload growth of the future. There's a demand part to it, there is the shape of the workload part to it, and there is a location part to it.

So you don't want to be upside down on having one big data center in one region when you have a global demand footprint. You don't want to be upside down when the shape of demand changes.

I need power in specific places so that we can either lease or build at the pace at which we want. And so that's the sort of plan that we're executing to."

That backdrop highlights why Microsoft's Nebius deal is a smart way to get the compute and power in the right place at the right time.

Key details of the Nebius deal include:

  • Either party can terminate the deal in the event of a material breach that is not remedied within 60 days. Microsoft can also terminate if Nebius misses agreed delivery dates for a GPU service and cannot provide alternative capacity.
  • Nebius has to confirm to Microsoft that it has secured additional financing to fund the infrastructure for the services.
  • The deal commences once that funding confirmation is given to Microsoft.

For Nebius, the Microsoft deal is huge. Arkady Volozh, CEO of Nebius, said the Microsoft deal complements the long-term committed contracts with AI labs and tech giants. "The economics of the deal are attractive in their own right, but, significantly, the deal will also help us to accelerate the growth of our AI cloud business even further in 2026 and beyond," said Volozh.

Indeed, Nebius' growth will accelerate from here. On Aug. 7, Nebius increased its annual run-rate revenue guidance for 2025 to $1.1 billion from $900 million. The company delivered second quarter revenue of $105.1 million, up 625% from a year ago, with net income of $584.4 million due to revaluing equity investments and a gain from discontinued operations. Nebius added that it was in the process of securing more than 1 GW of power by the end of 2026.

Constellation Research's take

Holger Mueller, an analyst at Constellation Research, said:

"This is a smart deal for both sides, as Microsoft balances out Capex needs, and Nebius is getting stable demand to build out capacity. But smart deals have potential downsides as well: Microsoft already has the most heterogeneous data center landscape (compared to its three key competitors) and is adding another level of complexity. It also has to find a way to get its [AI] software stack to at least run partially on Nebius. At some point this goes back to the Microsoft Windows DNA - build the software asset, partner with anyone including GPU clouds (include CoreWeave here). For Nebius it may be the proverbial too big a bite to chew, and have repercussions on its other clients, who certainly will have concerns of being crowded out by a tech giant. It's nothing that both Microsoft and Nebius can't handle - but these are areas to watch - for their customers and partners."

Here's a look at Nebius' footprint.

LLM giants need to build apps, ecosystems to go with the models

The race is on to put enterprise applications around large language models (LLMs) and the stakes couldn't be higher for the likes of OpenAI and Anthropic as well as other foundation model players.

And there's a good reason for the focus on applications to surround LLMs. Pricing for LLMs will tank as foundation models become commodities in a hurry. Simply put, generic LLMs are good enough for multiple enterprise use cases.

Consider that Microsoft became just the latest vendor to give the US government a sweet deal on software. Microsoft is offering the Feds its suite of productivity, cloud and AI services, including Microsoft 365 Copilot, at no cost for up to 12 months. And since Microsoft essentially resells OpenAI's models, the offer undercuts its partner and frenemy's $1-a-year deal.

Microsoft can afford to play the long game because the model (often ChatGPT) is just part of the application buffet.

This applications-meet-LLMs reality isn't lost on the LLM providers, which need to prop up heady private market valuations. Here's a look at what LLM giants are doing and a dark horse that seems to be ahead of the application curve.

OpenAI

OpenAI's acquisition of Statsig signals that the company needs to fast-track its plans to build applications around ChatGPT. OpenAI has launched Codex, one of its first applications, and now will have Statsig in the fold. Statsig's platform focuses on A/B testing, feature flagging and feedback loops that move products into production.

Statsig CEO Vijaye Raji made it clear that the company will continue to provide its services and invest in core products. Raji becomes OpenAI's CTO of applications and will report to Fidji Simo, CEO of OpenAI's applications unit.

Simo recently penned an introductory missive on OpenAI's application strategy. OpenAI CEO Sam Altman has noted that the company's enterprise business is surging, but the company's scaling plans revolve around consumer applications too. Altman seems to be cribbing a bit from Apple and a bit from Google in terms of business models, aiming for consumer scale with enterprise extensions.

OpenAI also has its ChatGPT for Business and ChatGPT for Enterprise plans, with ChatGPT for Government marking the beginning of vertical extensions. The API Platform also drives revenue.

In many respects, OpenAI is starting to follow that enterprise software playbook with targeted efforts aimed at healthcare, financial services and the public sector. The challenge will be honing the sales ground game for industries as it scales GPT Team, Enterprise, Edu and Pro plans.

Anthropic

Anthropic is now valued at $183 billion after its latest funding round. The company is also on a $5 billion annual revenue run rate.

Focus on the enterprise is fueling that growth. Anthropic is closely following the enterprise software playbook and its tight partnership with AWS is embedding its Claude model family in businesses.

For instance, Anthropic recently hired Paul Smith, alum of ServiceNow, Microsoft and Salesforce, as Chief Commercial Officer. Anthropic also launched Claude for Financial Services.

Enterprise software companies typically go horizontal and then drill down into industries. Once you land a big customer in one vertical, others often follow. Smith built out the go-to-market efforts at ServiceNow and will do the same for Anthropic.

Anthropic has specialized Claude versions for code, customer support, education, financial services and government. Claude Code is on a $500 million annual revenue run rate.

Like OpenAI, Anthropic has plans for businesses including Claude Max, Claude Team and Claude Enterprise. Anthropic has been taking steps to give Claude a collaboration and future of work spin.

Cohere as the dark horse

Cohere, a Constellation Insights underwriter, doesn't play LLM leapfrog, but has been building out applications around its models, which are enterprise focused.

Cohere North is a collaboration platform worth watching. North recently launched and is focused on enterprise productivity. There's also a version of North for Banking.

Meanwhile, Cohere Compass is an enterprise search and discovery system designed to surface business insights. Cohere's models and associated tools are focused on enterprise pain points such as search quality and multimodal retrieval.

The company is focused on financial services, healthcare and life sciences, manufacturing, utilities and public sector verticals.

Cohere recently raised $500 million and hired a Chief AI Officer and CFO. Cohere said the funding will be used to accelerate agentic AI use cases in businesses and governments primarily through North. The company added that strategic partners such as Oracle, Dell, RBC, Bell, Fujitsu, LG CNS and SAP are using North as a platform.

Bottom line

For LLM players to even think about growing into their valuations, applications and developer ecosystems need to be scaled.

It appears that the foundation model players will crib some business inspiration from enterprise software providers. However, enterprise software giants have the sales ground games, entrenched technologies and corporate data stores. And if LLMs (and smaller, more focused models) are commodities, then the value will be in orchestration and automation, potentially through AI agents.

The future of enterprise software is being rewritten, but the business models underneath are likely to look very familiar.
