
Arm launches Lumex, an AI platform for devices with data center implications


Arm launches Lumex, its latest platform for on-device AI and a big bet that inference will move from the cloud to the edge in hybrid deployments.

Lumex is designed for premium mobile devices, but will also play into AI data center workloads. The more AI inferencing can be moved to the devices in your pocket, the less enterprises and developers will have to invest in cloud compute and AI infrastructure.

Chris Bergey, VP and GM of Arm's client business, said advances in large language models (LLMs) and agentic AI are turning mobile devices into companions that carry high expectations for experiences.

"We have moved from Ai being a parlor trick to influencing how things get done. People of all ages are using these experiences every day, embedded seamlessly into apps, devices and systems they rely on, but we have only started to see how AI will shape our future expectations," said Bergey in a briefing. Bergey also outlined Lumex in a blog post.

He added that AI is "too essential, too interactive and too valuable" to be derailed by network glitches. When instant response is the expectation, local compute matters. "AI has to move to the device. Why? Because relying on the cloud to scale isn't sustainable. It's too expensive for developers and too slow for users and too concerning for privacy," said Bergey.

Arm is targeting smartphones and consumer devices with Lumex, but remember Arm has broad data center ambitions. Bergey added that the most advanced models will stay in the cloud, but workloads that can move down to the device will do so.

Indeed, Geraint North, Fellow of AI and Developer Platforms at Arm, said AI costs are going to matter. North said:

"One of the things with developers is that everyone is in a user acquisition phase right now before they're in the 'we've got to make this profitable' phase. Developers are going to say, 'I can't just spend all this on cloud compute resource' and ask how much they can offload? (Offloading compute) will become increasingly important once people are under pressure for profitability, which is not necessarily the case for many of the AI app developers right now."

The evolution will be that small models will run on device and advanced models will stay in the cloud. Enterprises and developers will optimize for performance and costs.

Lumex tech details

Lumex represents Arm's "AI-first" platform strategy, built from the ground up for AI with better performance, tighter integration and a more scalable architecture.

"We're talking about tightly integrated compute, software and tools optimized for the next generation of mobile workloads, and it's built for AI with new architectural features and optimized implementations for the best performance," said James McNiven, who leads Arm's product management team for the company's client business.

The Lumex platform also gives Arm the ability to integrate the company's technology quickly.

Key items:

  • Lumex has SME2 integration throughout. SME2 (Scalable Matrix Extension version 2) is a hardware extension of the Armv9-A architecture that accelerates matrix operations for AI inference, HPC and other intense workloads.
  • Arm's C1 Ultra CPU provides a 25% performance gain and its C1 Pro, which is optimized for efficiency, has a 12% improvement in energy usage.
  • The Mali Ultra GPU delivers a 20% gain in performance with 9% better energy efficiency as well as 20% faster AI inferencing.
  • SME2 integration provides 5x acceleration in AI performance and 3x efficiency for AI experiences.
  • Lumex is optimized for 3nm process technologies and can be manufactured in multiple foundries.


Oracle Q1 misses, but sees OCI revenue surging over next 4 years


Oracle's first quarter earnings and revenue fell short of expectations, but remaining performance obligations growth of 359% overshadowed the results.

The company delivered first quarter earnings of $1.01 a share on revenue of $14.9 billion, up 12% from a year ago. Non-GAAP earnings in the quarter were $1.47 a share.

Wall Street was expecting Oracle to report first quarter non-GAAP earnings of $1.48 a share on revenue of $15.04 billion.

Oracle said cloud revenue overall was $7.2 billion, up 28% from a year ago. Oracle's infrastructure as a service business was $3.3 billion in the first quarter, up 55% from a year ago. Cloud application revenue (SaaS) was $3.8 billion, up 11% from a year ago.

Although the most recent quarter was mixed, Oracle's future demand looks strong. Oracle said remaining performance obligations (RPO) were $455 billion in the first quarter, up 359% from a year ago. In its SEC filing for the quarter, Oracle said:

"Remaining performance obligations were $455.3 billion as of August 31, 2025, of which we expect to recognize approximately 10% as revenues over the next twelve months, 25% over the subsequent month 13 to month 36, 34% over the subsequent month 37 to month 60 and the remainder thereafter."

CEO Safra Catz said Oracle signed "four multi-billion-dollar contracts with three different customers in Q1." She added:

"It was an astonishing quarter—and demand for Oracle Cloud Infrastructure continues to build. Over the next few months, we expect to sign-up several additional multi-billion-dollar customers and RPO is likely to exceed half-a-trillion dollars. The scale of our recent RPO growth enables us to make a large upward revision to the Cloud Infrastructure portion of Oracle's overall financial plan which we will be presenting in detail next month at the Financial Analyst Meeting."

Catz said Oracle is expecting Oracle Cloud Infrastructure revenue to be $18 billion this fiscal year, up 77%, and then increase to $32 billion, $73 billion, $114 billion and $144 billion over the next four years. "Most of the revenue in this 5-year forecast is already booked in our reported RPO," said Catz.
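The year-over-year growth implied by that forecast can be sanity-checked with a quick sketch (my arithmetic on the figures above, not Oracle's disclosure):

```python
# OCI revenue forecast in $B: this fiscal year plus the next four.
forecast = [18, 32, 73, 114, 144]

# Percent growth between consecutive fiscal years, rounded to whole percents.
growth = [round((later / earlier - 1) * 100)
          for earlier, later in zip(forecast, forecast[1:])]
print(growth)
```

Growth of roughly 78% next year cools to about 26% by year five, even as the absolute dollars keep climbing.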

Today, Oracle's cloud revenue run rate is pushing $29 billion. For comparison, AWS has an annual revenue run rate of $124 billion compared to $50 billion for Google Cloud and $75 billion in annual sales for Microsoft Azure.

CTO Larry Ellison said multicloud database revenue from Amazon, Google and Microsoft grew 1,529% in the first quarter compared to a year ago. "We expect multicloud revenue to grow substantially every quarter for several years as we deliver another 37 datacenters to our three hyperscaler partners, for a total of 71," said Ellison.

Ellison said the company will introduce Oracle AI Database at Oracle AI World. The offering will allow customers to use any LLM, including Google Gemini, OpenAI ChatGPT, xAI's Grok and others, on top of Oracle Database.

While Oracle's future demand seems secure, it's worth noting that the company's free cash flow is taking a hit as it spends heavily to expand. In the first quarter, Oracle's operating cash flow of $21.53 billion was surpassed by $27.4 billion in capital expenditures. Oracle's free cash flow has been negative for the last two quarters.

Constellation Research analyst Holger Mueller said:

"Oracle finds itself with the interesting challenge that it needs to invest to deliver on the revenue it has under contract for the future. Capex is up $20 billion year over year with negative free cash flow. As long as the data center build out does not hit any snags, Oracle gets the committed spend. There will be great quarters for Oracle investors to come. The interesting aspect is that none of the Oracle competitors have gone cash flow negative in a quarter. This fiscal year will be huge for Oracle and capacity gone live will be the quarterly KPI."

As for the outlook, Catz said second quarter revenue growth will be between 14% and 16%, with non-GAAP earnings between $1.61 and $1.65 a share. Fiscal 2026 capital expenditures will be about $35 billion.

Catz added that Oracle doesn't own buildings or land, only the equipment. "It's much cheaper than our competitors. We only put that equipment in when it's time, and we're generating revenue right away," she said. "It's asset pretty light. Some of our competitors like to own buildings. That's not really our specialty."

Here's what Ellison said on the conference call:

  • "There's a huge amount of demand for inferencing. All this money we're spending on training is going to have to be translated into products that are sold, which is all inferencing. And the inferencing market, again, is much larger than the training market."
  • "A lot of companies are saying we're big into AI because we're writing agents. Well, guess what? We're writing a bunch of agents too."
  • "AI is going to generate the computer programs called AI agents that will automate your sales and marketing processes. Let me repeat that. AI is going to automatically write the computer program that will then automate your sales processes and your legal processes and everything else." 
  • "We have gotten the entire Oracle Cloud, the whole thing, every feature, every function of the Oracle Cloud, down to something we can put into a handful of racks. We call it butterfly. It costs $6 million. So we can give you the we can give you a private version of the Oracle Cloud with every feature, every security feature, every function, everything we do, for $6 million." 
  • "We're an application company and a cloud infrastructure company, and therefore we build applications, and we'd like to be more efficient. And the way to be more efficient is to build AI application generators. And we have been doing that, and we the latest applications that we are building. We're not building them, they're being generated by AI."

 


Google Cloud: $106 billion in RPO, says Kurian


Google Cloud CEO Thomas Kurian said the company had remaining performance obligations of $106 billion with half that sum converting to revenue in the next two years. If that RPO converts, Google Cloud revenue will be $58 billion by 2027.

Kurian was speaking at the Goldman Sachs Communacopia + Technology conference. He said Google Cloud is capturing customer wins at a faster clip with multiple ways of monetizing services. There's consumption, services and subscriptions, and increasingly value-based models.

"We also monetize some of our products through value-based pricing. For example, some people use our customer service system say, "I want to pay for it by deflection rates that you deliver." Some people use our creative tools to create content, say, "I want to pay based on what conversion I'm seeing in my advertising system," he said.

Kurian added that Google Cloud is also successful upselling customers to consume new models and higher consumption quotas. He said:

"65% of our customers are already using our AI tools in a meaningful way. Those customers that use our AI tools typically end up using more of our products. For example, they use our data platform or our security tools. And on average, those that use our AI products use 1.5x as many products than those that are not yet using our AI tools. And that leads then customers who sign a commitment or a contract to over-attain it, meaning they spend more than they contracted for, which drives more revenue growth."

According to Kurian, Google Cloud is also focused on operating discipline to boost margins. Kurian said Google Cloud is being "super-efficient from the point of view of using our fleet and our machines so that we get capital efficiency."

Going forward, Google Cloud will continue to build out its suite of products, go-to-market team and infrastructure to become more efficient, said Kurian.

"To give you a sense of the scale, if you compare us to other hyperscalers, we are the only hyperscaler that offers our own systems and our own models, and we're not just reselling other people's stuff. The volume of tokens we process, twice other providers in half the time. So roughly 4x the volume. We have a lot of different companies using these AI models from companies creating digital products to using AI within their organization."


Why Microsoft's AI infrastructure deal with Nebius is savvy


AI cloud provider Nebius will provide GPU infrastructure capacity to Microsoft in a deal valued at $17.4 billion with an option to spend up to $19.4 billion.

The deal means that Microsoft will be among the largest customers of both Nebius and CoreWeave, two AI cloud specialists. In 2024, about two-thirds of CoreWeave's revenue came from Microsoft.

In an SEC filing, Nebius said GPU capacity will be provided to Microsoft through its Vineland, NJ data center over five years. Nebius will raise more capital to provide the GPU services to Microsoft as well as use cash flow from the deal. Nebius said it would raise $2 billion in debt and float more shares to fund the Microsoft deal.

Microsoft is certainly building its own data centers, but it is also leasing capacity from so-called neocloud providers. Microsoft's approach to data center capacity is a hybrid of build and lease. Meta, Google and Amazon Web Services are more in build mode. Oracle is building AI data centers at a rapid clip, but it has also delivered services from within data centers run by the big three cloud hyperscale providers.

As long as demand for AI compute capacity is strong, how cloud providers deliver GPU access won't matter. If demand dries up, Microsoft appears to be in a better position to cut capacity without being stuck with physical facilities. Of course, we all know demand for AI compute will never dry up (kidding, sort of).

In any case, Microsoft is managing its data center capacity in a way where it has options and won't be stuck holding the infrastructure bag. Microsoft in the fourth quarter said it has 400 data centers across 70 regions. In April, CEO Satya Nadella explained the company's approach:

“The reality is we've always been making adjustments to build, lease, what pace we build all through the last 10, 15 years, it's just that you all pay a lot more attention to what we do quarter-over-quarter nowadays.

Having said that, the key thing for us is to have our builds and leases be positioned for the workload growth of the future. There's a demand part to it, there is the shape of the workload part to it, and there is a location part to it.

So you don't want to be upside down on having one big data center in one region when you have a global demand footprint. You don't want to be upside down when the shape of demand changes.

I need power in specific places so that we can either lease or build at the pace at which we want. And so that's the sort of plan that we're executing to."

That backdrop highlights why Microsoft's Nebius deal is a smart way to get the compute and power in the right place at the right time.

Key details of the Nebius deal include:

  • Either party can terminate in the event of a material breach that is not remedied within 60 days. Microsoft can also terminate if Nebius misses agreed delivery dates for a GPU service and cannot provide alternative capacity.
  • Nebius has to confirm to Microsoft that it has secured additional financing to fund the infrastructure for the services.
  • Once funding confirmation is given to Microsoft the deal commences.

For Nebius, the Microsoft deal is huge. Arkady Volozh, CEO of Nebius, said the Microsoft deal complements the long-term committed contracts with AI labs and tech giants. "The economics of the deal are attractive in their own right, but, significantly, the deal will also help us to accelerate the growth of our AI cloud business even further in 2026 and beyond," said Volozh.

Indeed, Nebius' growth will accelerate from here. On Aug. 7, Nebius increased its 2025 annual run-rate revenue guidance to $1.1 billion from $900 million. The company delivered second quarter revenue of $105.1 million, up 625% from a year ago, with net income of $584.4 million due to revaluing equity investments and a gain from discontinued operations. Nebius added that it was in the process of securing more than 1 GW of power by the end of 2026.

Constellation Research's take

Holger Mueller, an analyst at Constellation Research, said:

"This is a smart deal for both sides, as Microsoft balances out Capex needs, and Nebius is getting stable demand to build out capacity. But smart deals have potential downsides as well: Microsoft already has the most heterogenous data center landscape already (compared to its three key competitors) and is adding another level of complexity. It also has to find a way to get its [AI] software stack to at least run partially on Nebius. At some point this goes back to the Microsoft Windows DNA - build the software asset, partner with anyone including  GPU clouds (include CoreWeave here). For Nebius it may be the proverbial too big bite to chew off, and have repercussions on its other clients, who certainly will have concerns of being crowded out by a tech giant. It's nothing that both Microsoft and Nebius can't handle - but these are areas to watch - for their customers and partners."

Here's a look at Nebius' footprint.


LLM giants need to build apps, ecosystems to go with the models


The race is on to put enterprise applications around large language models (LLMs) and the stakes couldn't be higher for the likes of OpenAI and Anthropic as well as other foundation model players.

And there's a good reason for the focus on applications to surround LLMs. Pricing for LLMs will tank, and foundation models are becoming commodities in a hurry. Simply put, generic LLMs are good enough for multiple enterprise use cases.

Consider that Microsoft became just the latest vendor to give the US government a sweet deal on software. Microsoft is offering the Feds its suite of productivity, cloud and AI services, including Microsoft 365 Copilot, at no cost for up to 12 months. And since Microsoft essentially resells OpenAI's ChatGPT, the move undercuts its partner and frenemy's $1-a-year deal.


Microsoft can afford to play the long game because the model (often ChatGPT) is just part of the application buffet.

This applications-meet-LLMs reality isn't lost on the LLM providers, which need to prop up heady private market valuations. Here's a look at what the LLM giants are doing, plus a dark horse that seems to be ahead of the application curve.

OpenAI

OpenAI's acquisition of Statsig is a big realization that the company needs to fast track its plans to build applications around ChatGPT. OpenAI has launched Codex, one of its first applications, and now will have Statsig in the fold. Statsig's platform focuses on A/B testing, feature flagging and feedback loops that move products into production.

Statsig CEO Vijaye Raji made it clear that the company will continue to provide its services and invest in core products. Raji becomes OpenAI's CTO of applications and will report to Fidji Simo, CEO of OpenAI's applications unit.

Simo recently penned an introductory missive on OpenAI's application strategy. OpenAI CEO Sam Altman has noted that the company's enterprise business is surging, but the company's scaling plans revolve around consumer applications too. Altman seems to be cribbing a bit of Apple and a bit of Google in terms of business models with plans for consumer-business scale with enterprise extensions.

OpenAI also has its ChatGPT for Business and ChatGPT for Enterprise plans with the beginning of vertical extensions with ChatGPT for Government. The API Platform also drives revenue.

In many respects, OpenAI is starting to follow that enterprise software playbook with targeted efforts aimed at healthcare, financial services and the public sector. The challenge will be honing the sales ground game for industries as it scales ChatGPT Team, Enterprise, Edu and Pro plans.

Anthropic

Anthropic is now valued at $183 billion after its latest funding round. The company is also on a $5 billion annual revenue run rate.

Focus on the enterprise is fueling that growth. Anthropic is closely following the enterprise software playbook and its tight partnership with AWS is embedding its Claude model family in businesses.

For instance, Anthropic recently hired Paul Smith, alum of ServiceNow, Microsoft and Salesforce, as Chief Commercial Officer. Anthropic also launched Claude for Financial Services.

Enterprise software companies typically go horizontal and then drill down into industries. Once you land a big customer in one vertical, others often follow. Smith built out the go-to-market effort at ServiceNow and will aim to do the same for Anthropic.

Anthropic has specialized Claude versions for code, customer support, education, financial services and government. Claude Code is on a $500 million annual revenue run rate.

Like OpenAI, Anthropic has plans for businesses including Claude Max, Claude Team and Claude Enterprise. Anthropic has been taking steps to give Claude a collaboration and future of work spin.

Cohere as the dark horse

Cohere, a Constellation Insights underwriter, doesn't play LLM leapfrog, but has been building out applications around its models, which are enterprise focused.

Cohere North is a collaboration platform worth watching. North recently launched and is focused on enterprise productivity. There's also a version of North for Banking.

Meanwhile, Cohere Compass is an enterprise search and discovery system designed to surface business insights. Cohere's models and associated tools are focused on enterprise pain points such as search quality and multimodal retrieval.

The company is focused on financial services, healthcare and life sciences, manufacturing, utilities and public sector verticals.

Cohere recently raised $500 million and hired a Chief AI Officer and CFO. Cohere said the funding will be used to accelerate agentic AI use cases in businesses and governments primarily through North. The company added that strategic partners such as Oracle, Dell, RBC, Bell, Fujitsu, LG CNS and SAP are using North as a platform.

Bottom line

For LLM players to even think about growing into their valuations, applications and developer ecosystems need to be scaled.

It appears that the foundation model players will crib some business inspiration from enterprise software providers. However, enterprise software giants have the sales ground games, entrenched technologies and corporate data stores. And if LLMs (and smaller, more focused models) are commodities, then the value will be in orchestration and automation, potentially through AI agents.

The future of enterprise software is being rewritten, but the business models underneath are likely to look very familiar.


HubSpot’s strategy: Use AI to deliver work, not software


HubSpot is betting that a series of new services (Data Hub, new Breeze Agents, Breeze Marketplace and Studio, CPQ in Commerce Hub, and Loop, a playbook for inbound marketers), along with a hefty dose of AI and contextual data, will differentiate the company.

At its Inbound 2025 conference, HubSpot outlined the following at a high level. The company rolled out more than 200 updates to its platform designed to build AI and human hybrid teams.

Here's a look.

  • Data Hub brings together data from external sources and combines them with AI tools to connect, clean and act on data.
  • Smart CRM gets updates to bring visualization, conversational and intent enrichment tools and insights.
  • Marketing Hub gets AI-driven segmentation, personalized messages based on CRM data and AI engine optimization blueprints.
  • CPQ in Commerce Hub gets AI-powered quote creation and an agent to close deals.
  • More than 15 Breeze Agents, including Data Agent, Customer Agent, Prospecting Agent and others, designed to leverage context and unified data stores and connect to various models, including Google Cloud Gemini and OpenAI ChatGPT. HubSpot is built on AWS, so it would have access to models in Amazon Bedrock too. HubSpot launched its first Breeze Agents last year. See: HubSpot launches Breeze AI agents, Breeze Intelligence for data enrichment
  • Loop, which is an AI-driven playbook that aims to reinvent the marketing funnel. The playbook, which leverages various HubSpot services, revolves around expressing tastes, tone and point of view, tailoring messaging with AI, amplifying content with AI engine optimization, and evolving and iterating.

At HubSpot's investor day, CEO Yamini Rangan laid out the strategy.

Research: Martin Schneider on HubSpot’s strengths and weaknesses

"We are transforming our platform to be an AI-powered customer platform. We have rich customer context, which is our platform advantage. We are reimagining marketing beyond search with a new playbook, products that support it and an ecosystem behind it. And we are scaling upmarket and down-market to drive durable growth. And we are transforming as a company to be AI first," said Rangan.

Also see: Constellation Research’s Liz Miller posted live from the keynote at HubSpot’s Inbound conference. Martin Schneider highlighted the news in a LinkedIn video.

HubSpot's positioning is worth noting given Salesforce's Dreamforce conference will feature similar verbiage with Agentforce.

Rangan added that the shift for enterprise software vendors is profound. She said:

"Customers today expect us to resolve tickets, write blogs, schedule meetings, just like they would a coworker. So customers are expecting not just software that does the work for them, but actually does help them get more accomplished to grow.

That's a big shift, and that unlocks a huge opportunity for HubSpot. And when we look at this opportunity, we are moving from delivering software to delivering work. We're no longer limited by the software budgets. We are now tapping into the work budgets."

Moving upstream and downstream

Rangan laid out a heady goal for HubSpot--become the No. 1 AI-powered customer platform for scaling companies.

HubSpot made its name by building for SMBs, but is increasingly moving upstream. AI gives HubSpot a shot at larger enterprises.

The company previously built context for customers via structured data: company contacts, tickets and deals. Now AI brings structured data, unstructured conversations and external intent signals to the mix, explained Rangan.

Rangan said HubSpot is gunning for three layers.

  • A context layer that knows your customer.
  • An action layer to do work.
  • And an orchestration layer that connects everything together.

For AI to truly work, Rangan said all three layers have to be interconnected via APIs, Model Context Protocol (MCP) and connectors. "Now you put all of this together, this is our AI-powered customer platform, delivering value for customers," said Rangan.

HubSpot said customers are adopting the AI tools and strategies behind them. HubSpot's ability to provide context to customers will be what's durable.

"Data is what AI needs to do work, not just guess about the work to be done. And HubSpot has 19 years, 270,000 customers worth of those touch points," said Rangan. "Every campaign launched, every e-mail sent, every deal closed, every CPQ transaction across the entire customer journey. That is the data that AI needs in order to do great work. And we also need the user context. So AI knows who is asking and what permissions they can take based on the role."

Rangan said HubSpot's approach is resonating with larger enterprises as a way to consolidate legacy CRM systems and deliver better total cost of ownership.

HubSpot's focus on delivering value and use cases before monetization is also helping. HubSpot sells hubs, seats and credits that are primarily used for AI agents.

Key points about monetization:

  • Persona seats provide access to hubs like Sales Hub and Service Hub. As companies grow they buy more hubs and seats.
  • Core seats are sold for platform access and the ability to create custom objects and workflows. Breeze Assistant and data and contact enrichment are included in core seats.
  • Credits revolve around usage based pricing for AI agent actions and other usage on the platform.

Rangan said the plan for HubSpot is to scale customers upmarket and down-market. "Going upmarket has been a multiyear focus for us. We want to build powerful tools that are super easy to use, and we have had consistent set of innovation, opening the markets for sensitive data, journey orchestration, multi-account management, new global data centers. All of this innovation proves that we scale with businesses," said Rangan.

HubSpot will also rely on integrators and partners.

The down-market strategy is to drive volume, deliver value and grow wallet share with a freemium model designed for small companies. "When they get into a Starter or Pro, we deliver compelling value. We become that customer operating system that customers depend on. And when they grow, we grow," said Rangan.

Few enterprise software vendors have been able to serve upmarket and down-market at the same time. Salesforce CEO Marc Benioff recently noted that the company is also looking for growth from midmarket enterprises as well as giant companies.

HubSpot CFO Kathryn Bueker said the company can cater to multiple enterprise segments to build durable and efficient growth.

Since 2021, HubSpot has delivered a compound annual revenue growth rate of 24%. HubSpot is projecting 2025 revenue of $3.1 billion, up 17% in constant currency.
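The stated 24% compound growth rate checks out if you assume roughly $1.3 billion in 2021 revenue (an outside figure, not from the article) growing to the projected $3.1 billion over four years:

```python
# CAGR over four years from an assumed 2021 base of ~$1.3B
# to the projected 2025 revenue of $3.1B.
start, end, years = 1.3, 3.1, 4
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly in line with the stated 24%
```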

"We take a platform-oriented approach to address our market opportunity. And we believe that our platform is the key driver of our upmarket and down-market momentum as well as our strong customer retention," said Bueker. "New and existing customers are consolidating their go-to-market technology stack on HubSpot."

Starter customers are about half of HubSpot's total customer base.

Leveraging AI internally

HubSpot executives noted multiple ways that the company is leveraging AI for customer support and sales and marketing.

The more interesting point was made by HubSpot CTO Dharmesh Shah, who confirmed the good-enough LLM trend that other enterprises are noting.

Shah was asked about inference cost to provide AI agents to HubSpot customers. Shah said:

"We get very excited about the newer models and deep reasoning and deep research and all these kind of high-end features that are in Opus and GPT-5 Pro. But e-mail is a good example. The baseline capability that we need for a vast majority of go-to-market use cases is handled by something like GPT-3.5 or GPT-4o. So we don't need the frontier model capability for a vast majority of these use cases. And the cost of something like GPT-4o has gone down 250x from the time it was released. That slope still goes down. Yes, the advanced models are getting more expensive, but most of the models we need for most of the work that we do, including e-mail, are not those frontier models."

Bueker noted that HubSpot hasn't seen any material cost of goods sold impact from use cases like personalizing emails. There's a team at HubSpot solely focused on the drivers of AI costs and optimization of the platform.

In addition, HubSpot is using AI to drive productivity, and those savings are being reinvested in R&D.

"For sales and marketing, AI tooling and improved rep productivity upmarket, along with better conversion efficiency at the low end will be key drivers of S&M leverage. We will realize modest additional gains in G&A by leaning into AI and automation. As I've said in the past, we may see a bit more or less leverage in any given year depending on the opportunities we see, but we will stay on track to hit our interim and long-term margin targets," said Bueker.

The looming SaaS vs agentic question

No enterprise software vendor these days can get away from the question about whether agentic AI will eat software.

Benioff has weighed in, saying SaaS won't be eaten by LLMs.

Rangan said SaaS may change, but software isn't going anywhere.

"SaaS effectively was a deployment and business model transformation, not that big of a technology transformation. What endures is software. Software is high margin, high leverage; you can put in investment and solve a bunch of customer problems. We have the largest opportunity as an industry in software than we've ever had before. The business models will change. I think SaaS in its purest form is unlikely to remain the way it is right now. That's why we see the hybrid pricing models.

But I'm just super bullish about the opportunity this creates because now we can solve problems with software that we were never able to do before. Before, we built tools for humans to use; now we can actually do the work. The value that software is going to produce over the coming decades is orders of magnitude higher. The TAMs are just going to be bigger, and we're just starting to see the early innings of that game. But I don't think software is dead. SaaS as a pure business model might transform over time, but software as a way to make money and put capital to work is going to be amazing."


Broadcom delivers a strong Q3, cites custom AI chips, networking, VMware

Broadcom reported better-than-expected third quarter results and cited strong demand for custom AI accelerators, networking and VMware.

The company reported third quarter earnings of $4.14 billion, or 85 cents a share, on revenue of $15.95 billion, up 22% from a year ago. Non-GAAP earnings were $1.69 a share.

Wall Street was expecting Broadcom to report non-GAAP earnings of $1.66 a share on revenue of $15.82 billion.

CEO Hock Tan said third quarter AI revenue was $5.2 billion, up 63% from a year ago. "We expect growth in AI semiconductor revenue to accelerate to $6.2 billion in Q4, delivering eleven consecutive quarters of growth, as our customers continue to strongly invest," said Tan.

Broadcom is a design partner for Google's Tensor Processing Units. Google announced its TPUs in 2016.

In the quarter, semiconductors represented 57% of sales with revenue of $9.17 billion, up 26% from a year ago. Infrastructure software was 43% of sales with revenue of $6.79 billion, up 17% from a year ago.

As for the outlook, Broadcom projected fourth quarter revenue of $17.4 billion with adjusted EBITDA at 67% of projected revenue.

On a conference call with analysts, Tan said:

  • "Demand for custom AI accelerators from our 3 customers continue to grow as each of them journeys at their own pace towards compute self-sufficiency. And progressively, we continue to gain share with these customers."
  • "We have been working with other prospects on their own AI accelerators. Last quarter, one of these prospects released production orders to Broadcom, and we have accordingly characterized them as a qualified customer for XPUs and, in fact, have secured over $10 billion of orders of AI racks based on our XPUs."
  • "We know the biggest challenge to deploying larger clusters of compute for generative AI will be in networking. And for the past 20 years, Broadcom has developed Ethernet networking technology that is entirely applicable to the challenges of scale up, scale out and scale across in generative AI."
  • VMware: "The first phase is convincing people to convert from perpetual to subscription and in so doing purchase VCF (VMware Cloud Foundation). The second phase now is making that purchase of VCF create the value they look for in private cloud, on their premises, in their IT data center. That's what's happening. And that will sustain for quite a while because on top of that, we will start selling advanced services, security, disaster recovery, even AI, running AI workloads on it."

Constellation Research analyst Holger Mueller said:

"Broadcom is on a roll. Not only does the vendor seem unaffected by the competition trying to replace VMware, but it is also growing nicely with its custom AI chips. This quarter more than a third of Broadcom revenue will come from custom AI chips. Notably these chips are desired by the cloud providers, in contrast to their initial posture towards AI giant Nvidia. Remarkably, Hock Tan and team delivered the 20%-plus revenue growth with no increase in selling, general and administrative expenses--something very rare in technology companies. If things go well in Q4, Broadcom will have record revenue and profitability for the fiscal year."


ServiceNow offers US government discounts as high as 70% on ITSM

ServiceNow will give the US government discounts as high as 70% off list prices for upgrades to its Information Technology Service Management (ITSM) Pro and ITSM Pro Plus bundle.

The ServiceNow discounts, announced by the US General Services Administration, land as vendors line up to offer discounts to the US government for applications and AI services. Microsoft became just the latest vendor to give the US government a sweet deal on software. Microsoft is offering the Feds its suite of productivity, cloud and AI services including Microsoft 365 Copilot for no cost for up to 12 months.

Meanwhile, Google Cloud offered the US government discounts on Gemini, and before that OpenAI and Anthropic lined up discounts of their own.

In 2025, the GSA announced agreements with AWS, DocuSign, Oracle, Elastic, Salesforce, Adobe, Google and Microsoft.

The ServiceNow discounts with the GSA also land a day after Salesforce touted government traction with Agentforce and said it would be going after the ITSM market. Key points about the GSA-ServiceNow deal include:

  • ITSM Pro and Pro Plus will be available at a 70% discount off the unrestricted user list price through September 2028. The bundle is designed for faster adoption of AI features.
  • The ITSM Pro upgrade as a standalone option is available at a 40% discount off the Pro unrestricted user list price through September 2026.


Quantinuum valued at $10 billion after $600 million venture round

Quantinuum raised $600 million in venture funding led by Nvidia's venture capital arm, Quanta Computer and QED Investors, at a valuation of $10 billion.

The quantum computing company said that existing shareholders JPMorganChase, Mitsui, Amgen, Cambridge Quantum Holdings, Serendipity Capital and Honeywell also invested in the latest venture round.

Quantinuum has been busy building out its quantum software stack and launching Helios, its next-generation system. The company is aiming to be among the first to deliver universal fault-tolerant quantum systems.

Along with the funding, Quantinuum said it will work with Nvidia in its Accelerated Quantum Research Center.

Quantinuum has doubled its valuation since January 2024.

"Nvidia has quickly turned from a skeptical observer to an investor in quantum. With Quantinuum it covers the laser gate technology approach to quantum as it keeps tabs on the different technology approaches," said Constellation Research analyst Holger Mueller. "One can expect Nvidia to do the same for the other platforms, and the future will tell if Nvidia was able to pick the winners."


Atlassian buys The Browser Co. for $610 million

Atlassian said it will acquire The Browser Company of New York, the company behind the Dia and Arc browsers, in a $610 million bet that enterprises need secure browsers designed for SaaS and AI applications.

The purchase comes as browsers are seen as critical software for everything from knowledge work to agentic AI.

Atlassian said that browsers are typically built for consumers, not for workers who need to get work done across sometimes dozens of tabs. Atlassian is betting that the browser can enable new collaborative workflows.

According to Atlassian CEO Mike Cannon-Brookes, the plan is to make The Browser Company's Dia the AI browser for work optimized for SaaS with security built in and personal work memory.

In a statement, Cannon-Brookes said:

"By combining The Browser Company’s passion for building beloved browsers with our two decades of understanding how knowledge workers operate, we see a huge opportunity to transform the way work gets done. Together, we'll create an AI-powered browser optimized for the many SaaS applications living in tabs."

As for distribution, Atlassian can bring Dia to its more than 300,000 customers and more than 2.3 million active users of AI on its platform.
