The European Commission has formally opened an investigation into SAP's maintenance and support practices for on-premises deployments in Europe.
In a statement, the EC said it has started an investigation into whether SAP "may have distorted competition in the aftermarket for maintenance and support services" for its on-premises ERP applications.
The EC's concerns center on four of SAP's practices:
SAP requiring customers to buy maintenance and support for all of their on-premises ERP software from SAP under the same pricing, preventing enterprises from mixing and matching services from other suppliers.
SAP preventing customers from terminating maintenance and support for unused software licenses.
SAP systematically extending the duration of the initial term of on-premises ERP licenses so customers can't terminate maintenance and support.
SAP charging back-maintenance fees to customers that resubscribe to SAP maintenance and support after a period of absence.
"These proceedings address some areas of our on-premise maintenance and support policies, which are based on long-standing standards that are common across the global software sector. SAP believes that its policies and actions are fully in line with competition rules. However, we take the issues raised seriously and we are working closely with the EU Commission to resolve them.
We do not anticipate the engagement with the European Commission to result in material impacts on our financial performance."
Hitachi Vantara is embedding AI agents throughout its storage systems, seeing customers embrace more hybrid cloud and on-premises AI architectures, and betting on AI at the edge and sovereign AI as growth markets.
Those are some of the takeaways from Octavian Tanase, Chief Product Officer at Hitachi Vantara.
Constellation Insights caught up with Tanase at Hitachi Vantara's Analyst Live 2025 event in Arlington, VA. At the analyst meeting, Hitachi Vantara highlighted its strategy, customer enterprise AI and industrial AI use cases, hybrid cloud and AI efforts, collaboration with partners including Cisco and Nvidia, and how its storage and data platform leverages the One Hitachi strategy. Here's a look at the takeaways from the conversation with Tanase.
Enterprise AI adoption. "We see a lot of demand from enterprises looking to get insights out of that data, and use AI to improve productivity," he said. "There's a rush right now to build autonomous modules that will take a business workflow and solve and anticipate problems in an enterprise. Customers are looking to bring in a large language model and train and fine tune with enterprise data before they deploy for inference."
Integrated software with hardware. "We've always seen ourselves as a systems company that builds both software and systems for storage and data management. Storage has been commoditized, and there is more value that can be delivered in software to use not only the data for an application, but to run analytics on that data to get more insights from the data. This is an area of investment for us," said Tanase.
Hitachi Vantara's agentic AI strategy. Tanase said Hitachi Vantara is using AI agents internally for productivity and within the products leveraging it for more autonomy. "We are in the business of providing infrastructure for AI data pipelines that include storage, compute, networking, security and so forth. We are embedding capabilities around the data reduction or data tiering or data classification. These are all areas where one could create an agent and transform a task that required the control and the input of a person into something autonomous," said Tanase.
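Tanase's data tiering example maps neatly to a control loop. Below is a minimal, hypothetical Python sketch of the idea, turning a decision an administrator used to make by hand into an autonomous pass. The thresholds, object model and tier names are invented for illustration; this is not Hitachi Vantara's implementation.

```python
import time
from dataclasses import dataclass

# Hypothetical thresholds -- a real system would learn or configure these.
HOT_MAX_AGE_DAYS = 7     # accessed within a week stays on fast flash
WARM_MAX_AGE_DAYS = 90   # accessed within 90 days sits on capacity storage

@dataclass
class StorageObject:
    key: str
    last_access: float  # epoch seconds
    tier: str           # "hot" | "warm" | "cold"

def desired_tier(obj: StorageObject, now: float) -> str:
    """Classify an object by access recency -- the judgment call an
    administrator used to make by hand."""
    age_days = (now - obj.last_access) / 86400
    if age_days <= HOT_MAX_AGE_DAYS:
        return "hot"
    if age_days <= WARM_MAX_AGE_DAYS:
        return "warm"
    return "cold"

def tiering_pass(objects: list[StorageObject]) -> list[tuple[str, str, str]]:
    """One autonomous pass: build a (key, from_tier, to_tier) migration plan."""
    now = time.time()
    return [(o.key, o.tier, desired_tier(o, now))
            for o in objects if desired_tier(o, now) != o.tier]

day = 86400
now = time.time()
objs = [
    StorageObject("logs/2024-q1.parquet", now - 200 * day, "hot"),
    StorageObject("models/embeddings.bin", now - 2 * day, "warm"),
]
for key, src, dst in tiering_pass(objs):
    print(f"migrate {key}: {src} -> {dst}")
```

An agent wraps this loop with monitoring and rollback so the migrations run without a person approving each one, which is the shift from assisted to autonomous that Tanase describes.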
Customer use cases. "We see a lot of use cases around analytics. A lot of times people will make two or three copies of data. Enterprises are looking to run analytics on data and use AI to do that and coordinate large data sets from heterogeneous data sources," said Tanase.
On-premises and hybrid AI evolution. Tanase said most enterprises started with AI in the cloud because GPUs-as-a-service doesn't require a massive initial investment. What's happening now is that enterprises are looking to put AI infrastructure closer to the data. "If the data source is in the traditional data center or being brought from an edge device, enterprises are building AI infrastructure where the data is," said Tanase. "It's too expensive to correlate multiple data silos and move that to the cloud. Customers are sometimes better off building a data lake into their traditional enterprise and then deploying training and inference closer to the data."
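The economics Tanase describes are a data-gravity calculation: as datasets grow, moving data to cloud GPUs costs more than bringing GPUs to the data. Here's a back-of-the-envelope Python sketch of that crossover; every price in it is an invented assumption, not a quote from any provider.

```python
# Back-of-the-envelope data-gravity math behind "build AI where the data is".
# Every price below is an invented assumption, not a quote from any provider.

def cloud_path_cost(dataset_tb: float, monthly_egress_fraction: float,
                    egress_per_gb: float = 0.09,        # assumed $/GB moved out
                    cloud_gpu_month: float = 25_000.0): # assumed GPU rental $/mo
    """Monthly cost of shipping a slice of the data to cloud GPUs."""
    egress = dataset_tb * 1024 * monthly_egress_fraction * egress_per_gb
    return egress + cloud_gpu_month

def onprem_path_cost(gpu_capex_month: float = 30_000.0,  # amortized hardware $/mo
                     ops_month: float = 5_000.0):        # power, cooling, staff
    """Monthly cost of running GPUs next to the data."""
    return gpu_capex_month + ops_month

for tb in (50, 500, 5_000):
    cloud = cloud_path_cost(tb, monthly_egress_fraction=0.2)
    local = onprem_path_cost()
    winner = "on-prem" if local < cloud else "cloud"
    print(f"{tb:>5} TB: cloud ${cloud:,.0f}/mo vs on-prem ${local:,.0f}/mo -> {winner}")
```

Under these made-up numbers the cloud wins for small datasets and flips to on-premises as data volume grows, which is the crossover Tanase is pointing at.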
Edge computing's importance. "I am a firm believer that there is more data being created at the edge and in the cloud than the data center," said Tanase. "AI will give the power to analyze the data as it's created and perhaps enable customers to become more discerning about their data and understand what they need to keep and protect. I'm hoping many of these capabilities in the future are autonomous."
The product roadmap. "Going forward, AI is fundamentally changing everything. The market is moving fast and the standardization of MCP (model context protocol) and other protocols are enabling AI modules to talk to each other," said Tanase. "In order to be relevant in this market, you have to act with agility in a way many companies have not experienced before. Time to market, constant innovation and the reality that no one vendor can do it all are critical."
Tanase added that customers will see integrated systems from Hitachi Vantara and a wide range of partners, including Nvidia, Cisco, Supermicro, Hammerspace and Commvault.
Sovereign AI. Tanase said sovereign AI infrastructure is becoming a big market. "AI has become a matter of national security for many countries or provinces, and there is a lot of need to integrate and build sovereign AI," said Tanase. "We live in a very polarized world, and I can see states, governments, and provinces building sovereign AI infrastructure. It's a part of what everybody does in order to compete in the 21st century."
Final word. Tanase said Hitachi Vantara has earned the right to play in the AI space and is being used for critical business applications leveraging structured and unstructured data via VSP 360 and a wide range of systems. "We know our customers. We want to save them time. We invest a lot in tools to enable automation, and we believe that's critical, because people want repeatable results," he said. "Automation is top of mind. We're a leader in infrastructure sustainability, and sustainable products save customers money in terms of floor space in the data center, cooling power and overall cost of ownership."
SAP said it has launched its sovereign cloud offerings on AWS European Sovereign Cloud and inked a deal that brings OpenAI to Germany's public sector customers.
SAP's sovereign cloud partnerships in Europe span multiple clouds. With AWS, SAP Sovereign Cloud apps with security and regulatory compliance will run on AWS European Sovereign Cloud, a new independent cloud. AWS has said it will invest €7.8 billion in the EU.
As for the OpenAI deal, SAP said OpenAI for Germany will be available on SAP's Delos Cloud, which runs on Microsoft Azure. OpenAI and SAP will combine large language models with SAP apps for the public sector.
Microsoft said it will add Anthropic's Claude Sonnet 4 and Claude Opus 4.1 to Microsoft Copilot Studio and Researcher in a move that highlights how the company is diversifying from OpenAI.
Until recently, Anthropic was more of an AWS and Google Cloud play, with Microsoft serving as the venue for OpenAI models.
In a blog post, Microsoft said Anthropic models will be available in Microsoft 365 Copilot via the Researcher agent, which can now be powered by OpenAI or Anthropic models. In addition, Claude Sonnet 4 and Claude Opus 4.1 are available in Copilot Studio.
With the Copilot Studio addition, enterprises will be able to create AI agents using Anthropic models.
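The underlying pattern is model diversification: the same agent logic routed to more than one provider. Here's a minimal Python sketch of that pattern using the public openai and anthropic SDKs. It's a generic illustration of the approach, not Copilot Studio's actual mechanism, and the model IDs were current at the time of writing and may change.

```python
# A sketch of the pattern Microsoft's move enables: one agent, multiple model
# providers. Uses the public openai and anthropic Python SDKs; this is a
# generic illustration, not Copilot Studio's actual mechanism.
import os
import anthropic
from openai import OpenAI

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
anthropic_client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def ask(prompt: str, provider: str = "anthropic") -> str:
    """Send the same prompt to either provider."""
    if provider == "anthropic":
        msg = anthropic_client.messages.create(
            model="claude-sonnet-4-20250514",  # model ID may change
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text
    resp = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("Summarize our Q3 pipeline risks in three bullets.", provider="anthropic"))
```

The enterprise upside is the same one Microsoft is chasing: swap or mix providers per task without rewriting the agent around any single vendor.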
OpenAI and Microsoft have been putting a little distance between themselves. For instance, OpenAI launched open-weight models that can be used beyond Microsoft Azure. The two companies have also come to an understanding about Microsoft's equity stake and OpenAI's structure.
And OpenAI is also building its own AI infrastructure via a partnership with Oracle.
Qualcomm laid out a vision where AI workloads are hybrid between the cloud and edge devices such as smart glasses, smartphones and wearable devices. The conduit for these AI workloads will ultimately be 6G.
At the Snapdragon Summit, Qualcomm CEO Cristiano Amon said AI will remake every device you wear, complete with AI agents that proactively work with you in real time.
For Amon, the Qualcomm vision was partially about talking up Snapdragon and outlining its role going forward. In the big picture, Qualcomm's take on edge devices handling a big chunk of the AI workload is notable. Why? The current AI thinking revolves around brute force compute, billions if not trillions of dollars spent on AI data centers and cloud delivery.
"We envision AI to be both cloud and edge. The edge complements the cloud. It’s immediate. It’s personal. It has context. And there’s one important thing about the edge: that’s where the data originates and then where AI becomes yours," said Amon.
Amon argued there will be a new compute architecture for AI that spans cloud and edge. He also noted that foundation models are already designed to run in both the cloud and at the edge and will ultimately be the UI.
In addition, the data collected at the edge will be more critical than what was used to train models. This edge AI data will be what personalizes the experience, acts on your behalf and has context.
Amon was talking about personal computing experiences, but it isn't much of a leap to understand how sensors at the edge are going to impact enterprise use cases too. "The importance of edge data is massive. It’s the best kept secret," said Amon.
The conduit for these cloud to edge hybrid AI workloads will be the network in between. Amon touted 6G networks and pre-commercial devices ready as early as 2028.
"6G is designed to be the connection between the cloud and the edge devices. The difference between 5G and 6G is the network of intelligence connecting the edge and the cloud, merging the physical and the digital, providing connected experiences,” said Amon.
A big part of the AI at the edge strategy revolves around Snapdragon and the chips on devices.
Snapdragon 8 Elite Gen 5 will enable fast multitasking and app switching as well as extended gameplay.
The platform enables AI agents to work across apps via continuous on-device learning, real-time sensing and multimodal models.
Snapdragon 8 Elite Gen 5 has performance gains across the board: a 20% boost for the Oryon CPU, a 23% gaming boost for the Qualcomm Adreno GPU and 37% faster performance for the Qualcomm Hexagon NPU.
Snapdragon X2 Elite Extreme and Snapdragon X2 Elite
Qualcomm rolled out new processors for Windows 11 PCs that can deliver 80 TOPS of AI processing. The lineup also includes the first Arm processor to run at 5 GHz.
Snapdragon X2 Elite Extreme is designed for premium PCs and is aimed at agentic AI, data analytics and professional media editing.
Devices powered by the Snapdragon X2 Elite family will be available in the first half of 2026.
The launch of Qualcomm’s next-gen processors indicates a move upmarket to target creative pros and data engineers.
OpenAI, Oracle and SoftBank have announced five new data center sites under the Stargate project, bringing the buildout to nearly 7 gigawatts of planned capacity. The data centers will be largely powered by Nvidia-based infrastructure.
The companies said the flagship site in Abilene, Texas, is operational and can deliver 5.5 gigawatts of capacity built on Oracle Cloud Infrastructure (OCI) and Nvidia's stack. Combined with CoreWeave projects, Stargate now has nearly 7 gigawatts of planned capacity and more than $400 billion in investment over the next three years.
According to OpenAI, Stargate, announced in January, is well on its way toward securing the full $500 billion, 10-gigawatt commitment by the end of 2025.
In a nutshell, covering AI these days is more like covering a sports story, a swirl of deals and trades. OpenAI pledges $300 billion to buy AI infrastructure from Oracle. Oracle buys GPUs. Nvidia backstops CoreWeave with purchase guarantees if it has extra capacity. OpenAI also appears to be a big future buyer of Broadcom XPUs (which probably led to the Nvidia deal). Meanwhile, Amazon, Google and Microsoft all have to buy Nvidia while building their own custom chips for AI workloads.
The Stargate project is the big reason why Oracle's remaining performance obligation surged in its latest quarter. OpenAI and Oracle partnered on up to 4.5 gigawatts of additional Stargate capacity.
Over the next five years, Oracle and OpenAI will develop three sites in Shackelford County, Texas; Doña Ana County, New Mexico; and a site in the Midwest.
In addition to the flagship Stargate site in Abilene, two separate sites will be developed over the next 18 months. Stargate sites in Lordstown, Ohio, and Milam County, Texas, can scale to 1.5 gigawatts of capacity. "The AI race is on and the unit of measure is gigawatts going live. At launch Stargate was questioned as a viable partnership, but it's clear now that they are at the forefront of getting AI capacity online," said Constellation Research analyst Holger Mueller.
According to a report by Bain & Co., $2 trillion in annual global revenue will be needed to fund the computing power required to meet anticipated AI demand by 2030. Bain estimated that even with anticipated AI savings, the world is $800 billion short of keeping pace with demand.
In an earlier blog post, OpenAI CEO Sam Altman said the time to build is now. He said:
"Our vision is simple: we want to create a factory that can produce a gigawatt of new AI infrastructure every week. The execution of this will be extremely difficult; it will take us years to get to this milestone and it will require innovation at every level of the stack, from chips to power to building to robotics. But we have been hard at work on this and believe it is possible."
Enterprise software vendors' consumption-based pricing is creating uncertainty and concern among CxOs trying to get a grip on budgets.
In the September BT150 CxO meetup, IT decision-makers said consumption-based models from software providers have come with price increases that impacted adoption. Snowflake and Domo were cited as vendors that recently raised prices. In a February meeting, CxOs were more constructive about consumption-based models.
Here's a recap of our CxO call, which operates under the Chatham House Rule.
One CxO noted that the biggest concern with consumption pricing, often couched in terms like flex credits, was "the lack of predictability and impact on adoption." CxOs agreed that when they can't accurately forecast technology costs, there's a reluctance to adopt tools like AI agents.
BT150 members also noted that the unpredictable nature of consumption-based pricing was straining vendor relationships. As one member put it: "There needs to be a balance between value-based pricing and predictability for long-term partnerships." Vendors want to capture value from usage, but customers also require cost certainty.
According to CxOs in the BT150, the pricing challenges are leading to a few strategies:
Be more cautious about platform selection and usage patterns. CxOs noted that they have no desire to leverage multiple platforms.
Seek alternatives with more predictable cost structures.
Steer vendor discussions toward more predictable pricing for the two or three areas that genuinely drive value for the enterprise. Once usage in these high-value areas settles, costs should become more predictable.
The pain with consumption-based pricing is more acute for mid-tier customers that are used to fixed-cost models. Larger enterprises tend to be more familiar with consumption pricing and have more visibility into what the company will consume.
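To make the forecasting problem concrete, here's a toy Python simulation of a year of consumption-based spend versus a flat license. The credit price, usage distribution and growth rate are invented assumptions; the point is how wide the spread between a median and a worst-case budget can get.

```python
# Toy Monte Carlo: a year of consumption-based spend vs. a flat license.
# Credit price, usage distribution and growth rate are invented assumptions.
import random
import statistics

CREDIT_PRICE = 3.00           # assumed $ per credit
FIXED_ALTERNATIVE = 40_000.0  # assumed flat $/month license for comparison

def simulate_annual_cost(base_credits=10_000, monthly_growth=0.08,
                         volatility=0.35, trials=10_000):
    """Return sorted annual totals across simulated years."""
    totals = []
    for _ in range(trials):
        usage, total = float(base_credits), 0.0
        for _ in range(12):
            # Adoption grows steadily, but any given month swings widely.
            month_credits = usage * random.lognormvariate(0, volatility)
            total += month_credits * CREDIT_PRICE
            usage *= 1 + monthly_growth
        totals.append(total)
    return sorted(totals)

totals = simulate_annual_cost()
p50 = statistics.median(totals)
p95 = totals[int(0.95 * len(totals))]
print(f"consumption: p50 ${p50:,.0f}, p95 ${p95:,.0f} (budget gap ${p95 - p50:,.0f})")
print(f"fixed:       ${FIXED_ALTERNATIVE * 12:,.0f} flat, no variance")
```

The gap between the median and the 95th-percentile outcome is exactly the "lack of predictability" CxOs flagged, and it widens as usage volatility grows.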
Snowflake, Salesforce, BlackRock and other enterprise players have launched the Open Semantic Interchange initiative, which aims to standardize semantic metadata and make it interoperable across data, analytics and AI tools.
The move comes as standards have been flying in 2025 as vendors create frameworks for agentic AI. Model Context Protocol (MCP) from Anthropic is the best known, but Google Cloud launched Agent2Agent (A2A) and last week outlined AP2, a standard for agentic commerce.
While AI obviously needs protocols to connect AI agents, fragmented data definitions are also a big problem for interoperability. Snowflake said Open Semantic Interchange (OSI) is an open-source initiative that aims to create a vendor-neutral specification for how semantic metadata is defined and shared.
Christian Kleinerman, executive vice president of product at Snowflake, said OSI is looking "to solve a foundational challenge for AI — the lack of a common semantic standard."
OSI is looking to address semantic data across businesses, domains and industries as a framework for making metadata interoperable.
The goals of OSI include enhancing interoperability across multiple AI, business intelligence and analytics tools, accelerating adoption of AI apps and streamlining operations.
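The OSI specification hadn't been published in detail at the time of writing, but the core idea, one shared definition instead of a redefinition per tool, can be sketched. The field names in this Python illustration are hypothetical, meant only to show what a vendor-neutral semantic definition buys you.

```python
# Hypothetical illustration of a vendor-neutral semantic definition. The
# actual OSI spec was not published in detail at the time of writing; these
# field names are invented to show the concept, not the standard itself.
from dataclasses import dataclass, asdict
import json

@dataclass
class Metric:
    name: str
    description: str
    expression: str       # the business logic every tool should agree on
    time_dimension: str

net_revenue = Metric(
    name="net_revenue",
    description="Gross revenue minus refunds and discounts",
    expression="SUM(gross_amount) - SUM(refunds) - SUM(discounts)",
    time_dimension="order_date",
)

# One shared artifact instead of one redefinition per tool: the BI dashboard,
# the notebook and the AI agent all load the same JSON document.
shared = json.dumps(asdict(net_revenue), indent=2)
loaded = Metric(**json.loads(shared))
assert loaded.expression == net_revenue.expression  # same semantics everywhere
print(shared)
```

Without a shared artifact like this, "net revenue" can mean something slightly different in every tool, which is the foundational gap Kleinerman describes.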
Initial partners include Alation, Atlan, BlackRock, Blue Yonder, Cube, dbt Labs, Elementum AI, Hex, Honeydew, Mistral AI, Omni, RelationalAI, Salesforce, Select Star, Sigma, Snowflake and ThoughtSpot.
Nvidia and OpenAI said they have struck a partnership where OpenAI will deploy at least 10 gigawatts of AI data centers built on Nvidia's Vera Rubin GPUs. Nvidia will invest up to $100 billion in OpenAI as each gigawatt is deployed.
In other words, Nvidia is investing $10 billion per gigawatt deployed. OpenAI will also be the first public reference for Nvidia's Vera Rubin platform when the first phase of the deal comes online in the second half of 2026.
In a press release, the two companies said they have inked a letter of intent for the partnership. OpenAI, which has noted it is chasing superintelligence, will use Nvidia to power the training of its next-gen ChatGPT models.
Nvidia CEO Jensen Huang said the two companies have "pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT."
OpenAI CEO Sam Altman said that "everything starts with compute."
Key points of the partnership include:
OpenAI will use Nvidia as its preferred compute and networking partner for AI factories.
OpenAI and Nvidia will optimize roadmaps for models, infrastructure and software.
The partnership will include a bevy of collaborators.
The companies said they will finalize details of the partnership in the weeks ahead.
Constellation Research analyst Holger Mueller said:
"Good to see OpenAI will build its own data centers, and good to see plans for Nvidia's next platform, Vera Rubin. Someone please add up all the purchase commitments OpenAI has done for Oracle, Microsoft and now own data centers and see if Sam Altman might have to sell his shirt in 2026."
Thoma Bravo said it will acquire PROS Holdings, the pricing and revenue management software company, and take it private. Under the terms of the deal, Thoma Bravo will pay $23.25 per share in cash, nearly a 42% premium over PROS' closing price on Sept. 19.
PROS said it will continue to focus on advancing its agentic AI product plans with the capital and expertise of Thoma Bravo. PROS CEO Jeff Cotten said that the company will "be more agile and have greater flexibility to invest in innovation and expand our platform" as a private company.
When PROS reported its second quarter earnings in July, it projected 2025 revenue of $360 million to $362 million with free cash flow of $40 million to $44 million.
Depending on the specific product, PROS competes with Conga, Oracle CPQ, Zilliant, Salesforce Revenue Cloud and others. Conga happens to be a Thoma Bravo portfolio company.