Results

GenAI projects may be sucked up into the transformation, digital core vortex

Generative AI projects are gaining steam in the enterprise, but there's a big hurry-up-and-wait vibe to them. Why? Enterprises operate on a continuum, and most don't have their ERP, cloud and data transformations complete.

This genAI project progression was outlined by Accenture CEO Julie Sweet on the company's third-quarter earnings call. Accenture is seeing genAI momentum, with $2 billion in generative AI bookings over the last nine months, but customers without a strong "digital core" are still on the tarmac.

Sweet said:

"It is important to remember that while there is a near universal recognition now of the importance of AI, which is at the heart of reinvention, the ability to use genAI at scale varies widely with clients on a continuum.

With those which have strong digital cores genuinely seeking to move more quickly, while most clients are coming to the realization of the investments needed to truly implement AI across the enterprise, starting with a strong digital core from migrating applications and data to the cloud, building a new cognitive layer, implementing modern ERP and applications across the enterprise to a strong security layer."

That take isn't news, but it does make me wonder whether genAI projects will really hit production and scale for a few years. After all, SAP trumpeted that Bain moved completely to S/4HANA Public Cloud in a project that took four years.

Sweet continued:

"Nearly all clients are finding it difficult to scale genAI projects because the AI technology is a small part of what is needed. To reinvent using technology, data, and AI, you must also change your processes and ways of working, reskill and upskill your people, and build new capabilities around responsible AI, all with a deep understanding of industry, function, and technology to unlock the value. And many clients need to first find more efficiencies to enable scaled investment in their digital cores and all these capabilities, particularly in data foundations.

In short, genAI is acting as a catalyst for companies to more aggressively go after cost, build the digital core, and truly change the ways they work."

Sweet cited a bevy of customers including Macy's, which is migrating mainframes to the cloud; Central Bank of the United Arab Emirates, which is modernizing its enterprise data management; and Independence Health Group, which is moving to a digital first platform.

Now companies that have already made those transformational moves are set up for genAI. "Once clients have a strong foundation, they can explore new opportunities to drive growth and efficiencies with genAI," said Sweet.

Here's the catch: There are a lot more enterprises that need to do the hard work before genAI projects can scale than ones with strong digital cores.

Overall, these prerequisites for genAI impact mean more work for Accenture, but they also explain why enterprise software vendors are seeing little uptick from genAI.


Anthropic launches Claude 3.5 Sonnet, Artifacts as a way to collaborate

Anthropic launched Claude 3.5 Sonnet, its latest large language model (LLM), with availability on Anthropic API, Amazon Bedrock and Google Cloud Vertex AI.

According to Anthropic, Claude 3.5 Sonnet outperforms OpenAI's GPT-4o on multiple metrics with improved price/performance ratios.

Claude 3.5 Sonnet will cost $3 per million input tokens and $15 per million output tokens with a 200K token context window. Anthropic said it will also be updating Claude Opus. Here's a look at the benchmarks.
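That pricing is easy to reason about per request. Here's a minimal cost estimator using the published rates; the token counts in the example are illustrative, not from the article:

```python
# Rough per-request cost estimator for Claude 3.5 Sonnet's published
# pricing: $3 per million input tokens, $15 per million output tokens.
PRICE_PER_M_INPUT = 3.00
PRICE_PER_M_OUTPUT = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single API call at these rates."""
    return (input_tokens / 1_000_000 * PRICE_PER_M_INPUT
            + output_tokens / 1_000_000 * PRICE_PER_M_OUTPUT)

# Example: a 20K-token prompt (well inside the 200K context window)
# with a 1K-token answer.
cost = request_cost(20_000, 1_000)
print(f"${cost:.3f}")  # → $0.075
```

Note how output tokens dominate the bill at 5x the input rate, which is why long-context summarization (big input, small output) is cheap relative to long generations.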

In addition, Anthropic launched Artifacts on Claude.ai. Artifacts is a feature that creates a workspace on the side so a user can collaborate more with the model. Anthropic said:

"This preview feature marks Claude’s evolution from a conversational AI to a collaborative work environment. It’s just the beginning of a broader vision for Claude.ai, which will soon expand to support team collaboration. In the near future, teams—and eventually entire organizations—will be able to securely centralize their knowledge, documents, and ongoing work in one shared space, with Claude serving as an on-demand teammate."

Anthropic's Artifacts preview is a spin on a future of work where AI-based teammates work side-by-side with humans.


Target launches Store Companion, genAI app for employees

Target said it will launch Store Companion, a generative AI chatbot designed to help employees boost customer experiences, across its 2,000 stores by August.

The retailer didn't reveal the vendors involved with Store Companion other than to say it designed the chatbot itself. Target was a reference customer at Google Cloud Next. Store Companion was trained on frequently asked questions and process documents from store teams.

Store Companion is built to answer questions workers get on the job, offer coaching and support management. Target said it will continue to test and launch more genAI applications throughout 2024.
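The pattern here, matching an employee's question against a curated set of process documents, can be sketched in a few lines. The FAQ entries and matching logic below are hypothetical; Target hasn't disclosed how Store Companion is built:

```python
# Minimal sketch of an FAQ-lookup assistant in the style of Store
# Companion. A production system would use an LLM with retrieval;
# this uses simple fuzzy string matching to show the shape of it.
from difflib import get_close_matches

FAQ = {  # hypothetical store process documents
    "how do i restart a cash register": "Hold the power key for 5 seconds, then...",
    "how do i process a credit card application": "Open the application form and...",
    "what is the procedure in a power outage": "Secure the registers, then...",
}

def answer(question: str) -> str:
    """Return the closest FAQ answer, or escalate if nothing matches."""
    match = get_close_matches(question.lower(), FAQ.keys(), n=1, cutoff=0.4)
    return FAQ[match[0]] if match else "Escalate to a team lead."

print(answer("How do I restart a cash register?"))
```

The escalation fallback matters for seasonal workers: an assistant that admits it doesn't know is safer than one that guesses at store procedure.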

In a statement, Target CIO Brett Craig said "genAI is helping us accelerate the rate of innovation across our operations."

Store Companion will be available as an app on employees' handheld devices to answer questions about processes and procedures. Employees can ask about anything from credit card applications to restarting cash registers to procedures during a power outage. Target added that Store Companion will also enable seasonal workers to ramp up quickly.

Other details about Store Companion include:

  • Target is piloting Store Companion in about 400 stores.
  • The rollout took six months.
  • The model was tweaked based on employee feedback and experiences.

Target said on its first quarter earnings call that it was using generative AI to drive digital experiences and enable personalization.

Speaking on the earnings call, Christina Hennington, Chief Growth Officer at Target, outlined the genAI efforts, which include guided search, product display enhancements and personalization. She said:

"Our team's work to remodel our digital platforms is paying multiple dividends. Newly developed generative AI and personalization capabilities are expanding scope and reach of what we can offer our guests in terms of product recommendations, search results and more.

We recently engaged in a pilot with one of our biggest vendors to test our latest personalization capabilities with guests shopping our personal care categories. We're very encouraged by early test results, which showed a nearly three times lift in conversion rates from personalized promotions versus mass offers, including higher sales lift across the rest of the category as well.

In addition to driving more personalization, we're also focused on growing relevance, particularly where there may be opportunities in our current online assortment."


Safe Superintelligence Inc. launches: Here's what it means

Three well-known generative AI pioneers have formed Safe Superintelligence Inc., a startup that will focus on safe superintelligence (SSI).

In a post, former OpenAI leaders Ilya Sutskever and Daniel Levy, along with Daniel Gross, a former Y Combinator partner, announced the company's mission. Sutskever was OpenAI's chief scientist and Levy was an OpenAI engineer.

Here's the Safe Superintelligence Inc. mission in a nutshell. The three founders wrote:

"SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Constellation Research analyst Chirag Mehta broke down what the Safe Superintelligence launch means and the open questions.

  • What does safe superintelligence mean exactly? "We at least know what AGI means, but no one can meaningfully describe what 'Safe Superintelligence' actually means," said Mehta.
  • The company may actually distract from safe AI. "This launch might likely have the opposite effect--a distraction from focusing on making AI systems safe today before we cross the AGI or superintelligence Rubicon," said Mehta.
  • Researchers unite! "This effort will likely attract many researchers and technologists who have been passionate about advancing the domain but are frustrated with limitations and changing strategies of current AI companies," said Mehta.
  • Future direction of Safe Superintelligence. "It is unclear in which direction the company goes. I will be surprised if they themselves are clear about their milestones," said Mehta. "It would be worth watching who they hire, who they raise money from, and who they might work with as their design partners. That would reveal more details beyond a lofty mission statement."
  • Lines are drawn. "This will likely drive a deeper wedge into the OpenAI-Sam Altman and Stability AI networks as many of them considered this to be the original mission of OpenAI. As M.G. cleverly put it, 'I'm reminded of Coca-Cola Classic. Safe Superintelligence sounds a lot like OpenAI Original,'" said Mehta.
  • Now hiring for the movement. "The larger enterprise software community will largely ignore this launch, but for serious AI aficionados it would be a dream to be part of a movement in Palo Alto or Tel Aviv—two magnificent cities that have largely defined the next-generation landscape and are on their way to defining the next one," said Mehta.

Dell Technologies, Supermicro building xAI supercomputer

Dell Technologies and Supermicro are building an AI factory with Nvidia for Elon Musk's xAI.

The buildout, announced in a post by Dell Technologies CEO Michael Dell, will power Grok, xAI's large language model.

Musk confirmed the deal in a post on X, but did note that Dell is assembling half the racks for the xAI supercomputer. He later added in a reply that Supermicro is doing the other half.

The xAI data center buildout highlights how infrastructure for generative AI has been a boom market and innovation hub. The profits, however, haven't trickled down to enterprise software vendors yet.

Dell Technologies outlined its AI factory strategy at Dell Technologies World. One half of the Dell strategy revolves around tight integration with Nvidia; the other will include AMD and other AI infrastructure players.

For Supermicro, the xAI deal will be a big win and also represents a close relationship with Nvidia. Supermicro built the first supercomputer for Nvidia a decade ago to work on AI. Supermicro CFO David Weigand said at a recent investment conference that the only thing holding the company's growth back has been supply. Supermicro's third quarter revenue was $3.85 billion, up 200% from a year ago.

Based on Supermicro's fourth quarter revenue outlook of $5.1 billion to $5.5 billion, the company is north of a $20 billion annual revenue run rate.
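The run-rate arithmetic behind that claim is simply the quarterly guidance midpoint annualized:

```python
# Annualizing the midpoint of Supermicro's $5.1B-$5.5B quarterly
# revenue outlook to sanity-check the "$20B run rate" claim.
low, high = 5.1, 5.5                      # quarterly outlook, $ billions
midpoint = (low + high) / 2               # 5.3
annual_run_rate = round(midpoint * 4, 1)  # annualize: 4 quarters
print(annual_run_rate)                    # → 21.2, i.e. north of $20B
```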

"The only thing that has restrained us to date is supply. There's no question that we would be further ahead in the numbers, because that's what's caused our backlog to grow. We'd be further ahead if we had more supply," said Weigand.

He added that the competition for AI infrastructure is only going to heat up. This week, HPE announced a broad partnership with Nvidia.

Weigand said:

"Everyone is running and rushing to the party. This is nothing new to us. It's really a lot of the same players out there. With the number of employees that we have, we're half engineers. We're very focused on what we do. We're not trying to be all things to all people. We're trying to build the very best customized servers and for some of the best companies in the world."


AWS re:Inforce, AI at ServiceNow, SAP Innovation | ConstellationTV Episode 82

This week on ConstellationTV episode 82, hear co-hosts Liz Miller and Holger Mueller analyze the latest enterprise #technology news and events (Sales Cloud & GROW from SAP Sapphire, #CX at Pegaworld, #security).

Then watch an interview between R "Ray" Wang and ServiceNow CSO Nick Tzitzon on the latest advancements, efficiencies, and opportunities from the platform company, and learn Holger's top five takeaways from Amazon Web Services (AWS) re:Inforce 2024.

0:00 - Introduction: Meet the Hosts
1:42 - Enterprise #technology news coverage
14:16 - #AI advancements and #innovation from ServiceNow
24:14 - AWS re:Inforce 2024 analysis
30:08 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT / 12:00 p.m. ET every other Wednesday!

Watch the episode on YouTube: https://www.youtube.com/watch?v=iGjzC49Ji4Q

SurrealDB raises $20 million in VC funding

SurrealDB raised $20 million in venture capital to bring its total to $26 million. The bet: Multi-model databases will be critical to enterprises looking to consolidate multiple databases so developers can move faster.

The financing round was led by FirstMark and Georgian. With AI workloads and multiple data silos proliferating, SurrealDB is looking to address developer pain points. The multi-model database is written entirely in the Rust programming language.

Holger Mueller, Constellation Research analyst, noted that SurrealDB is part of a band of next-generation databases that look to underpin modern applications.

Mueller said:

"The next generation applications of the 2020s are multi-model and challenging to create. At the same time developer capacity is restricted and top database developers command top dollar. Making it easier for enterprises to build these apps is what a multi-model database can offer--a single place where applications can tap documents, columnar, analytical and transactional data. Congrats to SurrealDB, which has a modern foundation being built on the language of the decade, Rust. Rust will give the offering extra heft with developers."
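The multi-model idea Mueller describes can be illustrated with a toy store that answers both document-style lookups and analytical queries from one place. This is a conceptual sketch in Python, not SurrealDB's actual API or query language:

```python
# Illustrative sketch of a multi-model store: one database serving
# document-style filters and columnar/analytical aggregates, so an
# application doesn't need separate document and analytics systems.
# This is invented for illustration; it is not SurrealDB's API.
class MultiModelStore:
    def __init__(self):
        self.docs = []  # document model: schemaless dicts

    def insert(self, doc: dict) -> None:
        self.docs.append(doc)

    def find(self, **filters):
        """Document-style lookup: match all given field values."""
        return [d for d in self.docs
                if all(d.get(k) == v for k, v in filters.items())]

    def average(self, field: str):
        """Analytical-style query over one 'column' of the data."""
        values = [d[field] for d in self.docs if field in d]
        return sum(values) / len(values) if values else None

db = MultiModelStore()
db.insert({"sku": "A1", "category": "shoes", "price": 40})
db.insert({"sku": "B2", "category": "shoes", "price": 60})
print(db.find(category="shoes"))  # both documents
print(db.average("price"))        # → 50.0
```

The pitch to developers is exactly this consolidation: one connection, one query surface, instead of gluing a document store to a separate analytical engine.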

Key points about SurrealDB:

  • SurrealDB has advanced security and access permissions.
  • The database includes indexing for AI workflows, machine learning inference and model processing.
  • The company also announced the beta launch of Surreal Cloud.
  • SurrealDB also has a management application called Surrealist.
  • The company is part of multiple open-source projects.

As for competition, SurrealDB plays in a market that includes MarkLogic, ArangoDB, OrientDB, Azure Cosmos DB, FoundationDB, Couchbase, and Apache Ignite among others.

Among those competitors, Couchbase is publicly traded. It has revenue of about $50 million a quarter and exited the first quarter with annual recurring revenue of $207.7 million. MarkLogic was acquired by Progress in 2023.


HPE unveils Private Cloud AI, broad Nvidia partnership aimed at genAI workloads

Hewlett Packard Enterprise and Nvidia teamed up to launch a set of private cloud offerings and integrations designed for generative AI workloads. Nvidia AI Computing by HPE will be available in the fall.

With the move, announced at HPE Discover 2024 in Las Vegas, HPE brings a broad portfolio to the AI computing race. Enterprises are building out on-premises infrastructure and buying AI-optimized servers in addition to using cloud computing.

HPE's partnership comes a few weeks after Dell Technologies launched a broad AI factory partnership with Nvidia. HPE is planning to leverage its channel, integration points with HPE Greenlake, high-performance computing portfolio and cooling expertise to woo enterprises.

The main attraction at HPE Discover 2024 is HPE Private Cloud AI, which deeply integrates Nvidia's accelerators, computing, networking and software with HPE AI storage, servers and Greenlake. HPE Private Cloud AI will also include an OpsRamp AI copilot that will help manage workloads and efficiency.

According to HPE, HPE Private Cloud AI will include a self-service cloud experience and four configurations to support various workloads and use cases. HPE also said that Nvidia AI Computing by HPE offerings and services will be offered by Deloitte, HCL Tech, Infosys, TCS and Wipro.

Antonio Neri, CEO of HPE, said during his keynote that enterprises need more turnkey options for AI workloads. He was joined by Nvidia CEO Jensen Huang. At Computex, Nvidia said it will move to an annual release cycle for GPUs and accelerators along with a raft of other AI-optimized hardware. Neri said that HPE has been at the leading edge of innovation and supercomputing and will carry that expertise into AI. "Our innovation will lead to new breakthroughs in edge to cloud," said Neri. "Now it leads to AI and will catapult the enterprise of today and tomorrow."

"AI is hard and it is complicated. It is tempting to rush into AI, but innovation at any cost is dangerous," said Neri, who added that HPE's architecture will be more secure, feature guardrails and offer turnkey solutions. "We are proud of our supercomputing leadership. It's what positions us to lead in the generative AI future."

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"HPE is working hard fighting for market share for on-premises AI computing. It's all about AI in 2024, and that means partnering with Nvidia. Co-developing HPE Private Cloud AI as a turnkey and full stack is an attractive offering for CXOs, as it takes the integration burden off of their teams and lets them focus on what matters most for their enterprise, which is building AI-powered next-gen apps."

Constellation Research analyst Andy Thurai said HPE can gain traction in generative AI systems due to integration. 

Thurai said:

"What HPE offers is an equivalent of 'AI in a box.' It will offer the combination of hardware, software, network, storage, GPUs and anything else to run efficient AI solutions. For enterprises, it's efficient to already know the solutions, price points and optimization points. Today, most enterprises that I know are in an AI experimentation mode. Traction may not be that great initially."

HPE bets on go-to-market, simplicity, liquid cooling expertise

HPE's latest financial results topped estimates, and Neri said enterprises are buying AI systems. HPE's plan is to differentiate with capabilities like liquid cooling, one of three main approaches to cooling AI systems. HPE also has traction with enterprise accounts and saw AI system revenue surge accordingly. Neri said the company's cooling systems will be a differentiator as Nvidia Blackwell systems gain traction.

Nvidia's Huang agreed on the liquid cooling point. "Nobody has plumbed more liquid than Antonio," quipped Huang. 

Here's what HPE Private Cloud AI includes:

  • Support for inference, fine-tuning and RAG workloads using proprietary data.
  • Data privacy, security and governance controls.
  • A cloud experience that includes ITOps and AIOps tools powered by Greenlake and OpsRamp, which provides observability for the stack, including Nvidia InfiniBand and Spectrum Ethernet switches.
  • OpsRamp integration with CrowdStrike APIs.
  • Flexible consumption models.
  • Nvidia AI Enterprise software including Nvidia NIM microservices.
  • HPE AI Essentials software including foundation models and a variety of services for data and model compliance.
  • Integration that includes Nvidia Spectrum-X Ethernet networking, HPE GreenLake for File Storage, and HPE ProLiant servers with support for Nvidia L40S, H100 NVL Tensor Core GPUs and the Nvidia GH200 NVL2 platform.
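The RAG workloads mentioned in the first bullet follow a simple pattern: retrieve the proprietary document most relevant to a question, then hand it to a model as context. Here's a minimal sketch; the word-overlap scoring and the stubbed model call are illustrative assumptions, not HPE's implementation:

```python
# Minimal sketch of retrieval-augmented generation (RAG) over
# proprietary documents: score documents by word overlap with the
# question, then build a prompt from the best match. A real stack
# would use embeddings and send the prompt to an LLM endpoint.
import re

def words(text: str) -> set:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, documents: list) -> str:
    """Pick the document sharing the most words with the question."""
    q = words(question)
    return max(documents, key=lambda d: len(q & words(d)))

def rag_prompt(question: str, documents: list) -> str:
    context = retrieve(question, documents)
    # In deployment this prompt would go to an inference service;
    # here we just return the assembled prompt.
    return f"Context: {context}\nQuestion: {question}"

docs = ["Our VPN requires the corporate certificate.",
        "Expense reports are due on the 5th."]
print(rag_prompt("How do I set up the VPN?", docs))
```

Grounding answers in retrieved internal documents, rather than fine-tuning, is why the private-cloud pitch emphasizes keeping proprietary data under the enterprise's own governance controls.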

The tie-up with Nvidia and HPE went beyond the private cloud effort. HPE said it will support Nvidia's latest GPUs, CPUs and Superchip across its Cray high-performance computing portfolio as well as ProLiant servers. The support includes current Nvidia GPUs as well as support for the roadmap going forward including Blackwell, Rubin and Vera architectures.

HPE also said GreenLake for File Storage now has Nvidia DGX BasePod certification and OVX storage validations.

Other news at HPE Discover 2024:

  • HPE is adding HPE Virtualization tools throughout its private cloud offerings. HPE Virtualization includes open source kernel-based virtual machine (KVM) with HPE's cluster orchestration software. HPE Virtualization is in preview with a release in the second half.
  • HPE Private Cloud will have native integration with HPE Alletra Storage MP for software defined storage as well as OpsRamp and Zerto for cyber resiliency.
  • HPE and Danfoss said they will collaborate on modular data center designs that deploy heat capture systems for external reuse. Hewlett Packard Labs will also have a series of demos on AI sustainability.

Top 5 Takeaways from IBM Think 2024 with Andy Thurai

Hear from Constellation analyst Andy Thurai on his top 5 takeaways💡 from IBM Think 2024:

1. The future of #AI is open.
2. #GenerativeAI model transparency
3. Consulting advantage
4. Platform advantage
5. IBM Concert

Watch the full #analysis below.

Watch the full analysis on YouTube: https://www.youtube.com/watch?v=-VeHrIQ1b_U

OpenAI and Microsoft: Symbiotic or future frenemies?

OpenAI and Microsoft: Symbiotic or future frenemies?

OpenAI has built momentum by closing a big partnership with Apple, a channel deal with PwC and a series of enterprise wins. These events put an exclamation point on the enterprise traction that OpenAI is seeing directly and raise a big question: Will OpenAI eventually compete with its primary investor Microsoft?

Let's start with the big stuff. Apple's WWDC keynote outlined the company's generative AI strategy, which melds on-device processing, a private cloud and a partnership with OpenAI. OpenAI will handle a big share of iPhone queries that need to go to the cloud, even though Bloomberg reported no money is being exchanged. In other words, OpenAI is like an NFL Super Bowl halftime performer: it's all about the exposure, marketing and distribution. Rest assured that Apple has its own large language models (LLMs) to ensure it is closest to the customer experience, but OpenAI is in the mix.
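Apple's hybrid design implies a router that keeps simple queries on-device and sends only heavier ones to a cloud model. The sketch below is hypothetical; Apple has not published its routing logic, and the token-budget heuristic is invented for illustration:

```python
# Hypothetical sketch of hybrid on-device/cloud query routing in the
# style of Apple's announced architecture. The budget and heuristic
# are assumptions made up for this illustration.
ON_DEVICE_TOKEN_BUDGET = 32  # assumed capacity of a small local model

def route(query: str) -> str:
    """Decide where a query runs, using word count as a crude proxy
    for how heavy the request is."""
    approx_tokens = len(query.split())
    if approx_tokens <= ON_DEVICE_TOKEN_BUDGET:
        return "on-device model"
    return "cloud model (e.g. OpenAI)"

print(route("What's the weather?"))                 # stays local
print(route("Summarize this long contract " * 20))  # goes to the cloud
```

The economics follow from the routing: the more queries a partner like OpenAI handles at the cloud tier, the more distribution it gets, even without direct payment.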

That Apple partnership, however, only highlighted other recent data points.

The big takeaway from these deals is that enterprises are going direct to OpenAI. Plenty of enterprises are exposed to OpenAI via Microsoft. The Microsoft-OpenAI partnership has made OpenAI the biggest ingredient brand since Intel.

It's clear that OpenAI doesn't intend to be just an ingredient brand. Yes, Microsoft is a huge OpenAI investor, but the latter has bigger ambitions and a looming IPO at some point. Naming Sarah Friar CFO and Kevin Weil chief product officer only drives home that OpenAI is building out its management team ahead of an IPO.

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly and is brought to you by Hitachi Vantara.

What's next? Okta CEO Todd McKinnon said on CNBC something that a few observers have been wondering. McKinnon noted that Microsoft is effectively outsourcing its best AI R&D to OpenAI. Microsoft could become more like a consultancy than an innovator. I'm not sure that's exactly fair given Microsoft has been rolling out its own models and more choice, but McKinnon's perception isn't that surprising.

After all, we at Constellation Research have been debating this topic. Microsoft went with OpenAI to be first to market and the bet went swimmingly. The long run may look different for both sides.

My bet: OpenAI will increasingly compete with Microsoft to some degree, but the software and cloud giant will benefit either way since it is an investor. Over time, OpenAI and Microsoft will more resemble frenemies. The partnership will be a great business school case study a few decades from now. The frenemy outcome looks even more likely when you consider that regulators are sniffing around OpenAI and Microsoft. Looking like competitors could suit both companies in the near term.

Ray Wang, CEO of Constellation Research, said:

"For OpenAI to be taken seriously, Microsoft must let it partner with the entire ecosystem or face threats of anti-trust. The symbiotic relationship today was born out of Microsoft's desire to catch up and leap ahead in AI. But going forward, Microsoft is making investments to build its own capabilities. It would behoove Sam Altman to just partner with Microsoft. For AI to succeed, the approach Meta is taking will ultimately win - open source, open, and part of a larger ecosystem for data collectives."

Barry Briggs, analyst with Directions on Microsoft and former CTO of Microsoft's IT org, said:

"Tiny OpenAI has not one but three tigers by the tail, managing multibillion dollar relationships with not only Microsoft but Apple and Oracle as well. With growing demands from each of these mega players, OpenAI will, over time, be forced to navigate its own course among them – which may result in its “special relationship” with Microsoft becoming more distant. Microsoft in turn, hardly a wallflower in AI, has not only created its own language models but has already started partnering with other firms, Mistral being an example. Symbiotic? Maybe. Exclusive? Hardly."
