
Dell Technologies, Supermicro building xAI supercomputer

Dell Technologies and Supermicro are building an AI factory with Nvidia for Elon Musk's xAI.

The buildout, announced in a post by Dell Technologies CEO Michael Dell, will power Grok, xAI's large language model.

Musk confirmed the deal in a post on X, noting that Dell is assembling half of the racks for the xAI supercomputer. He later added in a reply that Supermicro is building the other half.

The xAI data center buildout highlights how infrastructure for generative AI has been a boom market and innovation hub. The profits, however, haven't trickled down to enterprise software vendors yet.

Dell Technologies outlined its AI factory strategy at Dell Technologies World. One part of the Dell strategy revolves around tight integration with Nvidia. The other half of that strategy will include AMD and other AI infrastructure players.

For Supermicro, the xAI deal will be a big win and also represents a close relationship with Nvidia. Supermicro built the first supercomputer for Nvidia a decade ago to work on AI. Supermicro CFO David Weigand said at a recent investment conference that the only thing holding the company's growth back has been supply. Supermicro's third quarter revenue was $3.85 billion, up 200% from a year ago.

Based on Supermicro's fourth quarter revenue outlook of $5.1 billion to $5.5 billion, the company is north of a $20 billion annual revenue run rate.
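The run-rate claim follows directly from the guidance range. A quick illustrative calculation (annualized run rate is simply quarterly revenue times four; figures are the guided range from the article):

```python
# Supermicro's guided Q4 revenue range, in $ billions (from the article).
low, high = 5.1, 5.5

# Annualized run rate = quarterly revenue x 4.
run_rate_low = low * 4    # 20.4
run_rate_high = high * 4  # 22.0

print(f"Implied annual run rate: ${run_rate_low:.1f}B to ${run_rate_high:.1f}B")
```

Even the low end of the range puts the company above a $20 billion annual run rate, as stated.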

"The only thing that has restrained us to date is supply. There's no question that we would be further ahead in the numbers; that's what's caused our backlog to grow. We'd be further ahead if we had more supply," said Weigand.

He added that the competition for AI infrastructure is only going to heat up. This week, HPE announced a broad partnership with Nvidia.

Weigand said:

"Everyone is running and rushing to the party. This is nothing new to us. It's really a lot of the same players out there. With the number of employees that we have, we're half engineers. We're very focused on what we do. We're not trying to be all things to all people. We're trying to build the very best customized servers for some of the best companies in the world."


AWS re:Inforce, AI at ServiceNow, SAP Innovation | ConstellationTV Episode 82

This week on ConstellationTV episode 82, hear co-hosts Liz Miller and Holger Mueller analyze the latest enterprise #technology news and events (Sales Cloud & GROW from SAP Sapphire, #CX at Pegaworld, #security).

Then watch an interview between R "Ray" Wang and ServiceNow CSO Nick Tzitzon on the latest advancements, efficiencies, and opportunities from the platform company, and learn Holger's top five takeaways from Amazon Web Services (AWS) re:Inforce 2024.

0:00 - Introduction: Meet the Hosts
1:42 - Enterprise #technology news coverage
14:16 - #AI advancements and #innovation from ServiceNow
24:14 - AWS re:inforce 2024 analysis
30:08 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT/12:00 p.m. ET every other Wednesday!

Watch on ConstellationTV: https://www.youtube.com/embed/iGjzC49Ji4Q?si=0Cxe1XbCZH8KLMtv

SurrealDB raises $20 million in VC funding

SurrealDB raised $20 million in venture capital to bring its total to $26 million. The bet: Multi-model databases will be critical to enterprises looking to consolidate multiple databases so developers can move faster.

The financing round was led by FirstMark and Georgian. With AI workloads and multiple data silos to contend with, SurrealDB is looking to address developer pain points. The multi-model database is also written entirely in the Rust programming language.

Holger Mueller, Constellation Research analyst, noted that SurrealDB is part of a band of next-generation databases that look to underpin modern applications.

Mueller said:

"The next generation applications of the 2020s are multi-model and challenging to create. At the same time developer capacity is restricted and top database developers command top dollar. Making it easier for enterprises to build these apps is what a multi-model database can offer--a single place where applications can tap documents, columnar, analytical and transactional data. Congrats to SurrealDB, which has a modern foundation being built on the language of the decade, Rust. Rust will give the offering extra heft with developers."
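To make the multi-model idea concrete, here is a minimal, hypothetical sketch of a single store answering both document-style (key/JSON) and relational-style (filtered, tabular) queries over the same records. This illustrates the concept only; it is not SurrealDB's actual API (SurrealDB's real interface is its SurrealQL query language and official SDKs):

```python
class MultiModelStore:
    """Toy store: one engine, queried as documents or as a table."""

    def __init__(self):
        # A single underlying representation: id -> dict.
        self.records = {}

    # Document-model access: fetch or store a whole JSON-like record by key.
    def get_document(self, record_id):
        return self.records.get(record_id)

    def put_document(self, record_id, doc):
        self.records[record_id] = doc

    # Relational-model access: project columns and filter rows,
    # SELECT-style, over the very same records.
    def select(self, columns, where=lambda row: True):
        return [
            {col: row.get(col) for col in columns}
            for row in self.records.values()
            if where(row)
        ]


store = MultiModelStore()
store.put_document("u1", {"name": "Ada", "role": "engineer", "age": 36})
store.put_document("u2", {"name": "Grace", "role": "admiral", "age": 85})

doc = store.get_document("u1")                          # document-style lookup
rows = store.select(["name"], lambda r: r["age"] > 50)  # table-style query
```

The point Mueller makes is the consolidation: one store, one operational surface, multiple access models, rather than a separate document database and relational database stitched together by the application team.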

Key points about SurrealDB:

  • SurrealDB has advanced security and access permissions.
  • The database includes indexing for AI workflows, machine learning inference and model processing.
  • The company also announced the beta launch of Surreal Cloud.
  • SurrealDB also has a management application called Surrealist.
  • The company is part of multiple open-source projects.

As for competition, SurrealDB plays in a market that includes MarkLogic, ArangoDB, OrientDB, Azure Cosmos DB, FoundationDB, Couchbase, and Apache Ignite among others.

Among those competitors, Couchbase is publicly traded. It has revenue of about $50 million a quarter and exited the first quarter with annual recurring revenue of $207.7 million. MarkLogic was acquired by Progress in 2023.


HPE unveils Private Cloud AI, broad Nvidia partnership aimed at genAI workloads

Hewlett Packard Enterprise and Nvidia teamed up to launch a set of private cloud offerings and integrations designed for generative AI workloads. Nvidia AI Computing by HPE will be available in the fall.

With the move, announced at HPE Discover 2024 in Las Vegas, HPE enters a broad portfolio into the AI computing race. Enterprises are building out on-premises infrastructure and buying AI-optimized servers in addition to using cloud computing.

HPE's partnership comes a few weeks after Dell Technologies launched a broad AI factory partnership with Nvidia. HPE is planning to leverage its channel, integration points with HPE Greenlake, high-performance computing portfolio and cooling expertise to woo enterprises.

The main attraction at HPE Discover 2024 is HPE Private Cloud AI, which deeply integrates Nvidia's accelerators, computing, networking and software with HPE AI storage, servers and Greenlake. HPE Private Cloud AI will also include an OpsRamp AI copilot that will help manage workloads and efficiency.


According to HPE, HPE Private Cloud AI will include a self-service cloud experience and four configurations to support workloads and use cases. HPE also said that Nvidia AI Computing by HPE offerings and services will also be offered by Deloitte, HCL Tech, Infosys, TCS and Wipro.

Antonio Neri, CEO of HPE, said during his keynote that enterprises need more turnkey options for AI workloads. He was joined by Nvidia CEO Jensen Huang. At Computex, Nvidia said it will move to an annual cycle of GPUs and accelerators along with a bunch of other AI-optimized hardware. Neri said that HPE has been at the leading edge of innovation and supercomputing and will leverage that expertise in AI. "Our innovation will lead to new breakthroughs in edge to cloud," said Neri. "Now it leads to AI and catapults the enterprise of today and tomorrow."

"AI is hard and it is complicated. It is tempting to rush into AI, but innovation at any cost is dangerous," said Neri, who added that HPE's architecture will be more secure, feature guardrails and offer turnkey solutions. "We are proud of our supercomputing leadership. It's what positions us to lead in the generative AI future."

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"HPE is working hard fighting for market share for on-premises AI computing. It's all about AI in 2024, and that means partnering with Nvidia. Co-developing HPE Private Cloud AI as a turnkey and full stack is an attractive offering for CXOs, as it takes the integration burden off of their teams and lets them focus on what matters most for their enterprise, which is building AI powered next-gen apps."

Constellation Research analyst Andy Thurai said HPE can gain traction in generative AI systems due to integration. 

Thurai said:

"What HPE offers is an equivalent of 'AI in a box.' It will offer the combination of hardware, software, network, storage, GPUs and anything else to run efficient AI solutions. For enterprises, it's efficient to already know the solutions, price points and optimization points. Today, most enterprises that I know are in an AI experimentation mode. Traction may not be that great initially."

HPE bets on go-to-market, simplicity, liquid cooling expertise

HPE's latest financial results topped estimates, and Neri said enterprises are buying AI systems. HPE plans to differentiate with liquid cooling, one of three approaches to cooling systems. HPE also has traction with enterprise accounts and saw AI system revenue surge accordingly. Neri said the company's cooling systems will be a differentiator as Nvidia Blackwell systems gain traction.

Nvidia's Huang agreed on the liquid cooling point. "Nobody has plumbed more liquid than Antonio," quipped Huang. 

Here's what HPE Private Cloud AI includes:

  • Support for inference, fine-tuning and RAG workloads using proprietary data.
  • Controls for data privacy, security and governance.
  • A cloud experience that includes ITOps and AIOps tools powered by Greenlake and OpsRamp, which provides observability for the stack including Nvidia InfiniBand and Spectrum Ethernet switches.
  • OpsRamp integration with CrowdStrike APIs.
  • Flexible consumption models.
  • Nvidia AI Enterprise software including Nvidia NIM microservices.
  • HPE AI Essentials software including foundation models and a variety of services for data and model compliance.
  • Integration that includes Nvidia Spectrum-X Ethernet networking, HPE GreenLake for File Storage, and HPE ProLiant servers with support for Nvidia L40S, H100 NVL Tensor Core GPUs and the Nvidia GH200 NVL2 platform.

The tie-up with Nvidia and HPE went beyond the private cloud effort. HPE said it will support Nvidia's latest GPUs, CPUs and Superchip across its Cray high-performance computing portfolio as well as ProLiant servers. The support includes current Nvidia GPUs as well as support for the roadmap going forward including Blackwell, Rubin and Vera architectures.

HPE also said GreenLake for File Storage now has Nvidia DGX BasePod certification and OVX storage validations.

Other news at HPE Discover 2024:

  • HPE is adding HPE Virtualization tools throughout its private cloud offerings. HPE Virtualization includes open source kernel-based virtual machine (KVM) with HPE's cluster orchestration software. HPE Virtualization is in preview with a release in the second half.
  • HPE Private Cloud will have native integration with HPE Alletra Storage MP for software defined storage as well as OpsRamp and Zerto for cyber resiliency.
  • HPE and Danfoss said they will collaborate on modular data center designs that deploy heat capture systems for external reuse. HP Labs will also have a series of demos on AI sustainability.

Top 5 Takeaways from IBM Think 2024 with Andy Thurai


Hear from Constellation analyst Andy Thurai on his top 5 takeaways 💡 from IBM Think 2024:

⚡ The future of #AI is open.
⚡ #GenerativeAI model transparency
⚡ Consulting advantage
⚡ Platform advantage
⚡ IBM Concert

Watch the full #analysis below ⬇

Watch on YouTube: https://www.youtube.com/embed/-VeHrIQ1b_U?si=IB5iiKpfiskTHt-D

OpenAI and Microsoft: Symbiotic or future frenemies?

OpenAI has built momentum by closing a big partnership with Apple, a channel deal with PwC and a series of enterprise wins. These events put an exclamation point on the enterprise traction that OpenAI is seeing directly and raise a big question: Will OpenAI eventually compete with its primary investor Microsoft?

Let's start with the big stuff. Apple's WWDC keynote outlined the company's generative AI strategy, which melds on-device processing, private cloud and a partnership with OpenAI. OpenAI will be a big part of iPhone queries that need to go to the cloud even though Bloomberg reported there is no money being exchanged. In other words, OpenAI is like an NFL Super Bowl halftime performer—it’s all about the exposure, marketing and distribution. Rest assured that Apple has its own large language models (LLMs) to ensure it is closest to the customer experience, but OpenAI is in the mix.

That Apple partnership, however, only highlighted other recent data points. Consider:

The big takeaway from these deals is that enterprises are going direct to OpenAI. Plenty of enterprises are exposed to OpenAI via Microsoft. The Microsoft-OpenAI partnership has made OpenAI the biggest ingredient brand since Intel.

It's clear that OpenAI doesn't intend to be just an ingredient brand. Yes, Microsoft is a huge OpenAI investor, but the latter has bigger ambitions and a looming IPO at some point. Naming Sarah Friar CFO and Kevin Weil chief product officer only drives home that OpenAI is building out its management team ahead of an IPO.

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly and is brought to you by Hitachi Vantara.

What's next? Okta CEO Todd McKinnon said on CNBC something that a few observers have been wondering. McKinnon noted that Microsoft is effectively outsourcing its best AI R&D to OpenAI. Microsoft could become more like a consultancy than an innovator. I'm not sure that's exactly fair given Microsoft has been rolling out its own models and more choice, but McKinnon's perception isn't that surprising.

After all, we at Constellation Research have been debating this topic. Microsoft went with OpenAI to be first to market and the bet went swimmingly. The long run may look different for both sides.

My bet: OpenAI will increasingly compete with Microsoft to some degree, but the software and cloud giant will benefit either way since it is an investor. Over time, OpenAI and Microsoft will more resemble frenemies. The partnership will be a great business school case study a few decades from now. The frenemy outcome looks even more likely when you consider that regulators are sniffing around OpenAI and Microsoft. Looking like competitors could suit both companies in the near term.

Ray Wang, CEO of Constellation Research, said:

"For OpenAI to be taken seriously, Microsoft must let it partner with the entire ecosystem or face threats of anti-trust. The symbiotic relationship today was born out of Microsoft's desire to catch up and leap ahead in AI. But going forward, Microsoft is making investments to build its own capabilities. It would behoove Sam Altman to just partner with Microsoft. For AI to succeed, the approach Meta is taking will ultimately win - open source, open, and part of a larger ecosystem for data collectives."

Barry Briggs, analyst with Directions on Microsoft and former CTO of Microsoft's IT org, said:

"Tiny OpenAI has not one but three tigers by the tail, managing multibillion dollar relationships with not only Microsoft but Apple and Oracle as well. With growing demands from each of these mega players, OpenAI will, over time, be forced to navigate its own course among them – which may result in its “special relationship” with Microsoft becoming more distant. Microsoft in turn, hardly a wallflower in AI, has not only created its own language models but has already started partnering with other firms, Mistral being an example. Symbiotic? Maybe. Exclusive? Hardly."


Connecting the human dots between Apple, Bill Walton and skill building

As new technologies such as generative AI and robotics proliferate, the connection between humans will become even more important. That's a high-level takeaway from DisrupTV Episode 366, which took a few interesting turns.

Christopher Lochhead, thirteen-time No. 1 bestselling author and a "godfather" of category design, and Matt Beane, Author of The Skill Code: How to Save Human Ability in an Age of Intelligent Machines and UCSB professor, were the guests that connected the human dots between three seemingly disparate topics.

Apple

Lochhead said Apple pulled off a massive coup with its AI presentation this week largely because the company took a new technology and made it human. "Tim Cook pulled something absolutely legendary here," said Lochhead. "What we had this week was a master class in category design and business strategy. In category design, one of the things we teach entrepreneurs and marketers is listen to the words. Listen to the words. Most people don't pay attention to the words. Apple this week did not announce a new product. Apple announced a new category design, a new category of AI called a personal intelligence system. And they branded it Apple Intelligence."

He added that most people forget that Apple was the category designer of personal computing. Apple put the focus on where it should be for Apple--on the people.

"Strategically, it's beyond genius," he said. "AI is not a new category of technology. AI is every category of technology. It's not a product. It's an enabling technology. Apple is going to use AI as a personal system."

"The last piece of this is that you don't have a strategy unless you can put it on one page. You lead the future and that's exactly what Tim Cook did. It was the result of clarity of strategy and a focus on the categories where Apple wins."

Bill Walton, the teacher

Lochhead met Walton through complete serendipity. He was speaking at an Oracle event where Walton was the closing speaker.

"If you know anything about Bill and that magical mystical deadhead, he read everything. He read because he had that stutter. He read because his mother was a librarian. He spent his entire childhood reading, playing basketball and on his bike. He was an incredibly learned man."

Naturally, Walton read Lochhead's books, notably Play Bigger, and they became fast friends.

"After one of my dear friends was murdered, Bill called me three times a week for the six months after it happened. He was on the road calling games and doing all this stuff at an incredibly busy time in his life, and he always wanted to make sure how I was. A text message from Bill Walton or an email from Bill Walton would just go on and on about how he loved you, and 'thank you for my life.' He said, thank you for my life to everybody. Thank you for my life.

"He was a dichotomy because he could talk about his stories and his life forever, and you would think a person like that might be egotistical. Yet as he was doing it, he was connecting with you, empathizing with you and he wanted to know how you were. He deeply gave a shit about other people.

"He made you feel like the greatest person in the world, he was the greatest, he taught me and everybody how to be a fan.

"He's left me with many things, and one is teaching. He said to me at the time I was calling myself retired just like an uncle: "Chris, you can't use the word retired. You're not retired. You're just like John Wooden. You're a teacher. Go be a teacher."

"What there is to do? I think it is to live like Bill. Bill embraced different. He followed the things that he loved and the people that he loved, he allowed himself to fall in love quickly and to support other people."

Skills and humans in the AI and robotic age

That human connection is also going to be critical for skill building, argued Beane. In his book, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, Beane examined various technologies through a lens of skill building--that ongoing connection between an expert and a novice. "To have table stakes, you've got to have that knowledge to be able to play. But to build skill, there's 160,000 years' worth of archaeological evidence that we build skill with elbow-to-elbow contact with somebody who knows more, trying to get some real work done," said Beane.

Beane used robotic surgery as example of how new technologies are inventing new ways to work and build skill. What's lost is that human skill building connection and mentorship.

"A novice by definition is slower and makes more mistakes than an expert. You put a tool in an expert's hands that allows them to do more, better, by themselves. They're gonna love that deal. They take that deal. And it means they're just gonna need help from that novice less."

The trick to leveraging today's new technologies decades from now will be building productivity gains in a way where people also build their capabilities. Beane argued that roles that require a physical presence will adapt better to new technologies and build skills relevant to those workers who are remote. “If you have authority, run a budget, can invest and are developing tools you can build skills, but you have great responsibility to bring novices along for the ride," said Beane.

"If you're going to make healthy progress toward skill and keep it healthy for other people, the challenge, complexity, and connection matters. Human connections like those that built Walton's story, bonds of trust and respect. We don't think of those as connected to your skill journey. They are essential. The challenge is that the world has become a bit of a padded playground in places, and that is dangerous to skill. You've got to struggle, you've got to sweat and you have to be uncomfortable. Humans don't like being uncomfortable, but it is required."


GPUs, Arm instances account for larger portion of cloud costs, says Datadog

GPU instances are taking a larger share of cloud enterprise spending and now are 14% of compute costs compared to 10% a year ago, according to a Datadog report analyzing AWS customer usage.

The report highlights how enterprises are experimenting with training and inference for large language models. A report from Flexera also highlighted how enterprises were experimenting with AI workloads. Datadog said:

"GPU-based EC2 instance types generally cost more than instances that don’t use GPUs. But the most widely used type—the G4dn, used by 74 percent of GPU adopters—is also the least expensive. This suggests that many customers are experimenting with AI, applying the G4dn to their early efforts in adaptive AI, machine learning (ML) inference, and small-scale training. We expect that as these organizations expand their AI activities and move them into production, they will be spending a larger proportion of their cloud compute budget on GPU."

That increased spending is good for Nvidia as well as AWS customers using the cloud vendor's Trainium and Inferentia chips. The focus on GPU instances may also benefit AMD, which is rolling out its new accelerators.

Arm appears to be another downstream winner as cloud workloads shift toward GPUs. Arm-based CPUs are also popular on AWS as enterprises leverage Graviton2 processors. Arm-based instances account for only 18% of EC2 compute costs, but that share has doubled from a year ago.
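The relative moves in those Datadog figures are worth spelling out. An illustrative back-of-the-envelope (the 14%/10% GPU shares and 18% Arm share come from the report as cited above; the year-ago Arm share is inferred from "double from a year ago"):

```python
# Shares of EC2 compute cost, per the Datadog report cited above.
gpu_now, gpu_prior = 0.14, 0.10   # GPU instances: 14% now vs. 10% a year ago
arm_now = 0.18                    # Arm-based instances today
arm_prior = arm_now / 2           # "double from a year ago" implies roughly 9%

# Relative growth in each share over the year.
gpu_growth = (gpu_now - gpu_prior) / gpu_prior  # 40% relative growth
arm_growth = (arm_now - arm_prior) / arm_prior  # 100% relative growth
```

Even though both remain minority shares of spend, a 40% relative jump in GPU share and a doubling of Arm share in a single year explains why both Nvidia and Arm look like structural winners here.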


Datadog noted:

"Arm-based instances still account for only a minority of EC2 compute spending, but the increase we’ve seen over the last year has been steady and sustained. This looks to us as if organizations are beginning to update their applications and take advantage of more efficient processors to slow the growth of their compute spend overall."

Overall, enterprises are mixing various compute instances and containers to optimize costs, but many aren't adopting the latest instance types: Datadog found that 83% of organizations are still using previous-generation EC2 instance types.



Adobe delivers strong Q2, ups outlook on AI monetization, new customers

Adobe reported a better-than-expected second quarter as the company expanded its customer base due to generative AI features.

The company reported second-quarter earnings of $3.49 a share on revenue of $5.31 billion, up 10% from a year ago. Non-GAAP earnings in the quarter were $4.48 a share.

Wall Street was expecting Adobe to deliver second quarter earnings of $4.39 a share on revenue of $5.29 billion.

Going into the earnings report, Wall Street was most concerned about Adobe's ability to monetize generative AI. Those concerns appeared following Adobe's first quarter report and were only magnified as other enterprise software vendors disappointed investors.

Adobe CEO Shantanu Narayen said the company's "highly differentiated approach to AI and innovative product delivery" is delivering value to current customers and attracting new ones.

The company's Digital Media revenue was $3.91 billion, up 11% from a year ago. Most of that was Creative Cloud revenue, but Document Cloud delivered 19% revenue growth compared to a year ago.

Digital Experience revenue was $1.33 billion, up 9% from a year ago.

In prepared remarks, Narayen said:

"In Creative Cloud, we have invested in training our Firefly family of creative generative AI models with a proprietary data set and delivering AI functionality within our flagship products including Photoshop, Illustrator, Lightroom and Premiere."

He added that Firefly has been used to generate more than nine billion images.

As for Document Cloud, Narayen said Acrobat AI Assistant is now available as an add-on subscription for Reader and Acrobat enterprise customers.

Adobe raised its guidance with third quarter revenue of $5.33 billion to $5.38 billion with non-GAAP earnings of $4.50 a share to $4.55 a share. For fiscal 2024, Adobe projected revenue of $21.4 billion to $21.5 billion with non-GAAP earnings of $18 a share to $18.20 a share.

Other key points:

  • Adobe is extending its applications to integrate third-party multi-modal LLMs.
  • The company is seeing early success monetizing AI across its Digital Media and Digital Experience platforms.
  • Adobe is seeing strong usage and demand for AI across all customer segments.
  • The company said it was seeing new demand for Creative Cloud apps powered by new releases and digital channels.


Epicor beefs up AI-driven ERP vision via acquisition, new launches

Epicor has acquired two companies in recent months as it rounds out its strategy to infuse artificial intelligence across its ERP platform.

The company, which recently passed the $1 billion mark in annual recurring revenue, on Wednesday announced the acquisition of KYKLO, which provides product information management and lead-gen tools for manufacturers and distributors.

Epicor CEO Steve Murphy said the purchase is part of the company's AI-driven cognitive ERP vision that aims to turn systems of record into an insights engine.


KYKLO will complement Epicor's Commerce software with product information, real-time catalogs and content syndication.

The KYKLO acquisition follows last month's purchase of Smart Software, which provides cloud inventory planning and optimization applications. Smart Software was already an Epicor independent software vendor partner, with products integrated into multiple Epicor ERP modules.

At Epicor's Insights 2024 user conference last month, the company launched its Epicor Grow portfolio, which includes AI and business intelligence tools aimed at the supply chain.

Epicor Grow includes generative AI, machine learning, analytics and natural language processing for more than 200 industry use cases.

The Epicor Grow portfolio includes Epicor Prism, a genAI service across Epicor Industry ERP Cloud, and Epicor Grow AI, which surfaces insights across industries. Arturo Buzzalino, VP of Products and Innovation at Epicor, said Prism is the company's first LLM pipeline.

Epicor also launched Epicor Grow Inventory Forecasting, which leverages forecasting engines from Smart Software, Epicor FP&A and Epicor Grow BI.

The company also launched Epicor Grow Data Platform to manage enterprise data, create pipelines and leverage business intelligence.

