Editor in Chief of Constellation Insights
Constellation Research
About Larry Dignan:
Dignan was most recently Celonis Media’s Editor-in-Chief, where he sat at the intersection of media and marketing. He is the former Editor-in-Chief of ZDNet and has covered the technology industry and transformation trends for more than two decades, publishing articles in CNET, Knowledge@Wharton, Wall Street Week, Interactive Week, The New York Times, and Financial Planning.
He is also an Adjunct Professor at Temple University and a member of the Advisory Board for The Fox Business School's Institute of Business and Information Technology.
Constellation Insights covers the buy side and sell side of enterprise tech with news, analysis, profiles, interviews, and event coverage of vendors, as well as Constellation Research's community and…
Dell Technologies and Supermicro are building an AI factory with Nvidia for Elon Musk's xAI.
Dell Technologies outlined its AI factory strategy at Dell Technologies World. One part of the Dell strategy revolves around tight integration with Nvidia. The other half of that strategy will include AMD and other AI infrastructure players.
For Supermicro, the xAI deal will be a big win and also represents a close relationship with Nvidia. Supermicro built the first supercomputer for Nvidia a decade ago to work on AI. Supermicro CFO David Weigand said at a recent investment conference that the only thing holding the company's growth back has been supply. Supermicro's third quarter revenue was $3.85 billion, up 200% from a year ago.
Based on Supermicro's fourth quarter revenue outlook of $5.1 billion to $5.5 billion, the company is north of a $20 billion annual revenue run rate.
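The run-rate claim is simple annualization; a quick back-of-the-envelope sketch (figures as cited above, Python used only for the arithmetic):

```python
# Annualizing Supermicro's fourth-quarter revenue guidance, as cited
# above (figures in USD).
low_q, high_q = 5.1e9, 5.5e9  # quarterly revenue outlook

# An annual run rate extrapolates one quarter across four quarters.
annual_low, annual_high = low_q * 4, high_q * 4  # 20.4e9 and 22e9
print(f"${annual_low / 1e9:.1f}B to ${annual_high / 1e9:.1f}B annualized")
# Even the low end of guidance clears a $20 billion annual run rate.
```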
"The only thing that has restrained us to date is supply. There's no question that we would be further ahead in the numbers because that's why what's caused our backlog to grow, we'd be further ahead if we had more supply," said Weigand.
"Everyone is running and rushing to the party. This is nothing new to us. It's really a lot of the same players out there. With the number of employees that we have, we're half engineers. We're very focused on what we do. We're not trying to be all things to all people. We're trying to build the very best customized servers and for some of the best companies in the world."
Vice President and Principal Analyst
Constellation Research
Holger Mueller is VP and Principal Analyst at Constellation Research, covering the fundamental enablers of the cloud: IaaS, PaaS and next-generation applications, with forays up the tech stack into Big Data and Analytics, HR Tech, and sometimes SaaS. Holger provides strategy and counsel to key clients, including Chief Information Officers, Chief Technology Officers, Chief Product Officers, Chief HR Officers, investment analysts, venture capitalists, sell-side firms, and technology buyers.
Coverage Areas:
Future of Work
Tech Optimization & Innovation
Background:
Before joining Constellation Research, Mueller was VP of Products for NorthgateArinso, a KKR company. There, he led the transformation of products to the cloud and laid the foundation for new Business…
Vice President & Principal Analyst
Constellation Research
About Liz Miller:
Liz Miller is Vice President and Principal Analyst at Constellation, focused on the org-wide team sport known as customer experience. While covering CX as an enterprise strategy, Miller zeroes in on the functional demands of Marketing and Service, the evolving role of the Chief Marketing Officer, the rise of the Chief Experience Officer, the evolution of customer engagement, and the rising requirement for a new security posture that accounts for the threat to brand trust in this age of AI. With over 30 years in marketing, Miller offers strategic guidance on the leadership, business transformation and technology requirements to deliver on today’s CX strategies. She has worked with global marketing organizations to transform…
This week on ConstellationTV episode 82, hear co-hosts Liz Miller and Holger Mueller analyze the latest enterprise #technology news and events (Sales Cloud & GROW from SAP Sapphire, #CX at Pegaworld, #security).
Then watch an interview between R "Ray" Wang and ServiceNow CSO Nick Tzitzon on the latest advancements, efficiencies, and opportunities from the platform company, and learn Holger's top five takeaways from Amazon Web Services (AWS) re:Inforce 2024.
0:00 - Introduction: Meet the Hosts
1:42 - Enterprise #technology news coverage
14:16 - #AI advancements and #innovation from ServiceNow
24:14 - AWS re:Inforce 2024 analysis
30:08 - Bloopers!
ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT / 12:00 p.m. ET every other Wednesday!
SurrealDB raised $20 million in venture capital to bring its total to $26 million. The bet: Multi-model databases will be critical to enterprises looking to consolidate multiple databases so developers can move faster.
The financing round was led by FirstMark and Georgian. With AI workloads and multiple data silos proliferating, SurrealDB is looking to address developer pain points. The multi-model database is also written entirely in the Rust programming language.
Holger Mueller, Constellation Research analyst, noted that SurrealDB is part of a band of next-generation databases that look to underpin modern applications.
Mueller said:
"The next generation applications of the 2020s are multi-model and challenging to create. At the same time developer capacity is restricted and top database developers command top dollar. Making it easier for enterprises to build these apps is what a multi-model database can offer--a single place where applications can tap documents, columnar, analytical and transactional data. Congrats to SurrealDB, which has a modern foundation being built on the language of the decade, Rust. Rust will give the offering extra heft with developers."
Key points about SurrealDB:
SurrealDB has advanced security and access permissions.
The database includes indexing for AI workflows, machine learning inference and model processing.
The company also announced the beta launch of Surreal Cloud.
SurrealDB also has a management application called Surrealist.
As for competition, SurrealDB plays in a market that includes MarkLogic, ArangoDB, OrientDB, Azure Cosmos DB, FoundationDB, Couchbase, and Apache Ignite among others.
Among those competitors, Couchbase is publicly traded. It has revenue of about $50 million a quarter and exited the first quarter with annual recurring revenue of $207.7 million. MarkLogic was acquired by Progress in 2023.
Hewlett Packard Enterprise and Nvidia teamed up to launch a set of private cloud offerings and integrations designed for generative AI workloads. Nvidia AI Computing by HPE will be available in the fall.
With the move, announced at HPE Discover 2024 in Las Vegas, HPE enters the AI computing race with a broad portfolio. Enterprises are building out on-premises infrastructure and buying AI-optimized servers in addition to using cloud computing.
The main attraction at HPE Discover 2024 is HPE Private Cloud AI, which deeply integrates Nvidia's accelerators, computing, networking and software with HPE AI storage, servers and Greenlake. HPE Private Cloud AI will also include an OpsRamp AI copilot that will help manage workloads and efficiency.
According to HPE, HPE Private Cloud AI will include a self-service cloud experience and four configurations to support workloads and use cases. HPE also said that Nvidia AI Computing by HPE offerings and services will also be offered by Deloitte, HCL Tech, Infosys, TCS and Wipro.
Antonio Neri, CEO of HPE, said during his keynote that enterprises need more turnkey options for AI workloads. He was joined by Nvidia CEO Jensen Huang. At Computex, Nvidia said it will move to an annual cycle of GPUs and accelerators along with a bunch of other AI-optimized hardware. Neri said that HPE has been at the leading edge of innovation and supercomputing and will leverage that expertise into AI. "Our innovation will lead to new breakthroughs from edge to cloud," said Neri. "Now it leads to AI and will catapult the enterprise of today and tomorrow."
"AI is hard and it is complicated. It is tempting to rush into AI, but innovation at any cost is dangerous," said Neri, who added that HPE's architecture will be more secure, feature guardrails and offer turnkey solutions. "We are proud of our supercomputing leadership. It's what positions us to lead in the generative AI future."
Constellation Research's take
Constellation Research analyst Holger Mueller said:
"HPE is working hard fighting for market share for on-premises AI computing. It's all about AI and in 2024 and that means partnering with Nvidia. Co-developing HPE Private Cloud AI as a turnkey and full stack is an attractive offering for CXOs, as it takes the integration burden off of their teams and lets them focus on what matters most for their enterprise, which is building AI powered next-gen apps."
Constellation Research analyst Andy Thurai said HPE can gain traction in generative AI systems due to integration.
Thurai said:
"What HPE offers is an equivalent of 'AI in a box.' It will offer the combination of hardware, software, network, storage, GPUs and anything else to run efficient AI solutions. For enterprises, it's efficient to already know the solutions, price points and optimization points. Today, most enterprises that I know are in an AI experimentation mode. Traction may not be that great initially."
HPE bets on go-to-market, simplicity, liquid cooling expertise
HPE's latest financial results topped estimates and Neri said enterprises are buying AI systems. HPE's plan is to differentiate with technologies like liquid cooling, one of three ways to cool systems. HPE also has traction with enterprise accounts and saw AI system revenue surge accordingly. Neri said the company's cooling expertise will be a differentiator as Nvidia Blackwell systems gain traction.
Nvidia's Huang agreed on the liquid cooling point. "Nobody has plumbed more liquid than Antonio," quipped Huang.
Here's what HPE Private Cloud AI includes:
Support for inference, fine-tuning and RAG workloads using proprietary data.
Controls for data privacy, security and governance.
A cloud experience that includes ITOps and AIOps tools powered by Greenlake and OpsRamp, which provides observability for the stack including Nvidia InfiniBand and Spectrum Ethernet switches.
OpsRamp integration with CrowdStrike APIs.
Flexible consumption models.
Nvidia AI Enterprise software including Nvidia NIM microservices.
HPE AI Essentials software including foundation models and a variety of services for data and model compliance.
Integration that includes Nvidia Spectrum-X Ethernet networking, HPE GreenLake for File Storage, and HPE ProLiant servers with support for Nvidia L40S, H100 NVL Tensor Core GPUs and the Nvidia GH200 NVL2 platform.
The tie-up with Nvidia and HPE went beyond the private cloud effort. HPE said it will support Nvidia's latest GPUs, CPUs and Superchip across its Cray high-performance computing portfolio as well as ProLiant servers. The support includes current Nvidia GPUs as well as support for the roadmap going forward including Blackwell, Rubin and Vera architectures.
HPE also said GreenLake for File Storage now has Nvidia DGX BasePod certification and OVX storage validations.
Other news at HPE Discover 2024:
HPE is adding HPE Virtualization tools throughout its private cloud offerings. HPE Virtualization includes open source kernel-based virtual machine (KVM) with HPE's cluster orchestration software. HPE Virtualization is in preview with a release in the second half.
HPE Private Cloud will have native integration with HPE Alletra Storage MP for software defined storage as well as OpsRamp and Zerto for cyber resiliency.
HPE and Danfoss said they will collaborate on modular data center designs that deploy heat capture systems for external reuse. HP Labs will also have a series of demos on AI sustainability.
Vice President and Principal Analyst
Constellation Research
About Andy Thurai
Andy Thurai is an accomplished IT executive, strategist, advisor, enterprise architect and evangelist with more than 25 years of experience in executive, technical, and architectural leadership positions at companies such as IBM, Intel, BMC, Nortel, and Oracle. Andy has written more than 100 articles on emerging technology topics for publications such as Forbes, The New Stack, AI World, VentureBeat, DevOps.com, GigaOm and Wired.
Andy’s fields of interest and expertise include AIOps, ITOps, Observability, Artificial Intelligence, Machine Learning, Cloud, Edge, and other enterprise software. His strength is selling technology to the CxO audience with a value proposition rather than the usual technology sales pitch.
Find more details and samples of Andy’s work on his…
Hear from Constellation analyst Andy Thurai on his top 5 takeaways💡 from IBM THINK 2024:
⚡ The future of #AI is open.
⚡ #GenerativeAI model transparency
⚡ Consulting advantage
⚡ Platform advantage
⚡ IBM Concert
OpenAI has built momentum by closing a big partnership with Apple, a channel deal with PwC and a series of enterprise wins. These events put an exclamation point on the enterprise traction that OpenAI is seeing directly and raise a big question: Will OpenAI eventually compete with its primary investor Microsoft?
Let's start with the big stuff. Apple's WWDC keynote outlined the company's generative AI strategy, which melds on-device processing, private cloud and a partnership with OpenAI. OpenAI will be a big part of iPhone queries that need to go to the cloud, even though Bloomberg reported there is no money being exchanged. In other words, OpenAI is like an NFL Super Bowl halftime performer: it's all about the exposure, marketing and distribution. Rest assured, Apple has its own large language models (LLMs) to ensure it is closest to the customer experience, but OpenAI is in the mix.
That Apple partnership, however, only highlighted other recent data points. Consider:
The company landed a big deal with PwC, which will adopt ChatGPT Enterprise throughout the consulting giant and resell OpenAI to clients. PwC will be the first reseller of ChatGPT Enterprise and its largest user of the product. Once a vendor lands one big consulting firm, others follow.
The big takeaway from these deals is that enterprises are going direct to OpenAI. Plenty of enterprises are exposed to OpenAI via Microsoft. The Microsoft-OpenAI partnership has made OpenAI the biggest ingredient brand since Intel.
It's clear that OpenAI doesn't intend to be just an ingredient brand. Yes, Microsoft is a huge OpenAI investor, but the latter has bigger ambitions and a looming IPO at some point. Naming Sarah Friar CFO and Kevin Weil chief product officer only drives home that OpenAI is building out its management team ahead of an IPO.
What's next? Okta CEO Todd McKinnon said on CNBC something that a few observers have been wondering. McKinnon noted that Microsoft is effectively outsourcing its best AI R&D to OpenAI. Microsoft could become more like a consultancy than an innovator. I'm not sure that's exactly fair given Microsoft has been rolling out its own models and more choice, but McKinnon's perception isn't that surprising.
After all, we at Constellation Research have been debating this topic. Microsoft went with OpenAI to be first to market and the bet went swimmingly. The long run may look different for both sides.
My bet: OpenAI will increasingly compete with Microsoft to some degree, but the software and cloud giant will benefit either way since it is an investor. Over time, OpenAI and Microsoft will more resemble frenemies. The partnership will be a great business school case study a few decades from now. The frenemy outcome looks even more likely when you consider that regulators are sniffing around OpenAI and Microsoft. Looking like competitors could suit both companies in the near term.
"For OpenAI to be taken seriously, Microsoft must let it partner with the entire ecosystem or face threats of anti-trust. The symbiotic relationship today was born out of Microsoft's desire to catch up and leap ahead in AI. But going forward, Microsoft is making investments to build its own capabilities. It would behoove Sam Altman to just partner with Microsoft. For AI to succeed, the approach Meta is taking will ultimately win - open source, open, and part of a larger ecosystem for data collectives."
Barry Briggs, analyst with Directions on Microsoft and former CTO of Microsoft's IT org, said:
"Tiny OpenAI has not one but three tigers by the tail, managing multibillion dollar relationships with not only Microsoft but Apple and Oracle as well. With growing demands from each of these mega players, OpenAI will, over time, be forced to navigate its own course among them â which may result in its âspecial relationshipâ with Microsoft becoming more distant. Microsoft in turn, hardly a wallflower in AI, has not only created its own language models but has already started partnering with other firms, Mistral being an example. Symbiotic? Maybe. Exclusive? Hardly."
As new technologies such as generative AI and robotics proliferate, the connection between humans will become even more important. That's a high-level takeaway from DisrupTV Episode 366, which took a few interesting turns.
Christopher Lochhead, thirteen-time No. 1 bestselling author and a "godfather" of category design, and Matt Beane, Author of The Skill Code: How to Save Human Ability in an Age of Intelligent Machines and UCSB professor, were the guests that connected the human dots between three seemingly disparate topics.
Apple
Lochhead said Apple pulled off a massive coup with its AI presentation this week largely because it took a new technology and made it human. "Tim Cook pulled something absolutely legendary here," said Lochhead. "What we had this week was a master class in category design and business strategy. In category design, one of the things we teach entrepreneurs and marketers is listen to the words. Listen to the words. Most people don't pay attention to the words. Apple this week did not announce a new product. Apple announced a new category design, a new category of AI called a personal intelligence system. And they branded it Apple Intelligence."
He added that most people forget that Apple was the category designer of personal computing. Apple put the focus where it should be for Apple: on the people.
"Strategically, it's beyond genius," he said. "AI is not a new category of technology. AI is every category of technology. It's not a product. It's an enabling technology. Apple is going to use AI as a personal system."
"The last piece of this is that you don't have a strategy unless you can put it on one page. You lead the future and that's exactly what Tim Cook did. It was the result of clarity of strategy and a focus on the categories where Apple wins."
Bill Walton, the teacher
Lochhead met Walton through complete serendipity. He was speaking at an Oracle event where Walton was the closing speaker.
"If you know anything about Bill and that magical mystical deadhead, he read everything. He read because he had that stutter. He read because his mother was a librarian. He spent his entire childhood reading, playing basketball and on his bike. He was an incredibly learned man."
Naturally, Walton read Lochhead's books, notably Play Bigger, and they became fast friends.
"After one of my dear friends was murdered, Bill called me three times a week for the six months after it happened. He was on the road doing calling games doing all this stuff at an incredibly busy time in his life and he always wanted to make sure how I was. A text message from Bill Walton or an email from Bill Walton would just go on and on about how he loved you, and 'thank you for my life.' He said, thank you for my life to everybody. Thank you for my life.
"He was a dichotomy because he could talk about his stories and his life forever, and you would think a person like that might be egotistical. Yet as he was doing it, he was connecting with you, empathizing with you and he wanted to know how you were. He deeply gave a shit about other people.
"He made you feel like the greatest person in the world, he was the greatest, he taught me and everybody how to be a fan.
"He's left me with many things, and one is teaching. He said to me at the time I was calling myself retired just like an uncle: "Chris, you can't use the word retired. You're not retired. You're just like John Wooden. You're a teacher. Go be a teacher."
"What there is to do? I think it is to live like Bill. Bill embraced different. He followed the things that he loved and the people that he loved, he allowed himself to fall in love quickly and to support other people."
Skills and humans in the AI and robotic age
That human connection is also going to be critical for skill building, argued Beane. In his book, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, Beane examined various technologies through a lens of skill building: that ongoing connection between an expert and a novice. "To have table stakes, you've got to have that knowledge to be able to play. But to build skill? There's 160,000 years’ worth of archaeological evidence that we build skill with elbow-to-elbow contact with somebody who knows more, trying to get some real work done," said Beane.
Beane used robotic surgery as an example of how new technologies are inventing new ways to work and build skill. What's lost is that human skill-building connection and mentorship.
"A novice by definition is slower and makes more mistake than an expert. You put a tool in an expert hand, that allows them to do more better by themselves. They're gonna love that deal. They take that deal. And it means they're just gonna need help from that novice less."
The trick to leveraging today's new technologies decades from now will be building productivity gains in a way where people also build their capabilities. Beane argued that roles requiring a physical presence will adapt better to new technologies and skill building than remote roles. “If you have authority, run a budget, can invest and are developing tools you can build skills, but you have great responsibility to bring novices along for the ride," said Beane.
"If you're going to make healthy progress toward skill and keep it healthy for other people, the challenge, complexity, and connection matters. Human connections that built Walton's story, bonds of trust and respect. We don't think of those as connected to your skill journey. They are essential. The challenge is that the world has become a bit of a padded playground in places, and that is dangerous to skill. You've got to struggle; you've got sweat and you have to be uncomfortable humans. Humans don't like being uncomfortable, but it is required."
GPU instances are taking a larger share of enterprise cloud spending and now account for 14% of compute costs, up from 10% a year ago, according to a Datadog report analyzing AWS customer usage.
The report highlights how enterprises are experimenting with training and inference for large language models. A report from Flexera also highlighted how enterprises were experimenting with AI workloads. Datadog said:
"GPU-based EC2 instance types generally cost more than instances that donât use GPUs. But the most widely used typeâthe G4dn, used by 74 percent of GPU adoptersâis also the least expensive. This suggests that many customers are experimenting with AI, applying the G4dn to their early efforts in adaptive AI, machine learning (ML) inference, and small-scale training. We expect that as these organizations expand their AI activities and move them into production, they will be spending a larger proportion of their cloud compute budget on GPU."
That increased spending is good for Nvidia as well as AWS customers using the cloud vendor's Trainium and Inferentia chips. The focus on GPU instances may also benefit AMD, which is rolling out its new accelerators.
Arm appears to be another downstream winner even as cloud workloads become GPU-heavy. Arm-based CPUs are popular on AWS as enterprises leverage Graviton2 processors. Arm-based instances account for only 18% of EC2 compute costs, but that's double from a year ago.
"Arm-based instances still account for only a minority of EC2 compute spending, but the increase weâve seen over the last year has been steady and sustained. This looks to us as if organizations are beginning to update their applications and take advantage of more efficient processors to slow the growth of their compute spend overall."
Overall, enterprises are mixing various compute instances and containers to optimize costs, but companies aren't rushing to adopt the latest technologies: Datadog found that 83% of organizations are still using previous-generation EC2 instance types.
Adobe reported a better-than-expected second quarter as the company expanded its customer base due to generative AI features.
The company reported second-quarter earnings of $3.49 a share on revenue of $5.31 billion, up 10% from a year ago. Non-GAAP earnings in the quarter were $4.48 a share.
Wall Street was expecting Adobe to deliver second quarter earnings of $4.39 a share on revenue of $5.29 billion.
Adobe CEO Shantanu Narayen said the company's "highly differentiated approach to AI and innovative product delivery" is delivering value to current customers and attracting new ones.
The company's Digital Media revenue was $3.91 billion, up 11% from a year ago. Most of that was Creative Cloud revenue, but Document Cloud delivered 19% revenue growth compared to a year ago.
Digital Experience revenue was $1.33 billion, up 9% from a year ago.
In prepared remarks, Narayen said:
"In Creative Cloud, we have invested in training our Firefly family of creative generative AI models with a proprietary data set and delivering AI functionality within our flagship products including Photoshop, Illustrator, Lightroom and Premiere."
He added that Firefly has been used to generate more than nine billion images.
As for Document Cloud, Narayen said Acrobat AI Assistant is now available as an add-on subscription for Reader and Acrobat enterprise customers.
Adobe raised its guidance, projecting third quarter revenue of $5.33 billion to $5.38 billion with non-GAAP earnings of $4.50 a share to $4.55 a share. For fiscal 2024, Adobe projected revenue of $21.4 billion to $21.5 billion with non-GAAP earnings of $18 a share to $18.20 a share.
Other key points:
Adobe is extending its applications to integrate third-party multi-modal LLMs.
The company is seeing early success monetizing AI across its Digital Media and Digital Experience platforms.
Adobe is seeing strong usage and demand for AI across all customer segments.
The company said it was seeing new demand for Creative Cloud apps powered by new releases and digital channels.
Epicor has acquired two companies in recent months as it rounds out its strategy to infuse artificial intelligence across its ERP platform.
The company, which recently passed the $1 billion mark in annual recurring revenue, on Wednesday announced the acquisition of KYKLO, which provides product information management and lead-gen tools for manufacturers and distributors.
Epicor CEO Steve Murphy said the purchase is part of the company's AI-driven cognitive ERP vision that aims to turn systems of record into an insights engine.
KYKLO will complement Epicor's Commerce software with product information, real-time catalogs and content syndication.
The KYKLO acquisition follows last month's purchase of Smart Software, which provides cloud inventory planning and optimization applications. Smart Software was already an Epicor independent software vendor partner and is integrated into multiple Epicor ERP modules.
At Epicor's Insights 2024 user conference last month, the company launched its Epicor Grow portfolio, which includes AI and business intelligence tools aimed at the supply chain.
Epicor Grow includes generative AI, machine learning, analytics and natural language processing for more than 200 industry use cases.
The Epicor Grow portfolio includes Epicor Prism, a genAI service across Epicor Industry ERP Cloud, and Epicor Grow AI, which surfaces insights across industries. Arturo Buzzalino, VP of Products and Innovation at Epicor, said Prism is the company's first LLM pipeline.
Epicor also launched Epicor Grow Inventory Forecasting, which leverages forecasting engines from Smart Software, Epicor FP&A and Epicor Grow BI.
The company also launched Epicor Grow Data Platform to manage enterprise data, create pipelines and leverage business intelligence.