
AMD sees AI inference, training workloads increasing in enterprise

AMD CFO Jean Hu said cloud providers and enterprises are starting to look toward total cost of ownership when it comes to artificial intelligence inference and training workloads.

Speaking at J.P. Morgan's annual technology conference, Hu said data center demand for the company's GPUs, accelerators and server processors was strong. She added that demand in the second half will be stronger than in the first. One big reason for that surge may be the news coming out of Microsoft Build 2024.

At Build, Microsoft announced the general availability of the ND MI300X VM series, which combines eight AMD Instinct MI300X accelerators per virtual machine. AMD is looking to challenge Nvidia's GPU franchise. In a briefing with analysts, Scott Guthrie, Executive Vice President of Cloud and AI at Microsoft, said AMD's accelerators were the most cost-effective GPUs available based on what the company is seeing with its Azure AI Service.


Hu said total cost of ownership is becoming critical as companies scale generative AI use cases.

Here are the key takeaways from Hu's talk.

The Microsoft Azure deal. "On the MI300 side, MI300X and ROCm software together actually power Microsoft's virtual machines, both for the internal workload, ChatGPT, and for external, third-party workloads," said Hu. "It's the best price performance to power ChatGPT for inference. So that's really a proof point, not only for MI300X from hardware, how competitive we are, but also for ROCm software, the maturity, how we have worked with our customers to come up with the best price performance."

She added that software investments have also been critical for TCO as AMD leverages open-source frameworks.

GPU workloads will also pull demand for CPUs. Cloud providers have more than 900 public AMD instances driving adoption. Hu said that enterprises are also adopting AMD server processors because they need to make room for GPUs. Hu said:

"All the CIOs in enterprise are facing a couple of challenges. The first is more workloads. The data is more, application is more, so they do need to have a more general compute. At the same time, they need to start think about how they can accommodate AI adoption in enterprise. They are facing the challenges of running out of power and space. If you look at our Gen 4 family of processors, we literally can provide the same compute with 45% less servers."

AMD's Gen 5 server processors, Turin, will also launch with revenue ramping in 2025.

MI300 demand. "We have more than 100 customer engagements ongoing right now," said Hu. "The customer list includes, of course, Microsoft, Meta, Oracle and those hyperscale customers, but we also have a broad set of enterprise customers we are working with." Dell Technologies' AI factory roadmap has two tracks: one solely Nvidia and one that will include AMD infrastructure as well as others.

Roadmaps. Hu said AMD has talked with customers about GPU roadmaps and is collecting feedback. She added that AMD tends to be conservative about announcing roadmaps, but you can expect it to be competitive. "We will have a preview of our roadmap in the coming weeks," she said.

On-premise AI workloads. Hu noted that AMD is working with a bevy of hyperscalers, but enterprises are a critical customer base.

"When we talk to our enterprise customers, they do start to think about that question. Do I do it on premise? Do I send it to cloud? That is a strategic approach they have to think through. We are uniquely positioned because on the server side, we're working with our customers. We're helping them with how they deploy servers. It has become significant leverage for us. AI PC, the server side and the GPU side, that's a part of our go-to-market model right now."

Hu said inferencing workloads are strong and training is scaling. "Both training and inference are important to us," said Hu.


Sustainability 50 interview: Tech Mahindra's Sandeep Chandna on sustainability, AI and ROI

Tech Mahindra Chief Sustainability Officer Sandeep Chandna said sustainability has become a CFO issue as the returns on investment are obvious. "Today profitability is linked to the ESG strategy and that's been a shift over the last three to four years," said Chandna.

Chandna is a member of the 2024 class of Constellation Research's Sustainability 50. Here are some takeaways from my conversation with Chandna.

The sustainability journey. Chandna has been a Chief Sustainability Officer for more than a decade and perhaps the biggest change has been that people are more educated about the role. "When I started, everybody had their own definitions of sustainability. When I got into details, somebody said you'll have to plant trees. I said that was a good thing to do. Another said donate to charity work. Once again, that's good for society. We had to build out what sustainability meant with a purpose and a vision," explained Chandna. "We charted out environmental, social and governance structures and processes that were clear. It has to be top driven but grassroots at the same time. Today every part of our strategy has sustainability in it with stakeholders, data and champions."

Small steps. Chandna said awareness of sustainability and small actions make a big difference. Tech Mahindra has Green Marshals, who drive environmental awareness. Simple things like PCs being turned off on Friday evening in an organization of 150,000 people save a lot of emissions.

Sustainability and CFOs align. "Previously, we used to go to a CFO saying what we wanted to do. The world has changed now. The CFOs are now coming to the CSO saying, 'When will you implement the net zero strategy?' or 'What's the impact of renewable energy on the bottom line?' Today profitability is linked to the ESG strategy and that's been a shift over the last three to four years." He added that ESG boosts engagement among employees and reduces attrition.


Data matters. Chandna said sustainability data is the hardest to put in place. Just defining which supply chain data has the biggest impact on sustainability can be challenging. Once data like carbon pricing is in place, along with water and power usage, the argument for sustainability efforts is much easier to make. Tech Mahindra now has the data pool in place, and sustainability has its own budget line. He added that tracking data across an entire procurement process through multiple partners will remain challenging, but Tech Mahindra and third-party data can get you far.

Scope 3 emissions data. Scope 3 emissions are carbon emissions that are indirectly generated by a business outside of its physical footprint. Tracking that data is a big challenge. Chandna said:

"The world is a bit confused about Scope 3 and has a lot of simple questions. If you lease a building where does that carbon data reside? We looked at our supply chain and prepared a document saying how we would build the capabilities of our suppliers first, best practices we follow and business impact. We do a workshop for all suppliers every year and reward the ones that have implemented best practices and hit goals on their scope 1, 2, 3 goals."

Chandna added that incentivizing electric vehicles for employees would help, as would offices closer to homes. Employee commuting can have a big impact, and business travel can also move the needle.

Generative AI, a sustainability blessing and curse. Today, generative AI workloads are sucking up power and stretching resources. Chandna, however, is hopeful that generative AI can analyze and optimize electricity generation and distribution, optimize trade and come up with more sustainable material design. Transportation optimization via generative AI can also improve sustainability. Those options have more long-term impacts, noted Chandna. "Today the data centers are consuming a lot of energy," he said.

In the short-term, renewable energy for data centers makes the most sense for AI workloads, but over time AI can optimize a lot to improve carbon emissions. 



Nutanix product additions, partnerships designed to capitalize on VMware customer angst

Nutanix didn't mention VMware directly, of course, but a few veiled references to its rival indicate that it smells opportunity. The Nutanix news lands a day after Rimini Street said it would offer third-party support to VMware customers as they plan next steps.

The timeline since Broadcom closed its VMware purchase features a good bit of turmoil.

Here's a look at what Nutanix announced at its .Next Conference in Barcelona.

  1. Nutanix added new deployment options for its AHV hypervisor that preserve existing server investments and give customers flexibility. Nutanix's release made a veiled reference to wooing VMware migrations after price increases, saying new capabilities in AHV will smooth migrations by repurposing the most popular vSAN ReadyNode configurations. Nutanix also added features for cybersecurity resilience, disaster recovery and virtual machine clusters.
  2. Nutanix said it is working with Cisco to certify Cisco UCS blade servers so enterprises can redeploy existing servers to run the Nutanix AHV hypervisor as compute-only or storage-only nodes.
  3. Dell Technologies and Nutanix outlined a series of deployment options, partnerships and platform enhancements. The gist of the news: VMware, we have you surrounded. Nutanix will launch hyperconverged appliances combining Nutanix Cloud Platform with a broad range of Dell PowerEdge servers. The companies also said Nutanix Cloud Platform for Dell PowerFlex will combine Nutanix's platform and AHV hypervisor with Dell PowerFlex storage. Dell and Nutanix will collaborate on engineering and go-to-market efforts as well. Keep in mind that Dell said it was exiting its VMware hyperconverged partnership.
  4. Nutanix added new integrations for Nutanix GPT-in-a-Box, including Nvidia NIM inference microservices and Hugging Face large language models (LLMs). The company also launched an AI partner program that will enable companies to build generative AI apps on top of Nutanix Cloud Platform.
  5. Red Hat and Nutanix said they will collaborate to use Red Hat Enterprise Linux as an element of Nutanix Cloud Platform. Nutanix AOS, which is part traditional operating system and part additional packages, will build on Red Hat Enterprise Linux for operating system capabilities.
  6. Nutanix launched the Nutanix Kubernetes Platform (NKP) to simplify the management of container-based applications. Enterprises can manage clusters running on Nutanix and third-party infrastructure from one dashboard. NKP integrates with data services, simplifies management with automation, offers multi-cluster fleet management and is cloud native.

Microsoft Build 2024: Azure gets AMD MI300X Accelerators, Cobalt preview, OpenAI GPT-4o

Microsoft announced a bevy of additions to Azure including AMD Instinct MI300X instances, Cobalt 100 instances in preview and the latest OpenAI model, GPT-4o, in Azure OpenAI Service.

The announcements at Microsoft's Build 2024 conference land as both Amazon Web Services and Google Cloud are busy launching custom silicon and access to multiple model choices.

All of the hyperscalers are looking to supply supercomputers that offer a diversity of custom silicon, processors from AMD and Nvidia, and networking and architecture choices for AI workloads. Constellation Research analyst Holger Mueller said:

"It is clear that the path to AI is custom algorithms on custom silicon and Microsoft is on the jouney, with both the Cobalt and the AMD Mi300 preview. A key aspect of faster CPUs is faster networking, but Microsoft is quiet on that. Amongst all the cloud vendors, Microsoft has its traditional partner connections - so the AMD chip uptake comes as no surprise. When viable we will likely see Intel in Azure as well."

Here's a look at the three headliners for Azure at Build along with other key additions.

  • Microsoft announced the general availability of the ND MI300X VM series, which combines eight AMD Instinct MI300X accelerators per virtual machine. AMD is looking to challenge Nvidia's GPU franchise. In a briefing with analysts, Scott Guthrie, Executive Vice President of Cloud and AI at Microsoft, said AMD's accelerators were the most cost-effective GPUs available based on what the company is seeing with its Azure AI Service.
  • Azure virtual machines built to run on Cobalt 100 processors are available in preview. Microsoft announced Cobalt in November and claimed that it could deliver 40% better performance than Azure's previous Arm-based VMs. Guthrie also noted that Cobalt performs better than the current version of AWS' Trainium processor. "It's going to enable even better infrastructure and better cost advantage and performance on Azure," said Guthrie, who was referencing Cobalt on Azure. Guthrie added that Snowflake was developing on Cobalt instances. 

  • OpenAI's GPT-4o will be available on Azure. Microsoft said OpenAI's latest flagship model will be available in preview in Azure AI Studio and as an API; a minimal usage sketch follows this list. OpenAI just launched GPT-4o last week, a day ahead of Google's latest Gemini models.
  • To round out the Azure OpenAI Service upgrades, Microsoft enhanced fine-tuning of GPT-4, added Assistants API to ease creation of agents and launched GPT-4 Turbo with vision capabilities. Microsoft also said it is adding its multimodal Phi-3 family of small language models to Microsoft Azure AI as a model as a service offering.
  • Azure will also get new services for managing instances. Microsoft launched Azure Compute Fleet, a service that provisions Azure compute capacity across virtual machine types, availability zones and pricing models to mix and match performance and cost. The company also launched Microsoft Copilot in Azure, an assistant for managing cloud and edge operations.
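
For developers, access boils down to the standard Azure OpenAI SDK once GPT-4o is deployed to a resource. Here's a minimal sketch, assuming the openai Python package (v1+) and an Azure OpenAI resource with a GPT-4o deployment; the endpoint, API version and deployment name "gpt-4o" are illustrative placeholders, not values from Microsoft's announcement:

```python
import os

from openai import AzureOpenAI  # pip install "openai>=1.0"

# The endpoint and API version below are illustrative placeholders.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# "gpt-4o" is an assumed deployment name, chosen when the model is deployed.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize the Azure news from Build 2024."}],
)
print(response.choices[0].message.content)
```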

Nadella: We're a GenAI platform company

Speaking at the Build 2024 keynote, Microsoft CEO Satya Nadella said the age of generative AI is just starting.

"We now have these things that are scaling every six months or doubling every six months. You know, what we have though, with the effect of the scaling laws is a new natural user interface that's multimodal that means support stack speech, images, video as input and output. We have memory that retains important context recalls both our personal knowledge and data across our apps and devices. We have new reasoning and planning capabilities that helps us understand very complex context and complete complex tasks while reducing the cognitive load on us."
 
Nadella said Microsoft has always been a platform company and Azure is "the most complete scalable AI infrastructure that meets your needs in this AI era."

"With building Azure as the world's computer, we have the most comprehensive global infrastructure with more than 60 plus data center regions," said Nadella, who noted that the company is optimizing power and efficiency across the stack. 

Nadella said Azure will be built out with Nvidia, AMD and its own silicon in clusters. He said Maia and Cobalt, Microsoft's custom processors, are already delivering customer workloads and responding to prompts.

Mueller said:

"Microsoft needs to wean itself and OpenAI of Nvidia machines that are expansive and the hardest commodity to purchase in IT. Continuing the Cobalt strategy makes sense, adding AMD as well, but it will not help with the existing workloads. The question is – will Microsoft rebuild OpenAI models? – or support two different AI hardware platfoms and choose what to run where. Time will tell."


IBM open sources Granite models, integrates watsonx.governance with AWS SageMaker

IBM has open sourced its portfolio of Granite large language models under Apache 2.0 licenses on Hugging Face and GitHub, outlined a bevy of AI ecosystem partnerships and launched a series of assistants and watsonx powered tools. The upshot is that IBM is looking to do for foundational models what open source did for software development.

The announcements, made at IBM's Think 2024 conference, land as the company has made a bevy of partnerships that put it in the middle of the artificial intelligence mix either as a technology provider or services provider. For instance, IBM and Palo Alto Networks outlined a broad partnership that combines Palo Alto Networks' security platform with IBM's models and consulting. IBM Consulting also partnered with SAP and ServiceNow on generative AI use cases and building copilots.

IBM also recently named Mohamad Ali Senior Vice President of IBM Consulting. Part of his remit is melding IBM's consulting and AI assets into repeatable enterprise services.

All these moves add up to IBM positioning itself as a partner to enterprises to scale AI across hybrid cloud and platforms. 

The backdrop of IBM's announcements, according to IBM CEO Arvind Krishna, is scaling AI. During his keynote he summed up the state of generative AI. 

"There's a lot of experimentation that's going on. That's important, but it's insufficient. If you watch every one of the previous technologies, the history has shown you kind of move from innovating a lot to deploying a lot. As you deploy it is where you will get the benefits but in order to deploy, you also need to start moving from experimenting to working at scale. Think about a small project, then you got to think about it in an enterprise scale in a systemic way. How do you begin to expand it? How do you begin to make it have impact across an enterprise or across a government? And that is what is really going to make it come alive."

Here's a look at what IBM announced at Think 2024:

IBM open-sourced its Granite models and made them available under Apache 2.0 licenses on Hugging Face and GitHub. The code models range from 3 billion to 34 billion parameters and are suitable for code generation, bug fixing, explaining and documenting code and maintaining repositories.
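
Because the models are on Hugging Face under Apache 2.0, trying one locally takes a few lines of standard transformers code. A minimal sketch, assuming the "ibm-granite/granite-3b-code-base" checkpoint ID for the smallest code model and enough GPU or CPU memory; the ID and prompt are illustrative:

```python
# pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face ID for the smallest Granite code model.
model_id = "ibm-granite/granite-3b-code-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Code models are typically prompted with partial code to complete.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```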

Krishna said that IBM's move to open source Granite and highlight smaller models has multiple benefits. "We also want to make sure that you can leverage smaller, purpose-built LLMs. Can a small model do as much, maybe using 1% of the energy and 1% of the total cost?" he said. "We believe in having a smaller model that is tuned for a purpose. Rather than one model that does all the things, we can actually make models that are fit for purpose."

IBM launched InstructLab, which aims to make smaller open models competitive with LLMs trained at scale. The general idea is that open-source developers can contribute skills and knowledge to any LLM, iterate and merge skills. Think of InstructLab as bringing the open-source contribution model to AI models.

Red Hat Enterprise Linux (RHEL) AI will include the open-source Granite models for deployment. RHEL AI will also feature a foundation model runtime inferencing engine to develop, test and deploy models. RHEL AI will integrate into Red Hat OpenShift AI, a hybrid MLOps platform used by watsonx.ai. Granite LLMs and code models will be supported and indemnified by Red Hat and IBM. Red Hat will also support distribution of InstructLab as part of RHEL AI.

IBM launched IBM Concert, a watsonx tool that provides one interface for visibility across business applications, clouds, networks and assets. The company also launched three AI assistants built on Granite models, including watsonx Orchestrate Assistant Builder and watsonx Code Assistant for Enterprise Java Applications. For good measure, IBM launched watsonx Assistant for Z for mainframe data and knowledge transfer.

Via a partnership with AWS, IBM is integrating watsonx.governance with Amazon SageMaker for AI governance of models. Enterprises will be able to govern, monitor and manage models across platforms.

IBM will indemnify Llama 3 models, add Mistral Large to watsonx and bring Granite models to Salesforce and SAP. IBM watsonx.ai is also now certified to run on Microsoft Azure.

Watsonx is also becoming an engine for IT automation. IBM is building out its generative AI observability tools with Instana, Turbonomic, network automation tools and Apptio's Cloudability Premium. These various parts were mostly acquired.

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"IBM is pushing innovation across its portfolio and of course it is all about AI. The most impactful is probably the open sourcing the IBM Granite models. With all the coding experience and exposure that IBM has, these are some of the best coding LLMs out there, and you can see from the partner momentum, that they are popular. The release of IBM Concert is going to be a major step forward for IBM customers running IBM systems and 3rd party systems. Of note is also the release of Qiskit, the most popular quantum software platform, that IBM has significantly increased in both capabilities and robustness. These three stick out for me as the most long term impact for enterprises."


Zoom Q1 earnings strong, sees growth in employee experience

Zoom said its Zoom AI Companion is gaining traction and the company is betting that Zoom Workplace can drive demand for its portfolio.

Zoom reported first quarter earnings of $216.3 million, or 69 cents a share, on revenue of $1.14 billion, up 3.2% from a year ago. Non-GAAP earnings in the quarter were $1.35 a share. Wall Street was expecting Zoom to report first quarter non-GAAP earnings of $1.19 a share on revenue of $1.13 billion.

Last week, Zoom said its Workvivo employee engagement tool will be the sole migration partner for Meta's Workplace offering, which is being shut down. 

By the numbers:

  • Zoom had 3,883 customers contributing more than $100,000 in trailing 12-month revenue.
  • The company had 191,000 enterprise customers, but Zoom noted that it moved 26,800 enterprise customers to an online sales channel.
  • Churn in the quarter was 3.2%.
  • Zoom Phone has 5 customers with 100,000 seats.
  • Zoom AI Companion has more than 700,000 customer accounts enabled.
  • 90 Contact Center accounts had more than $100,000 ARR. 
  • The company ended the quarter with $7.4 billion in cash, cash equivalents and marketable securities.

Speaking on an earnings call, Zoom CEO Eric Yuan said AI enhancements were boosting demand for Zoom Workplace, Zoom Phone, Team Chat, Events and Whiteboard. Yuan added that Workplace integration with Workvivo will also enhance employee engagement. Yuan said the deal with Meta to migrate Workplace customers to Workvivo will also be positive.

"Our success in employee experience represents an important beachhead for us in upselling customers on the full suite," said Yuan.

As for the outlook, Zoom projected second quarter revenue of $1.145 billion to $1.15 billion with non-GAAP earnings of $1.20 a share to $1.21 a share. Sales were in line with estimates, but the earnings outlook missed Wall Street's target of $1.24 a share. For fiscal 2025, Zoom projected revenue of $4.61 billion to $4.62 billion with non-GAAP earnings of $4.99 a share to $5.02 a share. Both projections were ahead of expectations.

"We still believe that Q2 will be the low point from a year-over-year growth perspective and to improve from there," said Yuan. 


Palo Alto Networks Q3 solid, says customers into platform play

Palo Alto Networks reported better-than-expected third quarter results as the company said customers had an "enthusiastic response to platformization."

The cybersecurity company reported third quarter earnings of $278.8 million, or 79 cents a share, on revenue of $2 billion, up 15% from a year ago. Non-GAAP earnings were $1.32 a share.

Analysts expected Palo Alto Networks to report fiscal third quarter earnings of $1.25 a share on revenue of $1.97 billion.

Palo Alto Networks' third quarter landed three months after the company outlined a new go-to-market plan and said customers were more discerning with their budgets. Palo Alto Networks recently forged a partnership with IBM Consulting and agreed to purchase QRadar's SaaS assets from Big Blue. As security vendors race to consolidate platforms and customer wallet share, IBM and Palo Alto Networks moved to forge an alliance that on paper looks like a win-win.

Nikesh Arora, CEO of Palo Alto Networks, said: "We are pleased with the enthusiastic response to platformization from our customers in Q3. Platformization is a long-term strategy that addresses the increasing sophistication and volume of threats, and the need for AI-infused security outcomes."

With IBM as a partner, Palo Alto Networks is looking to fend off multiple players including CrowdStrike. IBM can focus on its core strengths and leverage security services and AI models. CrowdStrike recently expanded its partnership with AWS.

Constellation Research analyst Chirag Mehta said:

"While platformization helps vendors consolidate data on a single platform to deliver an integrated experience, it limits choices for customers. Cybersecurity is a team sport; increased telemetry from other systems leads to better security posture. In the age of automation and AI, access to higher-quality, diverse signals leads to better outcomes. A best-of-suites approach works well when business processes are well-defined and the domain is mature. That might be the case for ERP, but most certainly not for cybersecurity."

Mehta, when interviewed by the Wall Street Journal for a story on Palo Alto Networks' recent acquisition of QRadar, cautioned about vendor lock-in. "As you go down that path and invest further, it gets harder and harder to get out. Vendors can essentially increase their prices without customers having a choice, because the switching cost is very high," he said.


For the fourth quarter, Palo Alto Networks projected revenue between $2.15 billion and $2.17 billion, up 10% to 11%, with non-GAAP earnings of $1.40 a share to $1.42 a share. For fiscal 2024, Palo Alto Networks projected revenue of $7.99 billion to $8.01 billion, up 16%, with non-GAAP earnings of $5.56 a share to $5.58 a share.

Speaking on an earnings conference call, Arora said:

"Despite the concerns around our platformization approach after our last quarter, the customer feedback has been nothing but encouraging. We have initiated way more conversations about platformization than we expected. If meetings were a measure of outcome, they've gone up 30%. And a majority of them have been centered on platform opportunities. In short, demand is robust. And my expectation is that we will continue to see it be that way for the next many quarters."

Arora also outlined a framework for platformization and the business model. He explained that Palo Alto Networks had an approach that revolved around landing new customers, but cross-platform adoption went slowly. "We realized that for fully platformed customers, while they saw better security outcomes, our ARR profile is also very different," said Arora. "While our average next-generation security ARR for landing customers ranges from $200,000 to $800,000, we discovered that our ARR for fully platformed customers ranges from $2 million to $14 million, depending on how many platforms the customers are using."

Arora added:

"Our rollout of platformization has spurred a long standing debate within the cybersecurity industry about whether customers desire a platform or best of breed cybersecurity. We've proven it is possible to deliver best of breed on a platform. This is why we have invested in building leading products, while also delivering the benefits of integration across multiple platforms."


Rimini Street to offer support for VMware customers

Rimini Street said it will offer support services for VMware products so customers can continue to run their perpetually licensed software.

The move comes as VMware customers are pondering next moves following the company's acquisition by Broadcom. Broadcom has retooled VMware's business model in a pivot to subscriptions and altered customer bundles. The VMware timeline since Broadcom closed the purchase has been eventful to say the least.

Broadcom has had a steady cadence of blogs that appear to be aimed at allaying VMware customer concerns. To date, Nutanix has been the biggest beneficiary of VMware customer angst. It's unclear whether Broadcom's blog barrage is hitting the mark, but the missives collectively acknowledge that VMware customers may be a smidge disgruntled.

Rimini Street said it will launch Rimini Support, Rimini Protect and Rimini Consult for VMware. The support services include priority support response by an engineer in 10 minutes or less. According to Rimini Street, the support fees are similar to the current fees paid by perpetual VMware licensees.

Constellation Research CEO Ray Wang said:

"VMWare customers have felt pressured to a forced move onto the cloud at exorbitant markups.  This new Rimini Street offering gives VMWare customers choice in remaining on-premises without a forced march to the cloud and exorbitant markups in maintenance.  This is a case where the government’s antitrust failed customers."

Seth Ravin, CEO of Rimini Street, said VMware customers have been looking for alternatives to extend licensed software and buy time.

Rimini Street offers support services for SAP and Oracle as well as AWS and Salesforce. The VMware situation fits into Rimini Street's core competencies. It has offered support for SAP and Oracle customers who didn't want to upgrade and/or pay support and maintenance fee increases.

In a statement, Rimini Street noted that VMware customers need time to analyze Broadcom's terms and evaluate virtualization platforms. The company said:

"VMware perpetual licensees need time to analyze the software vendor’s proposed changes and determine if they are going to eventually accept their new licensing model and fees, attempt to negotiate new licensing terms and fees or evaluate, select and implement a new virtualization platform. In any of these cases, selecting Rimini Support for VMware buys an organization the time - up to years."

Here's the breakdown of what Rimini Street is launching:

  • Rimini Support for VMware, a 24/7/365 support service with dedicated services for configuration, performance, installation, upgrades and customization. Customers get a primary engineer with an average of 15 years' experience and a guaranteed 10-minute response time.
  • Rimini Protect for VMware with security assessments, vulnerability reports and zero-day reporting.
  • Rimini Consult for VMware, a set of consulting services for road mapping, license advisory, cloud migrations, and other issues.

In its first quarter, Rimini Street reported revenue of $106.7 million with sales evenly split between US and international customers. The company had 3,040 active clients as of March 31.

On Rimini Street's first quarter earnings conference call, Ravin said customers are looking to extend software for anywhere from 3 to 10 years as they think through next platforms. Rimini Street used to cater to CIOs, but increasingly CFOs and procurement executives are interested in extending support via a third party, said Ravin.

He also noted that VMware customers were reaching out to Rimini Street.

Ravin said earlier this month:

"One of the largest apps that we’ve had so far is VMware because of the moves that happened over there with Broadcom’s acquisition. A lot of customers are looking for a solution to their VMware challenge now. I think this is an example where we can come in, in a marketplace where there is a sudden demand or a sudden surge and bring our expertise to the table and a solution potentially."


Dell Technologies goes all-in on AI factories

Dell Technologies is going all-in on AI factories and enabling generative AI workloads powered by a wide range of third parties as well as Nvidia-flavored efforts.

What Dell is ultimately trying to do is drive enterprise adoption based on data needs, services, an open ecosystem, infrastructure innovation and use cases. With validated and optimized designs, Dell is looking to accelerate AI adoption by simplifying deployments and improving returns on investment. Dell has seen strong demand for AI-optimized servers and has forged strong partnerships with Nvidia, Hugging Face and a bevy of AI players.

Speaking at Dell Technologies World, Chairman Michael Dell said the company is reinventing itself again, as it has done through multiple technology pivots.

Dell said the generative AI era will be a "sprint and a marathon." "The uptake and demand for AI is unprecedented," he said. "It will touch every industry and every organization, but each industry and each organization will have its own specific requirements and needs."

He added:

"If you have engineers and a call center and you're not using an LLM based Assistant, you're already behind. But large language models don't create value on the factory floor. For those you need vision models on diverse models, and real world models. Other models will tackle other problems. We're still in the early stages of model training.  I'm blown away by the progress of open and closed large and small models by the exploding set of enterprise use cases. For inference and for assistance. We're seeing small models that are incredibly capable, that are faster and inferencing and more efficient and lower cost. Eventually the application of AI will be as broad as the internet."

At Nvidia GTC, Dell Technologies was highlighted as a key partner to build out AI factories. At Dell Technologies World, the companies delivered. Dell is also offering multiple building blocks and packaged systems spanning use cases, products and consumption models. Sam Grocott, Senior Vice President, Product Marketing at Dell Technologies, said the company is building out AI factory offerings that cover the "broadest ecosystems across applications, infrastructure and partners" as well as the "Nvidia-flavored easy button."

Dell Technologies World news boils down to the broad AI factory strategy and then the building blocks and packaged systems underneath.

The broad AI factory play will include a bevy of components including AMD and Intel AI accelerators and processors. Grocott said:

"We've got strong partnerships with Intel, AMD and Nvidia. Obviously, we're going to continue to harvest that flexibility and choice for our customers depending on which partnership or solution they want to lean into. We've got them all lined up across, clients, data center and all the way up to the cloud."

"We're going to be choosing best of breed partners to pull in and implement our IP along with our partner IP to simplify, package up and then make it consumable."

But given Nvidia's lead, Dell's AI factory stack built on Nvidia GPUs and software is fully baked. Nvidia CEO Jensen Huang said Dell Technologies' AI factory effort will be the largest go-to-market partnership the GPU maker has. "We have to go modernize a trillion dollars of the world's data centers," said Huang.

The Dell PowerEdge XE9680L is designed for Nvidia HGX B200, supports eight GPUs in a dense 4U form factor, offers high network throughput with 12 PCIe slots and features liquid cooling. The system also supports 400G Ethernet as well as Nvidia's InfiniBand, with up to 72 GPUs per rack.

According to Dell, the Nvidia-powered AI factory will also make use of Nvidia AI Enterprise and frameworks outlined at Nvidia GTC. Dell will also wrap services around Nvidia deployments of software and infrastructure.

Dell NativeEdge automates delivery of Nvidia frameworks to edge devices.

And here's a look at the building blocks.

Infrastructure

  • Dell AI PCs. Why are AI PCs included in the AI factory mix? Dell officials see edge devices as critical parts of AI workloads. Dell announced five new laptops featuring Qualcomm's Snapdragon X Series processors. The Latitude 7455 and 5455, Inspiron 14 Plus and 14, and XPS 13 will be available with Qualcomm chips, a dedicated AI key and the ability to run models of up to 13 billion parameters on the PC.
  • Dell PowerScale F910, an all-flash storage system with 20x the performance of the PowerScale F900 and 6x the performance of Azure NetApp Files. PowerScale F910 is the first Ethernet storage for Nvidia DGX SuperPOD and is available on premises or in the cloud.
  • Dell Z9864F-ON networking for Ethernet fabrics for AI. The system features Enterprise SONiC 4.4 with SmartFabric Manager and Broadcom's Thor 2 network interface card. The system can minimize congestion and supports 8,000 GPU nodes.

Ecosystem

  • Dell and Hugging Face have created an Enterprise Hub, an authenticated portal with optimized open-source models for on-premises deployments. The hub includes models optimized for Dell infrastructure, dedicated containers, scripts and technical documents and software dependencies.
  • Dell PowerEdge XE9680, XE8640 and R760xa will be optimized for Meta's Llama 3 for on-premises deployments.
  • Dell's AI infrastructure will integrate Microsoft Azure AI services with Apex Cloud Platform for Microsoft Azure.

Services

  • Dell will launch a set of services covering Microsoft Copilot across Windows, sales, security and GitHub. The services are designed to plan, design, implement, test and operate at scale.
  • The company added Accelerator Services for Dell Enterprise Hub for generative AI prototyping and deploying with Hugging Face models.

Copilot, genAI agent implementations are about to get complicated

Copilot and generative AI assistant implementations are getting so complicated that the consultants are marching in.

As tech vendors roll out various copilots and assistants across applications, sprawl is going to become a real issue in a hurry. Enterprise software vendors all have AI helpers across their suites and platforms, often with a per-user surcharge.

Implementing these layers of copilots is going to require consultants as enterprises inevitably customize.

It all sounds a bit ERP to me.

These consultants will come in handy. How do you decide between horizontal copilots and agents vs. use case- and industry-focused ones? The copilot game will be all about architecture, ModelOps and the ability to swap large language models as needed.

However, I raise an eyebrow when I see consulting firms swarm to the copilot pot of gold. That swarm of consultants means enterprises are struggling to execute the generative AI dream, and implementation costs are going to go higher. The other axiom of enterprise technology: if there's a way to make projects overly complicated, we will.

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly and is brought to you by Hitachi Vantara.

Here are a few reasons why copilot implementations are going to be a challenge:

Multiple models. Yes, you want model choice. Yes, you want to swap models as they mature and leapfrog each other. But enterprises in copilot implementations can pick a model today that's outdated a month later. Anthropic's Claude 3 was an enterprise fave. Then Meta's Llama 3 became an option. And OpenAI launched GPT-4o which may up the ante even more. LLM development is moving fast enough to give enterprises analysis paralysis.

Abstraction layers are developing, but haven't matured. Amazon Bedrock holds a lot of promise because enterprises can swap models. The platform is developing quickly, but the LLMs are moving faster.
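
To make the model-swapping point concrete, here's a minimal sketch using boto3 and Bedrock's unified Converse API, which normalizes the request format across model vendors; the region and model IDs are assumptions for illustration:

```python
import boto3  # pip install boto3

# The region here is an illustrative assumption.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    """Send the same prompt to any Bedrock model via the Converse API."""
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# Swapping the underlying LLM is a one-string change (assumed model IDs):
print(ask("anthropic.claude-3-sonnet-20240229-v1:0", "Draft a status update."))
print(ask("meta.llama3-8b-instruct-v1:0", "Draft a status update."))
```

The catch, as noted above, is that the abstraction only goes so far: prompts tuned for one model rarely transfer unchanged to another.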

Copilot sprawl. With every vendor, platform and service including a copilot, agent or assistant, orchestration is going to be a big issue. Let's just take an average large enterprise stack.

  • Multicloud: Google Cloud Gemini and Amazon Q
  • ERP: SAP with Joule
  • CRM: Salesforce with Einstein Copilot
  • Productivity: Microsoft 365 with Copilot
  • Productivity part 2: A few departments with Google Workspace and Gemini
  • Platform: ServiceNow with Now Assist
  • Custom applications: OpenAI ChatGPT, Anthropic Claude, Meta Llama 3, Databricks DBRX

Assuming you take on the AI assistants from just half of your vendors you are going to be crushed with sprawl and expenses.

Costs. The upcharge for various AI assistants was already starting to take a toll. Now you'll be adding consultants to the mix.

ROI. Generative AI returns to date have been judged based on productivity metrics. As these implementations see higher costs those returns will be shaved. Another wrinkle that may hamper returns: A CXO on our most recent BT150 call noted that he was struggling to make his premium copilot work across meetings and applications.

It's always about the data strategy. There's a reason that AI is being deployed successfully in regulated industries: These enterprises have to have their data in order for compliance. Other firms have to develop their data management and governance mojo to even consider genAI.

If all else fails you could just ask the latest LLM how you should manage the sprawl.
