iPaaS Primer: How the Integration Platform as a Service is Evolving

iPaaS vendors are filling out their capabilities with API management, workflow automation, AI/ML, and, on the cutting edge, GenAI.

I’ve been covering the path from data to decisions for nearly nine years here at Constellation Research, and it’s a path that invariably starts with integration – integrating data sources and data-generating applications so organizations can connect business processes, gain insight, make decisions, and act. With the steady rise of the cloud over these last nine years, the integration platform as a service (iPaaS) has come to the fore. Here’s a closer look at the latest trends in iPaaS, which is one of the three core markets I cover, along with analytical data platforms (data lakes, data warehouses and lakehouses), analytics/BI and citizen data science capabilities including artificial intelligence (AI), machine learning (ML) and generative AI (GenAI).  

iPaaS have emerged as the cloud-based platforms for connecting databases, applications and mission-critical systems both in the cloud and from on-premises environments to the cloud. It’s not just about connecting sources to targets, as in the batch-oriented extract/transform/load (ETL) days of yore. Integration is increasingly a two-way street, with updates and data streams sent to AI models, source systems, automated business processes, and data platforms.

iPaaS have helped organizations move on from brittle, hard-coded, point-to-point integrations. The iPaaS becomes the consistent intermediary between points of integration, facilitated by the platform’s hundreds of out-of-the-box connectors to popular apps and data sources (all of which are maintained by the vendor). The work of connecting sources and systems becomes much more accessible to non-IT types by way of drag-and-drop and point-and-click interfaces. What’s more, the components of integrations created with the iPaaS are modular and can be reused to quickly assemble new integrations. When systems change, components can be quickly updated across all integrations in which they are used, helping teams work faster and be more productive.

When the iPaaS emerged more than a decade ago, vendors typically came out of the data-integration or application-integration arena, but what Constellation calls a next-generation iPaaS has to be able to do it all. Many iPaaS vendors also address business-to-business integration and the electronic data interchange (EDI) requirements seen in supply chain environments. In addition to offering hundreds of prebuilt connectors and templates for common integration flows, iPaaS typically provide monitoring, alerting and debugging capabilities to keep tabs on and troubleshoot integrations, pipelines and jobs.

As detailed below, the five main areas where iPaaS vendors are stepping up are:

API management. Connecting cloud apps and data sources is all about using application programming interfaces (APIs) that abstract away complexity and promote agility and flexibility. Unfortunately, APIs also introduce a new source of complexity in the form of API sprawl. Here’s where API management capabilities come in. iPaaS vendors are stepping up with (1) API lifecycle management capabilities, (2) unified control planes for wrangling all those APIs, and (3) governance frameworks to ensure that APIs are tracked and managed.
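To make the control-plane and governance ideas concrete, here is a minimal sketch of the kind of record a unified control plane might keep per API, and the kind of check a governance framework might run. The schema and function names are invented for illustration and are not drawn from any particular iPaaS vendor.

```python
# Generic illustration of what a unified API control plane tracks, and the
# kind of governance check it enables. The schema is invented for clarity,
# not taken from any particular iPaaS product.
from datetime import date

api_registry = {
    "orders-v1": {"owner": "supply-chain", "version": "1.4",
                  "deprecated_on": date(2024, 1, 1)},
    "orders-v2": {"owner": "supply-chain", "version": "2.0",
                  "deprecated_on": None},
}

def governance_report(registry: dict) -> list[str]:
    """Flag APIs that are past their deprecation date but still registered."""
    return [name for name, meta in registry.items()
            if meta["deprecated_on"] and meta["deprecated_on"] < date.today()]

print("deprecated but still live:", governance_report(api_registry))
```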

Workflow and automation. Organizations continue to face pressure to do more with fewer people, so workflow and automation capabilities are on the rise. It makes sense to automate wherever possible. Where there’s any doubt about next steps, use the iPaaS to create a workflow with humans in the loop for exception handling. Where there is confidence about exactly what an event or an analytic threshold or a prediction means, choose straight-through automation without unnecessary human intervention.
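As a rough illustration of that decision logic, consider the sketch below. The thresholds, review queue and refund scenario are hypothetical, not taken from any specific iPaaS product.

```python
# Minimal sketch of the human-in-the-loop pattern described above.
# The thresholds, queue, and refund scenario are hypothetical, not part
# of any specific iPaaS product.
from dataclasses import dataclass

@dataclass
class RefundRequest:
    order_id: str
    amount: float
    fraud_score: float  # 0.0 (safe) to 1.0 (suspicious)

HUMAN_REVIEW_QUEUE: list[RefundRequest] = []

def process_refund(req: RefundRequest) -> str:
    """Straight-through automation when the signal is unambiguous;
    human-in-the-loop exception handling when it is not."""
    if req.fraud_score < 0.2 and req.amount < 500:
        return f"auto-approved refund for {req.order_id}"
    # Any doubt about next steps: park the request for a person to decide.
    HUMAN_REVIEW_QUEUE.append(req)
    return f"routed {req.order_id} to human review"

print(process_refund(RefundRequest("A-100", 120.0, 0.05)))
print(process_refund(RefundRequest("A-101", 9000.0, 0.65)))
```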

AI/ML. As the name suggests, an iPaaS is a cloud-based platform provided as a service. That puts vendors in the position to provide recommendations based on observable integration patterns. The customer’s private data remains secure and unseen by the vendor, but leading iPaaS vendors are learning from the metadata patterns and graphs of interactions behind the scenes in order to suggest appropriate data sources, pre-existing integrations, and/or next-best integration steps to users. These recommendations help save time and enhance productivity for professional and novice users alike.

GenAI. The latest innovation in iPaaS is the use of GenAI, which is being used to design and deploy new integrations and to explain and document new or existing integrations. GenAI will make the iPaaS accessible to an even broader swath of users through natural language interfaces, and it will help organizations to modernize legacy integrations by explaining, recreating, and optimizing code created by people who have long since left an organization.

Streaming capabilities. The pace of business is always accelerating, so it’s a must to consider low-latency data integration. A next-gen iPaaS should address streaming requirements.

To summarize, modern iPaaS are benefitting professional integrators and tech-savvy business users alike. Using an iPaaS enhanced with augmented capabilities including AI/ML and GenAI, tech-savvy business types can create integrations for themselves rather than having to wait in line for IT to do the work. For the professionals, an iPaaS can accelerate and scale up their integration work, enabling them to:

  • Create, monitor, maintain and modify integrations much more quickly and productively.
  • Validate, troubleshoot and optimize integrations created by the tech-savvy business types.
  • Explain, document and streamline legacy integrations and code.

Recommendations

If there’s a risk in investing in an iPaaS, it’s that the platform might not support all the types of integration or the scale of integration that the organization will need. A next-generation iPaaS is one that is complete and able to serve as the companywide standard. If you can do it all with one platform you’ll get much more out of the investment, both in terms of the technology and the training of people, and there will be no need for point solutions.

Look beyond the next integration project to consider the breadth of integration requirements in recent history and in the foreseeable future. Do you have on-premises requirements? Will you need to work with more than one public cloud? Are investments anticipated in new enterprise apps, such as ERP or CRM systems? What are the workflow and automation requirements?

On the cutting edge, if an iPaaS vendor doesn’t have an AI/GenAI strategy by this point – let alone GenAI-based features in preview – I’d say it’s time to cut them from your short list.

Costs and licensing regimes are crucial. Does the platform you are considering offer modularity? As noted above, a complete iPaaS is a future-proof choice, but if you don’t yet have plans to use certain subsets of capabilities, is it possible to add them (and pay for them) only as and when needed? What subscription models are available? Is pricing per user, per connection or capacity based? The more choices available the better, as the model that makes sense today may get expensive as the number of users or integrations multiplies.

To give you a head start on your tech selection process, I recently updated my Constellation ShortList™ for Integration Platform as a Service. If you don’t see a candidate you are considering on my ShortList, feel free to contact me at [email protected] for an advisory consultation. I wish you the best of success in your technology selection process.


GitHub Elevates Code Scanning to the Next Level By Offering to Auto Fix the Code

In a major advancement for developer productivity and security, GitHub has announced “code scanning autofix,” a new feature powered by GitHub Copilot and CodeQL. Starting today, it will be available in public beta for all GitHub Advanced Security customers. This AI-driven tool helps developers identify and fix vulnerabilities in their code with suggested fixes, streamlining the development process and improving code security. Here’s how it works.

Scanning code is crucial for preventing security breaches and maintaining a strong software supply chain. Vulnerabilities in code can be exploited by malicious actors to gain unauthorized access to systems or steal sensitive data. By proactively identifying and fixing these vulnerabilities, developers can significantly reduce the risk of attacks.

Image courtesy: GitHub

Features such as autofix make life easier for developers of all skill levels. Novice programmers can leverage the suggested fixes to learn from experts and improve their coding practices. Experienced developers can benefit from the automation, allowing them to focus on more complex tasks. Ultimately, any developer working on a codebase with potential vulnerabilities can benefit from this new feature.
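For a sense of what a suggested fix looks like in practice, here is an illustrative before-and-after for a classic SQL injection flaw, the kind of vulnerability CodeQL flags. This is a hand-written sketch, not actual Copilot or CodeQL output.

```python
# Illustrative only: the kind of vulnerability a scanner flags and the kind
# of patch an autofix feature might suggest; not actual tool output.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Flagged: string-built SQL is vulnerable to injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_fixed(conn: sqlite3.Connection, name: str):
    # Suggested fix: a parameterized query keeps data out of the SQL text.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```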

As AI-driven tools continue to mature, code scanning tools will become even more sophisticated. In addition, we can expect to see code scanning tools become more and more integrated directly into the development process. This will make it easier for developers to scan their code for vulnerabilities early and often, an ongoing desire from CIOs and CISOs we work with. 


Micron Technology: More AI, more memory, more demand ahead

Micron Technology CEO Sanjay Mehrotra said artificial intelligence workloads are boosting demand for memory chips as AI-optimized systems with GPUs and upcoming AI PCs are faring well.

In prepared remarks accompanying Micron Technology's second quarter results, Mehrotra said AI server demand for high-bandwidth memory, data center solid-state drives and DDR5 is boosting prices. He said:

"We expect DRAM and NAND pricing levels to increase further throughout calendar year 2024 and expect record revenue and much improved profitability now in fiscal year 2025."

Mehrotra's argument is that Micron is well positioned for edge and data center inference workloads.

"We are in the very early innings of a multiyear growth phase driven by AI as this disruptive technology will transform every aspect of business and society. The race is on to create artificial general intelligence, or AGI, which will require ever-increasing model sizes with trillions of parameters. On the other end of the spectrum, there is considerable progress being made on improving AI models so that they can run on edge devices, like PCs and smartphones, and create new and compelling capabilities. As AI training workloads remain a driver of technology and innovation, inference growth is also rapidly accelerating. Memory and storage technologies are key enablers of AI in both training and inference workloads."

Micron said it is seeing the following tailwinds:

  • Its high-memory offerings with better bandwidth are seeing demand due to better power consumption.
  • Micron is making progress qualifying its memory products with multiple customers. The company recognized its first revenue from HBM3E, which will be part of Nvidia's H200 Tensor Core GPUs, in the fiscal second quarter.
  • Data center SSD revenue hit a record for Micron in calendar 2023.
  • The PC market is expected to grow modestly and accelerate due to AI PC demand, which uses more memory.
  • AI will also drive smartphone memory specs over time.


For its second quarter, Micron reported net income of $793 million, or 71 cents a share, on revenue of $5.82 billion, up 58% from a year ago. Non-GAAP earnings were 42 cents a share.

Wall Street was expecting a non-GAAP second quarter loss of 24 cents a share on revenue of $5.35 billion.

As for the outlook, Micron said third quarter revenue will be $6.6 billion give or take $200 million with non-GAAP earnings of about 45 cents a share, give or take 7 cents. Wall Street was expecting third quarter non-GAAP earnings of 9 cents a share on $6 billion in sales.


Layoffs, DXPs, and Zoho Customer Feedback | ConstellationTV Episode 76

🎬 ConstellationTV episode 76 just dropped! This week, hilarious analyst duo Liz Miller and Holger Mueller unpack enterprise #tech news (Oracle/Microsoft partnerships, impending layoffs and Adobe's new AI assistant).

Then Liz explains why Pantheon Platform made the 2024 #DXP ShortList and Holger hears from Rob O'Brien of ITV Studios about his experience using Zoho #technology. Watch until the end for bloopers!

0:00 - Introduction
1:30 - #Enterprise tech news coverage (partnerships,#layoffs and #AI)
13:49 - Let's Talk About #DXPs with Liz Miller
32:06 - #ZohoDay2024 interview with ITV Studios
42:05 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT/12:00 p.m. ET every other Wednesday!

Watch the episode on ConstellationTV: https://www.youtube.com/embed/GQq9OkVkIys

.Lumen outlines plan to bring computer vision headset to the blind

The best use cases are sometimes so obvious. During Nvidia GTC 2024, Cornel Amariei, CEO of .Lumen, walked through a headset for the visually impaired that uses the sensors and AI technologies found in cars and promises to scale better than a guide dog.

"We have today over 300 million people who are visually impaired, and this number is increasing greatly. But if you check what solutions are out there for them, there are only two solutions for their mobility, and they're 1,000s of years old--a guide dog and the white cane," explained Amariei.

Amariei explained how .Lumen's headset includes spatial navigation AI to understand the pedestrian world the same way a self-driving car would. The headset also includes a non-visual feedback interface that uses haptics to guide the blind.

"Rather than pulling your hand as a guide note, we actually pull your head," he explained. "We tested with over 300 blind individuals, and I would argue it's actually more intuitive than a guide dog pulling your hand. It's all possible because of the latest in self-driving, robotics and artificial intelligence powered by Nvidia."

The technology behind the headset includes two RGB cameras, two depth cameras, infrared sensors, and an inertial measurement unit with the ability to use GPS in some use cases. The data is processed in the headset to run machine learning models and computer vision flows.
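Conceptually, the on-headset loop looks something like the sketch below: capture a depth frame, find the nearest obstacle, and translate its position into a haptic steering cue. All names and logic here are hypothetical; .Lumen has not published its implementation.

```python
# Conceptual sketch of the on-headset loop described above. Everything here
# is a hypothetical stand-in; .Lumen's actual pipeline is not public.
from typing import Tuple

def read_depth_frame() -> list[list[float]]:
    # Stand-in for the headset's RGB/depth/IMU capture.
    return [[5.0, 5.0, 1.2], [5.0, 5.0, 5.0]]  # meters; one close obstacle

def nearest_obstacle(frame) -> Tuple[int, float]:
    """Return (column index, distance) of the closest point in the frame."""
    distance, col = min((d, c) for row in frame for c, d in enumerate(row))
    return col, distance

def haptic_cue(col: int, width: int, distance: float) -> str:
    # Map obstacle position and range to a left/right head-steering pulse.
    side = "left" if col < width / 2 else "right"
    strength = "strong" if distance < 2.0 else "gentle"
    return f"{strength} pulse steering away from the {side}"

frame = read_depth_frame()
col, dist = nearest_obstacle(frame)
print(haptic_cue(col, len(frame[0]), dist))
```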

Amariei added that .Lumen is optimizing for battery life and other features. He said that the headset can be used with a white cane or guide dog as well as by itself. Approval from the Food and Drug Administration is expected next year, and the device will be available in the second half of 2024.



Microsoft names Suleyman head of consumer AI, Microsoft AI

Microsoft is shoring up its consumer Copilot efforts with the addition of Mustafa Suleyman and Karén Simonyan to lead a new group called Microsoft AI. Suleyman and Simonyan were two of three co-founders of Inflection.ai.

Suleyman was also a co-founder of Google's DeepMind.

Inflection had a large language model, Inflection 2.5, that was behind Pi, a personal AI positioned as more conversational than rivals. The addition of Suleyman and Simonyan creates a group solely focused on consumer AI products and research, notably the Bing and Edge Copilots. In June, Inflection announced $1.3 billion in funding in a round led by Microsoft, Reid Hoffman, Bill Gates, Eric Schmidt, and new investor Nvidia. The company at the time said it was building the largest AI cluster with 22,000 Nvidia H100 Tensor Core GPUs.

In a blog post, Microsoft CEO Satya Nadella said:

“Mustafa will be EVP and CEO, Microsoft AI, and joins the senior leadership team (SLT), reporting to me. Karén is joining this group as Chief Scientist, reporting to Mustafa...Several members of the Inflection team have chosen to join Mustafa and Karén at Microsoft. They include some of the most accomplished AI engineers, researchers, and builders in the world. They have designed, led, launched, and co-authored many of the most important contributions in advancing AI over the last five years. I am excited for them to contribute their knowledge, talent, and expertise to our consumer AI research and product making."

Microsoft's consumer generative AI team will report to Suleyman.

Nadella was sure to note that Kevin Scott continues as CTO and EVP of AI and Rajesh Jha remains EVP of Experiences and Devices and in charge of Copilot for Microsoft 365.

A few takeaways:

  • Microsoft was sure to note that "our AI innovation continues to build on our most strategic and important partnership with OpenAI," but it's clear there's some diversification going on with Inflection as well as the Mistral AI partnership.
  • By calling out Scott and Jha, Microsoft is signaling Copilot stability to enterprises.
  • Microsoft has led the generative AI wave for its consumer applications, but hasn't moved the market share needle vs. Google with Edge, Bing and Microsoft Advertising.
  • The stakes for talent are obviously high as Nadella noted that "there is no franchise value in our industry and the work and product innovation we drive at this moment will define the next decade and beyond."
  • Microsoft is doubling down on home grown AI development.


ServiceNow's Strategic Portfolio Management Gains Generative AI, Many Key Updates

In the March 2024 Vision update delivered today, the team behind ServiceNow's Strategic Portfolio Management (SPM) platform provided a deep dive on the significant recent advancements within the product, which are aimed at empowering organizations to optimize their investment decision-making processes. I'll delve into the most impactful updates, with a particular focus on the integration of generative AI, enhanced user experience, and the introduction of collaborative work management features, among the long list of other enhancements presented this morning.

It's evident that ServiceNow considers SPM a fundamentally strategic enterprise capability that takes the company well beyond its ITSM roots, helping leaders from CIOs to program managers deliver on their overarching business and IT objectives across portfolios, programs, and projects. Today's announcements continue to expand the platform into broad new aspects and capabilities, while thoroughly modernizing it around a contemporary view of value stream management that spans industries, lenses, experience models, and project types.

ServiceNow SPM - One Workspace for All Portfolios
ServiceNow's SPM Is Intended As a One-Stop-Shop for All Strategic Portfolio, Program, and Project Needs

A Journey From IT Projects to All Enterprise Program Portfolios

ServiceNow's Strategic Portfolio Management (SPM) has undergone a significant evolution since its inception in 2016. Originally, the platform focused primarily on IT portfolio management, and was known then as IT Business Management. During this stage, its core functionality centered on managing the lifecycle of IT projects and investments, ensuring alignment with IT strategy and optimizing resource allocation.

However, in 2022, ServiceNow recognized the growing need for a more comprehensive approach to portfolio management. Businesses were increasingly demanding a solution that could manage not just IT projects but the entire program and project portfolio across the organization, encompassing both IT and business initiatives. This shift reflected the growing cross-functional importance of digital transformation, where IT plays a crucial role in supporting and enabling business goals as organizations move broadly into a more tech-driven future.

To address this need, ServiceNow's IT portfolio management solution transformed into the richer and more robust Strategic Portfolio Management (SPM) platform. This evolution expanded the tool's capabilities well beyond IT-centric projects. SPM now caters to managing the entire program and project portfolio, encompassing both IT and business initiatives. This holistic approach ensures full alignment between business strategy and investment decisions, fostering a more strategic and integrated approach to resource allocation.

The Latest Updates on ServiceNow SPM

Yoav Boaz, VP and General Manager of Strategic Portfolio Management (SPM) for ServiceNow, first delivered an update on the product's latest advancements and customer success stories. "We just want to point out the major investments ServiceNow is making within SPM and within our product line. Some of the innovations that we brought to market last year were around product feedback that some of you are already deploying, with benchmarks where you can compare your KPIs to other [ServiceNow] customers' KPIs, to different industry lenses, and process optimizations where you can see what bottlenecks are happening within demand or ideation, your resource assignment, your resource management, workspaces and so on," said Boaz as he underscored how SPM is a workspace to manage nearly every type of major transformation and value stream within an enterprise today.

Yoav Boaz SPM ServiceNow March 2024 Update
ServiceNow's Yoav Boaz Kicked Off the March 2024 Vision Update for Strategic Portfolio Management

Here's a breakdown of the most salient trends and changes Boaz explored. The latest product innovations and updates in SPM include:

  • ServiceNow has continued investing extensively in SPM, with numerous major new features.
  • Benchmarking for comparing KPIs with other customers.
  • Industry-specific lenses for tailored insights.
  • Process optimization to identify bottlenecks in demand management, resource allocation, and workspace utilization.
  • Revamped enterprise HR planning.
  • Scenario planning for evaluating different future possibilities.
  • Collaborative work management for enhanced teamwork.

Adoption of the Pro version of SPM grew by over 50% in the past year, which indicates strong customer engagement with new capabilities. ServiceNow is now positioned as a leader in both Strategic Portfolio Management and Value Stream Management in a number of industry reports.

Key customer trends and success stories for SPM:

  • The market is shifting from project-centric to a product value stream approach. Customers are focusing on understanding the interconnectedness of products and their overall value delivery.
  • Organizations are increasingly deploying SPM across the enterprise, not just within specific departments, to align leadership priorities with resource allocation and investment decisions.
  • Generative AI (Gen AI) is generating significant interest among customers, with ServiceNow planning to share its roadmap and initial Gen AI release details in early May.

A number of key SPM customer success stories were highlighted:

  • Western Governors University: Improved demand and value stream management.
  • Juniper Networks: Enhanced demand management.
  • Premise Health: Achieved over $1.7 million in savings through improved project management.
  • MKS (semiconductor manufacturer): Increased project deliveries by 29% and project completion rates by 30% with SPM.
  • Anonymous US government agency: Reduced project management costs by over 15% and saved 67% of time spent on program management using SPM on a $17 billion IT budget.

How ServiceNow Strategic Portfolio Management (SPM) Supports Enterprise Portfolios Including Scaled Agile 

Carina Hatfield, Senior Director of Inbound Product Management, and James Ramsay, Product Management Director of Strategic Portfolio Management/Application Portfolio Management, then explored the roadmap and the expansive new functionalities in ServiceNow's SPM, highlighting its support for Scaled Agile's popular framework. Here are my key takeaways:

Lenses:

  • New "Business Capability Lens" allows organizations to integrate business capability planning with strategic processes.
  • Lenses provide flexibility for planning based on different structures like product, value stream, or digital transformation initiatives.

Requirements Management:

SPM can now handle a broad variety of work types, including Scaled Agile work alongside traditional projects and demands.

Value Streams:

  • Value stream view showcases the relationships between epics, products, and supporting technologies.
  • Architectural runway concept helps visualize dependencies between customer value, technical feasibility, and enabling technologies.

Capacity Planning:

This new feature ensures planned work aligns with available team capacity to avoid resource overload.
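The underlying check is simple to picture. Here is a toy sketch of comparing planned work against team capacity; it illustrates the concept only and is not ServiceNow code.

```python
# Illustrative check behind capacity planning: compare planned work against
# team capacity and surface overloads. A concept sketch, not ServiceNow code.
team_capacity_hours = {"platform": 320, "mobile": 160}   # per sprint
planned_hours = {"platform": 290, "mobile": 210}

for team, capacity in team_capacity_hours.items():
    load = planned_hours.get(team, 0)
    status = "OK" if load <= capacity else f"OVERLOADED by {load - capacity}h"
    print(f"{team}: {load}/{capacity}h planned -> {status}")
```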

Enterprise Agile Planning:

  • Supports various Scaled Agile frameworks like SAFe or Scrum of Scrums.
  • Enables configuration of different work types and team structures.

Project Workspace Enhancements:

  • Improved drag-and-drop functionality and template application.
  • Consolidated view of project resources, financials, risks, and issues.

Benchmarking:

  • Compares your SPM usage and KPIs against anonymized data from other users.
  • Allows filtering by industry and size for a more relevant comparison.

Process Mining:

  • Analyzes the flow of work within SPM processes.
  • Identifies bottlenecks and opportunities for improvement, particularly in demand management.
  • Enables creation of improvement initiatives to streamline processes.

Strategic Planning Workspace:

  • Provides real-time visibility into program progress against objectives and key results (OKRs).
  • Allows automated data collection from various sources across the platform.

Focus on Scaled Agile:

  • Throughout the update, the emphasis is on SPM's ability to support Scaled Agile methodologies.
  • Features like Enterprise Agile Planning and Scaled Agile work management within requirements demonstrate this focus.

Overall, these updates enhance ServiceNow SPM's capabilities for organizations adopting Scaled Agile frameworks while still catering to traditional project management needs.

Analysis and Key Takeaways from the SPM March 2024 Vision Update

Yoav Boaz repeatedly emphasized ServiceNow's commitment to SPM innovation and its positive impact on customer success. The focus on product value streams and enterprise-wide deployment reflects evolving market demands for more capable strategic portfolio management solutions. The upcoming Gen AI integration promises further advancements in automating tasks and generating insights. The customer success stories showcase SPM's ability to deliver significant cost savings, improved project delivery rates, and better resource allocation, even if the customer portfolio presented did tend to be a bit tech-heavy (non-tech companies tend to have less success with sophisticated PPM solutions like this).

For obvious reasons, the most exciting and potentially transformative addition to SPM is the incorporation of generative AI. This cutting-edge technology holds the potential to automate repetitive tasks associated with portfolio management, such as data collection, feedback synthesis, and project analysis. Generative AI also generates insightful recommendations, allowing portfolio managers to focus on strategic initiatives and make data-driven decisions with greater efficiency. This not only streamlines the workflow but also empowers portfolio managers to derive superior investment outcomes.

ServiceNow SPM and the Roles of Generative AI
ServiceNow Intends to Have Generative AI Capabilities for a Wide Variety of SPM Roles

Furthermore, SPM clearly strives to deliver a significantly improved user experience. The interface has been redesigned to be more intuitive and user-friendly. This enhanced usability allows users to navigate features and access critical data effortlessly. Improved user experience fosters broader adoption of SPM across an organization, ensuring a wider range of stakeholders can contribute to the strategic portfolio management process. This fosters a more collaborative and informed approach to investment decisions.

Perhaps the most significant update to SPM, however, is the introduction of new collaborative work management features, which I'll be evaluating soon for potential inclusion in my Work Coordination ShortList. This significant feature focuses on more detailed tasking while streamlining teamwork, and it facilitates automated information sharing and updates among portfolio managers and stakeholders on the progress of task execution. By enabling real-time collaboration, SPM ensures everyone involved in the decision-making process has access to the latest information and can contribute effectively. This fosters transparency and promotes a more cohesive approach to actually delivering on the details of portfolio management. In their customer stories, we heard about the struggle of integrating solutions like Smartsheet properly with SPM; now ServiceNow has its own native solution right within the platform. It's also evident that ServiceNow would like to capitalize on the success of this burgeoning product category as a means of actually executing on the details of business and digital transformation.

ServiceNow SPM Adds Collaboration Work Management/Work Coordination
To Help Deliver On Actually Executing Against Programs/Projects, ServiceNow SPM Adds Work Coordination/CWM

Finally, ServiceNow shared the roadmap for the product, emphasizing a) going well beyond IT to support the types of business transformation projects it can handle, b) an "obsession" with SPM value creation, and c) a vital focus on improved ease of use through various innovations, including generative AI.

Key Takeaways As SPM Climbs Into the Apex Position in PPM and Digital Transformation

ServiceNow's Strategic Portfolio Management (SPM) is evolving at a blazing pace: It now offers a truly overarching suite of enterprise-grade capabilities for organizations wishing to master their entire estate of programs and projects, from business to IT. New features like Scaled Agile support, process mining, and real-time OKR tracking showcase its commitment to the details of empowering strategic decision-making. However, the platform's very complexity now presents a growing challenge. The sheer volume of features and capabilities can be overwhelming, potentially hindering user adoption and, ultimately, value creation.

To truly unlock SPM's potential, ServiceNow now needs to prioritize its digital adoption strategy. In my view, this will include fully leveraging its new "Data to Assist" functionality for proactive guidance and a relentless focus on user-friendliness. While advancements like Generative AI are promising, ensuring a smooth and intuitive user experience is crucial, and I'm gratified to see this in their strategic roadmap for 2024. Now the challenge will be to successfully deliver a high rate of new feature uptake that delivers on customer outcomes. As SPM seeks to provide a full range of the latest advancements, it must not leave users behind by neglecting adoption of some of its most impactful new capabilities. Striking a balance between cutting-edge features and user accessibility will be paramount to maximizing the value proposition of this powerful platform, and ensure its customers achieve the strategic value creation that the platform so evidently strives for.

My Related Research

Transforming IT with Unified Software Services: An Evolving Strategy for CIOs

How to Embark on the Transformation of Work with Artificial Intelligence

Unleashed Amsterdam: Atlassian Refines the End-to-End Developer Experience

AWS re:Invent 2023: Perspectives for the CIO

My new IT Strategy Platforms ShortList

My current Digital Transformation Target Platforms ShortList

Private Cloud a Compelling Option for CIOs: Insights from New Research

The Future of Money: Digital Assets in the Cloud

Four Strategic Frameworks for Digital Transformation

The Future of Work in 2030: A Comprehensive Guide of 40+ Trends
I am seeking companies that want to submit their customer stories to support these trends. Please inquire on inclusion to [email protected].


Nvidia today all about bigger GPUs; tomorrow it's software, NIM, AI Enterprise

Nvidia's business today is all about the bigger GPUs, liquid cooled systems and hyperscale cloud providers all lining up for generative AI services powered by Blackwell. Years from now, it's just as likely we're going to see Nvidia GTC 2024 as the beginning of the GPU leader's software strategy.

Here's what Nvidia outlined about its software stack, which to date has largely been about creating an ecosystem for developers, supporting the workloads that sell GPUs and developing use cases that look awesome on a keynote stage.

  • Nvidia inference microservices (NIMs). NIMs are pre-trained AI models packaged and optimized to run across the CUDA installed base.
  • NIMs partnerships with SAP, ServiceNow, Cohesity, CrowdStrike, Snowflake, NetApp, Dell, Adobe and a bevy of others.
  • AI Enterprise 5.0, which will include NIMs and capabilities that will speed up development, enable private LLMs and create co-pilots and generative AI applications quickly with API calls.
  • AI Enterprise 5.0 has support from VMware Private AI Foundation as well as Red Hat OpenStack Platform.
  • Nvidia's microservices will be supported on Nvidia-certified systems from Cisco, Dell, HPE and others. HPE will integrate NIM into its HPE AI software.
  • NIM will be available across AWS, Google Cloud, Microsoft Azure and Oracle Cloud marketplaces. Specifically, NIM microservices will be available in Amazon SageMaker, Google Kubernetes Engine and Microsoft Azure AI as well as popular AI frameworks.

When those software announcements are rolled up, it's clear Nvidia CEO Jensen Huang is all about enabling enterprise use cases for generative AI. During his GTC 2024 keynote, Huang acknowledged the difficulty enterprises face and laid out Nvidia's inferencing story.

Also see: GTC 2024: Nvidia Huang lays out big picture: Blackwell GPU platform, NVLink Switch Chip, software, genAI, simulation, ecosystem | Nvidia GTC 2024 Is The Davos of AI | Will AI Force Centralized Scarcity Or Create Freedom With Decentralized Abundance? | AI is Changing Cloud Workloads, Here's How CIOs Can Prepare

Huang said:

"There are a whole bunch of models. These models are groundbreaking, but it's hard for companies to use. How would you integrate it into your workflow? How would you package it up and run it? Inference is an extraordinary computational problem. How would you do the optimization for each and every one of these models and put together the computing stack necessary? We're going to invent a new way for you to receive and operate software. This software comes in a digital box, and it's packaged and optimized to run across Nvidia's installed base."

With the packaging, Nvidia bundles all the dependencies (versions, models and GPU targets) and serves them up via APIs. Huang walked through how Nvidia has scaled up chatbots, including one for chip designers that leveraged Llama plus internal proprietary language and libraries.
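In practice, consuming such a packaged microservice amounts to calling an HTTP endpoint. The sketch below assumes a locally deployed, OpenAI-style chat endpoint and an example model name; treat the URL, model id and response schema as illustrative assumptions rather than documented NIM specifics.

```python
# Sketch of consuming a packaged inference microservice over its API.
# The URL, model name, and response schema below are assumptions for
# illustration, not documented NIM specifics.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # hypothetical local endpoint
    json={
        "model": "meta/llama3-8b-instruct",       # example model id
        "messages": [{"role": "user", "content": "Summarize our Q2 pipeline."}],
        "max_tokens": 128,
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```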

"Inside our company, the vast majority of our data is not in the cloud. It's inside our company. It's been sitting there, being used all the time and it's Nvidia's intelligence," explained Huang. "We would like to take that data, learn its meaning, and then re-index that knowledge into a new type of database called the vector database. And so, you're essentially take structured data or unstructured data, you learn its meaning, you encode its meaning. So now this becomes an AI database and that AI database in the future once you create it, you can talk to it."

Huang then ran through a use case of a digital human running NIMs. Enterprises will likely consume NIMs at first via SAP's Joule copilot or ServiceNow's army of virtual assistants. Snowflake will also build out NIM-enabled copilots, as will Dell, which is building AI factories based on Nvidia.

The vision here is that NIMs will be hooked up to real-world data sources and continually improve digital twins of factories, warehouses, cities and anything else physical. The physical world will be software defined.

There are a host of GTC sessions outlining NIM, as well as deployments already in the books, that are worth a watch.

The money game

Nvidia GTC 2024 was an inflection point for the company in that the interest in Huang's talk went well beyond the core developer base. Wall Street analysts will quickly pivot to gauging software potential.

On Nvidia's recent earnings conference call, Huang talked about the software business, which is now on a $1 billion run rate. Relative to the GPU growth, Nvidia's software business is an afterthought. I'll wager that in five years, Nvidia's software business will garner far more focus, much as Apple's financial results came to be as much about services and subscriptions as about iPhone sales.

Huang said:

"NVIDIA AI Enterprise is a run time like an operating system, it's an operating system for artificial intelligence.

And we charge $4,500 per GPU per year. My guess is that every enterprise in the world, every software enterprise company that is deploying software in all the clouds and private clouds and on-prem, will run on NVIDIA AI Enterprise, especially for our GPUs. This is likely to be a very significant business over time. We're off to a great start. It's already at $1 billion run rate and we're really just getting started."

Now that Nvidia has fleshed out its software strategy, it's safe to say the business has moved well beyond the starting line.



Dell Technologies preps new AI servers with Nvidia’s B100, B200, GB200 SuperChip

Dell Technologies said it will support Retrieval-Augmented Generation (RAG) with Nvidia systems as it rolled out a set of Dell PowerEdge servers with HGX H200, HGX B100, HGX B200 and GB200 SuperChip.
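For readers new to the term, RAG in miniature looks like the sketch below: retrieve relevant private documents, then ground the model's answer in them. The retrieve() and generate() functions are stubs invented for illustration; in a Dell/Nvidia stack they would hypothetically map to the vector store and a served model.

```python
# The RAG pattern in miniature: retrieve relevant private documents, then
# ground the model's answer in them. retrieve() and generate() are stubs
# invented for illustration, not Dell or Nvidia APIs.
def retrieve(question: str, k: int = 2) -> list[str]:
    corpus = {"server manual": "Power-cycle via iDRAC, then check PSU LEDs.",
              "hr handbook": "Vacation accrues monthly."}
    # Stub ranking: a real system would use vector similarity search.
    return [text for name, text in corpus.items() if "server" in name][:k]

def generate(prompt: str) -> str:
    return f"[model answer grounded in: {prompt[:60]}...]"  # stub LLM call

question = "How do I power-cycle the server?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```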

The servers, which include Dell's first liquid-cooling system, headlined a slate of additions for the IT giant. Dell's most recent earnings call highlighted strength in AI-optimized systems and a strong backlog. Dell, HPE and SuperMicro have all seen stock gains as AI-optimized systems start to sell well. Dell outlined its additions following Nvidia CEO Jensen Huang's keynote.

In addition, generative AI workloads appear to be moving on-premises due to cost, latency and data security concerns.

Varun Chhabra, SVP of Dell's Infrastructure Solutions Group and Telecom units, said: "One of the things that has been coming out loud and clear as we talk to customers is that bringing GenAI to the enterprise, where customer data is, continues to be challenging." Chhabra cited data silos, governance, compliance and policies as trouble spots, and said customers are telling Dell: "We want turnkey solutions that package up storage, compute, networking, GPUs and a software stack that's easy to understand and consume."

Here's the roundup of what Dell announced in support of Nvidia's GTC launches, covering training through inference.

  • PowerEdge XE9680 with 2x performance with Nvidia HGX H200.
  • PowerEdge XE9680 also has options for next-generation AI acceleration with the air-cooled Nvidia HGX B100 and Dell's first liquid-cooled 8-way GPU system with Nvidia HGX B200.
  • Dell will support Nvidia's GB200 SuperChip, which will feature real-time inferencing with multi-trillion-parameter models, 40x lower total cost of ownership compared to the Nvidia 8-way HGX H100 and 20x better processing performance.
  • Systems will support InfiniBand BlueField-3 SuperNIC options as well as the Spectrum-X Ethernet AI fabric. Dell has supported InfiniBand, but is adding Spectrum-X to the mix.
  • PowerEdge R760xa will support Nvidia's Omniverse OVX 3.0 platform.
  • Dell's RAG system design will have Nvidia microservices via NeMo and an embedding framework in PowerEdge, PowerScale and PowerSwitch gear.
  • PowerScale Ethernet storage will have Nvidia DGX SuperPOD validation.
  • Dell Data Lakehouse will have an analytics engine powered by Starburst.

With the Dell Data Lakehouse, Greg Findlen, senior vice president of Dell's infrastructure solutions group (ISG), AI and data management solutions, said the effort is designed to connect clusters on-premises and in the cloud. "We want to make it easy to scale on-premises and control cost," said Findlen. "GenAI bills are getting more expensive, and enterprises want to leverage local processing wherever that is."

Ihab Tarazi, CTO AI and Compute, Dell ISG, said enterprises have experimented with models in the public cloud, but are looking on-premises for some workloads. The Dell data lakehouse effort also plays into the company's partnership with HuggingFace.

To tie together the AI stack and software, Dell has also launched services.

     


Nvidia Huang lays out big picture: Blackwell GPU platform, NVLink Switch Chip, software, genAI, simulation, ecosystem

Nvidia CEO Jensen Huang said "accelerated computing has reached the tipping point" across multiple industries as the company launched new GPUs including a new platform called Blackwell, NVLink Switch Chip, applications and a developer stack that blends virtual simulations, generative AI, robotics and multiple computing fronts.

Huang laid out Nvidia's lofty goal during his Nvidia GTC 2024 keynote. "The industry is using simulation tools to create products, and it's not about driving down the cost of computing. It's about driving up the scale of computing. We would like to be able to simulate the entire product that we do, completely in full fidelity, completely digitally. Essentially, what we call digital twins. We would like to design it, build it, simulate it, operate it completely digitally," said Huang.

A big theme of Huang's talk was Nvidia as a software provider and ecosystem that sits in the middle of generative AI and multiple key categories. And yes, to do what Nvidia wants, it's going to take much bigger GPUs. "We need much, much bigger GPUs. We recognized this early on. And we realized that the answer is to put a whole bunch of GPUs together. And of course, innovate a whole bunch of things along the way," said Huang. "We're trying to help the world build things. And in order to help the world build things, we gotta go first. We build the chips, the systems, networking, all of the software necessary to do this."

Huang laid out more powerful GPUs and systems in a cadence. The upshot is that Nvidia wants to network GPUs together so they can operate as one. "In the future, data centers are going to be thought of as an AI factory. An AI factory's goal in life is to generate revenues and intelligence," he said.

Nvidia's GTC conference used to be for developers (and still is, by the way), but given the GPU giant's recent run, Wall Street and the tech industry were closely watching Huang's talk for signs of continuing demand, the roadmap ahead and indicators of generative AI growth. Constellation Research CEO Ray Wang noted that GTC is now the Davos of AI. Also see: Will AI Force Centralized Scarcity Or Create Freedom With Decentralized Abundance? | AI is Changing Cloud Workloads, Here's How CIOs Can Prepare

There's a good reason why analysts were closely evaluating everything Huang said. To date, most of the spoils from the generative AI boom have gone to Nvidia, with SuperMicro being an exception. Huang knew about the newfound interest in GTC. He took the stage and joked to the audience that he hoped they realized they weren't at a concert. Huang warned that folks would hear a lot of science and wonky topics.

Nvidia's Blackwell platform is an AI superchip with 208 billion transistors, a second-generation transformer engine, fifth-generation NVLink that scales to 576 GPUs, and other features.

The game here is to build up to full data centers, powered by Nvidia of course.

Key items about the Blackwell platform and supporting cast:

  • The Blackwell compute node has two Grace CPUs and four Blackwell GPUs.
  • 880 petaFLOPs of AI performance.
  • 32TB/s of memory bandwidth.
  • Liquid-cooled MGX design.
  • Blackwell is ramping to launch with cloud service providers including AWS, which will leverage CUDA for SageMaker and Bedrock. Google Cloud will use Blackwell, as will Oracle and Microsoft.

    "Blackwell will be the most successful product launch in our history," said Huang. 

Other launches include the GB200 Grace Blackwell Superchip with 864GB of fast memory and 40 petaFLOPs of AI performance. The Blackwell GPU has 20 petaFLOPs of AI performance. Relative to Nvidia's Hopper, Blackwell has 5x the AI performance and 4x the on-die memory.

To complement those processors, Nvidia has a stack of networking enhancements, accelerations and models to synchronize and have GPUs work together.

Huang said:

“We have to synchronize and update each other. And every so often, we have to reduce the partial products and then rebroadcast out the partial products, the sum of the partial products, back to everybody else. So, there's a lot of what is called all-reduce and all-to-all and all-gather. It's all part of this area of synchronization and collectives so that we can have GPUs working with each other. Having extraordinarily fast links and being able to do mathematics right in the network allows us to essentially amplify even further.”
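What Huang is describing is the collective operation known as all-reduce: every GPU holds a partial result, the partials are summed, and the sum is rebroadcast so every rank ends up with the same value. A naive simulation, with plain Python lists standing in for per-GPU gradient shards:

```python
# Naive simulation of all-reduce: sum the partials held by each "GPU"
# (reduce step), then give every rank a copy of the sum (broadcast step).
# Real systems use ring or tree algorithms over NVLink/InfiniBand.
def all_reduce(partials: list[list[float]]) -> list[list[float]]:
    summed = [sum(vals) for vals in zip(*partials)]  # reduce step
    return [summed[:] for _ in partials]             # broadcast step

gpu_grads = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]     # 3 "GPUs"
print(all_reduce(gpu_grads))  # every rank now holds [0.9, 1.2]
```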

That amplification will come via the NVLink Switch chip, which has 50 billion transistors and the bandwidth to make GPUs connect and operate as one.

Huang made a few jokes about pricing but did note that the company is focused on quality of service and balancing the cost of tokens. Nvidia outlined inference vs. training and data center throughput. Huang argued that Nvidia's software stack, led by CUDA, can optimize model inference and training. Blackwell has optimization built in, and "the inference capability of Blackwell is off the charts."

Other items:

  • Nvidia said it will also package software by workloads and purpose. Nvidia Inference Microservices (NIM) will aim to assemble chatbots in an optimized fashion without starting from scratch. Huang said these microservices can hand off to enterprise software platforms such as SAP and ServiceNow and optimize AI applications.
  • Nvidia is working with semiconductor design partners to create new processors and digital twins connected to Omniverse. Ansys is reengineering its stack and ecosystem on Nvidia's CUDA. Nvidia will "CUDA accelerate" Synopsys. Cadence is also partnering with Nvidia.
  • "We need even larger models. We're going to train it with multimodality data, not just text on the internet, but we're going to train it on texts and images and graphs and charts."
  • Huang said he even simulated his keynote. "I hope it's going to turn out as well as it did in my head."
  • Nvidia issued press releases on switches, the Blackwell SuperPOD, Blackwell, the AWS partnership, the Earth Climate Digital Twin, a partnership with Microsoft and more.
