ServiceNow's Strategic Portfolio Management Gains Generative AI, Many Key Updates

In the March 2024 Vision update delivered today, the team behind ServiceNow's Strategic Portfolio Management (SPM) platform provided a deep dive on the product's recent advancements, which are aimed at empowering organizations to optimize their investment decision-making. I'll delve into the most impactful updates, with a particular focus on the integration of generative AI, enhanced user experience, and the introduction of collaborative work management features, among the long list of other enhancements presented this morning.

It's evident that ServiceNow considers SPM a fundamentally strategic enterprise capability that takes the company well beyond its ITSM roots, helping leaders from CIOs to program managers deliver on their overarching business and IT objectives across portfolios, programs, and projects. Today's announcements continue to expand the platform into broad new areas and capabilities, while thoroughly modernizing it around a contemporary view of value stream management that spans industries, lenses, experience models, and project types.

ServiceNow SPM - One Workspace for All Portfolios
ServiceNow's SPM Is Intended As a One-Stop-Shop for All Strategic Portfolio, Program, and Project Needs

A Journey From IT Projects to All Enterprise Program Portfolios

ServiceNow's Strategic Portfolio Management (SPM) has undergone a significant evolution since its inception in 2016. Originally, the platform focused primarily on IT portfolio management, and was known then as IT Business Management. During this stage, its core functionality centered on managing the lifecycle of IT projects and investments, ensuring alignment with IT strategy and optimizing resource allocation.

However, in 2022, ServiceNow recognized the growing need for a more comprehensive approach to portfolio management. Businesses were increasingly demanding a solution that could manage not just IT projects but the entire program and project portfolio across the organization, encompassing both IT and business initiatives. This shift reflected the growing cross-functional importance of digital transformation, in which IT plays a crucial role in supporting and enabling business goals as organizations move into a more tech-driven future.

To address this need, ServiceNow's IT portfolio management solution transformed into the richer and more robust Strategic Portfolio Management (SPM) platform. This evolution expanded the tool's capabilities well beyond IT-centric projects. SPM now caters to managing the entire program and project portfolio, encompassing both IT and business initiatives. This holistic approach ensures full alignment between business strategy and investment decisions, fostering a more strategic and integrated approach to resource allocation.

The Latest Updates on ServiceNow SPM

Yoav Boaz, VP and General Manager of Strategic Portfolio Management (SPM) for ServiceNow, first delivered an update on the product's latest advancements and customer success stories. "We just want to point out the major investments ServiceNow is making within SPM and within our product line. Some of the innovations that we brought to market last year were around product feedback that some of you are already deploying, with benchmarks where you can compare your KPIs to other [ServiceNow] customers' KPIs, to different industry lenses, and process optimizations where you can see what bottlenecks are happening within demand or ideation, your resource assignment, your resource management, workspaces and so on," said Boaz as he underscored how SPM is a workspace to manage nearly every type of major transformation and value stream within an enterprise today.

Yoav Boaz SPM ServiceNow March 2024 Update
ServiceNow's Yoav Boaz Kicked Off the March 2024 Vision Update for Strategic Portfolio Management

Here's a breakdown of the most salient trends and changes Boaz explored. The latest product innovations and updates in SPM include:

  • ServiceNow has continued investing extensively in SPM, with numerous major new features.
  • Benchmarking for comparing KPIs with other customers.
  • Industry-specific lenses for tailored insights.
  • Process optimization to identify bottlenecks in demand management, resource allocation, and workspace utilization.
  • Revamped enterprise HR planning.
  • Scenario planning for evaluating different future possibilities.
  • Collaborative work management for enhanced teamwork.

Adoption of the Pro version of SPM grew by over 50% in the past year, which indicates strong customer engagement with new capabilities. ServiceNow is now positioned as a leader in both Strategic Portfolio Management and Value Stream Management in a number of industry reports.

Key customer trends and success stories for SPM:

  • The market is shifting from project-centric to a product value stream approach. Customers are focusing on understanding the interconnectedness of products and their overall value delivery.
  • Organizations are increasingly deploying SPM across the enterprise, not just within specific departments, to align leadership priorities with resource allocation and investment decisions.
  • Generative AI (Gen AI) is generating significant interest among customers, with ServiceNow planning to share its roadmap and initial Gen AI release details in early May.

A number of key SPM customer success stories were highlighted:

  • Western Governors University: Improved demand and value stream management.
  • Juniper Networks: Enhanced demand management.
  • Premise Health: Achieved over $1.7 million in savings through improved project management.
  • MKS (semiconductor manufacturer): Increased project deliveries by 29% and project completion rates by 30% with SPM.
  • Anonymous US government agency: Reduced project management costs by over 15% and saved 67% of time spent on program management using SPM on a $17 billion IT budget.

How ServiceNow Strategic Portfolio Management (SPM) Supports Enterprise Portfolios Including Scaled Agile 

Carina Hatfield, Senior Director of Inbound Product Management, and James Ramsay, Product Management Director of Strategic Portfolio Management/Application Portfolio Management, then explored the roadmap and the expansive new functionalities in ServiceNow's SPM, highlighting its support for Scaled Agile's popular framework. Here are my key takeaways:

Lenses:

  • New "Business Capability Lens" allows organizations to integrate business capability planning with strategic processes.
  • Lenses provide flexibility for planning based on different structures like product, value stream, or digital transformation initiatives.

Requirements Management:

SPM can flexibly handle a wide variety of work types, including Scaled Agile work alongside traditional projects and demands.

Value Streams:

  • Value stream view showcases the relationships between epics, products, and supporting technologies.
  • Architectural runway concept helps visualize dependencies between customer value, technical feasibility, and enabling technologies.

Capacity Planning:

This new feature ensures planned work aligns with available team capacity to avoid resource overload.

Enterprise Agile Planning:

  • Supports various Scaled Agile frameworks like SAFe or Scrum of Scrums.
  • Enables configuration of different work types and team structures.

Project Workspace Enhancements:

  • Improved drag-and-drop functionality and template application.
  • Consolidated view of project resources, financials, risks, and issues.

Benchmarking:

  • Compares your SPM usage and KPIs against anonymized data from other users.
  • Allows filtering by industry and size for a more relevant comparison.

Process Mining:

  • Analyzes the flow of work within SPM processes.
  • Identifies bottlenecks and opportunities for improvement, particularly in demand management.
  • Enables creation of improvement initiatives to streamline processes.

Strategic Planning Workspace:

  • Provides real-time visibility into program progress against objectives and key results (OKRs).
  • Allows automated data collection from various sources across the platform.

Focus on Scaled Agile:

  • Throughout the update, the emphasis is on SPM's ability to support Scaled Agile methodologies.
  • Features like Enterprise Agile Planning and Scaled Agile work management within requirements demonstrate this focus.

Overall, these updates enhance ServiceNow SPM's capabilities for organizations adopting Scaled Agile frameworks while still catering to traditional project management needs.

Analysis and Key Takeaways from the SPM March 2024 Vision Update

Yoav Boaz repeatedly emphasized ServiceNow's commitment to SPM innovation and its positive impact on customer success. The focus on product value streams and enterprise-wide deployment reflects evolving market demands for more capable strategic portfolio management solutions. The upcoming Gen AI integration promises further advancements in automating tasks and generating insights. The customer success stories showcase SPM's ability to deliver significant cost savings, improved project delivery rates, and better resource allocation, even if the customer portfolio presented did tend to be a bit tech-heavy (non-tech companies tend to have less success with sophisticated PPM solutions like this).

For obvious reasons, the most exciting and potentially transformative addition to SPM is the incorporation of generative AI. This cutting-edge technology holds the potential to automate repetitive tasks associated with portfolio management, such as data collection, feedback synthesis, and project analysis. It can also produce insightful recommendations, allowing portfolio managers to focus on strategic initiatives and make data-driven decisions with greater efficiency. This not only streamlines the workflow but also empowers portfolio managers to pursue superior investment outcomes.

ServiceNow SPM and the Roles of Generative AI
ServiceNow Intends to Have Generative AI Capabilities for a Wide Variety of SPM Roles

Furthermore, SPM clearly strives to deliver a significantly improved user experience. The interface has been redesigned to be more intuitive and user-friendly. This enhanced usability allows users to navigate features and access critical data effortlessly. Improved user experience fosters broader adoption of SPM across an organization, ensuring a wider range of stakeholders can contribute to the strategic portfolio management process. This fosters a more collaborative and informed approach to investment decisions.

Perhaps the most significant update to SPM, however, is the introduction of new collaborative work management features, which I'll be evaluating soon for potential inclusion in my Work Coordination ShortList. This feature focuses on more detailed tasking while streamlining teamwork and facilitating automated information sharing among portfolio managers and stakeholders on the progress of task execution. By enabling real-time collaboration, SPM ensures everyone involved in the decision-making process has access to the latest information and can contribute effectively. This fosters transparency and promotes a more cohesive approach to actually delivering on the details of portfolio management. In the customer stories, we heard about the struggle of integrating solutions like Smartsheet properly with SPM, so ServiceNow now has its own native solution right within the platform. It's also evident that ServiceNow would like to capitalize on the success of this burgeoning product category as a means of actually executing on the details of business and digital transformation.

ServiceNow SPM Adds Collaboration Work Management/Work Coordination
To Help Deliver On Actually Executing Against Programs/Projects, ServiceNow SPM Adds Work Coordination/CWM

Finally, ServiceNow shared the roadmap for the product, emphasizing a) going well beyond IT in supporting the types of business transformation projects it can handle, b) an "obsession" with SPM value creation, and c) a vital focus on improved ease-of-use through various innovations, including generative AI.

Key Takeaways As SPM Climbs Into the Apex Position in PPM and Digital Transformation

ServiceNow's Strategic Portfolio Management (SPM) is evolving at a blazing pace: It now offers a truly overarching suite of enterprise-grade capabilities for organizations wishing to master their entire estate of programs and projects, from business to IT. New features like Scaled Agile support, process mining, and real-time OKR tracking showcase its commitment to the details of empowering strategic decision-making. However, the platform's very complexity now presents a growing challenge. The sheer volume of features and capabilities can be overwhelming, potentially hindering user adoption and, ultimately, value creation.

To truly unlock SPM's potential, ServiceNow now needs to prioritize its digital adoption strategy. In my view, this will include fully leveraging its new "Data to Assist" functionality for proactive guidance and a relentless focus on user-friendliness. While advancements like generative AI are promising, ensuring a smooth and intuitive user experience is crucial, and I'm gratified to see this in their strategic roadmap for 2024. Now the challenge will be to successfully deliver a high rate of new feature uptake that delivers on customer outcomes. As SPM seeks to provide a full range of the latest advancements, it must not leave users behind by neglecting adoption of some of its most impactful new capabilities. Striking a balance between cutting-edge features and user accessibility will be paramount to maximizing the value proposition of this powerful platform and ensuring its customers achieve the strategic value creation the platform so evidently strives for.

My Related Research

Transforming IT with Unified Software Services: An Evolving Strategy for CIOs

How to Embark on the Transformation of Work with Artificial Intelligence

Unleashed Amsterdam: Atlassian Refines the End-to-End Developer Experience

AWS re:Invent 2023: Perspectives for the CIO

My new IT Strategy Platforms ShortList

My current Digital Transformation Target Platforms ShortList

Private Cloud a Compelling Option for CIOs: Insights from New Research

The Future of Money: Digital Assets in the Cloud

Four Strategic Frameworks for Digital Transformation

The Future of Work in 2030: A Comprehensive Guide of 40+ Trends
I am seeking companies that want to submit their customer stories to support these trends. Please inquire on inclusion to [email protected].


Nvidia today all about bigger GPUs; tomorrow it's software, NIM, AI Enterprise


Nvidia's business today is all about bigger GPUs, liquid-cooled systems and hyperscale cloud providers lining up for generative AI services powered by Blackwell. Years from now, though, it's just as likely we'll look back on Nvidia GTC 2024 as the beginning of the GPU leader's software strategy.

Here's what Nvidia outlined about its software stack, which to date has largely been about creating an ecosystem for developers, supporting the workloads that sell GPUs and developing use cases that look awesome on a keynote stage.

  • Nvidia inference microservices (NIMs). NIMs are pre-trained AI models packaged and optimized to run across the CUDA installed base.
  • NIMs partnerships with SAP, ServiceNow, Cohesity, CrowdStrike, Snowflake, NetApp, Dell, Adobe and a bevy of others.
  • AI Enterprise 5.0, which will include NIMs and capabilities that will speed up development, enable private LLMs and create co-pilots and generative AI applications quickly with API calls.
  • AI Enterprise 5.0 has support from VMware Private AI Foundation as well as Red Hat OpenStack Platform.
  • Nvidia's microservices will be supported on Nvidia-certified systems from Cisco, Dell, HPE and others. HPE will integrate NIM into its HPE AI software.
  • NIM will be available across AWS, Google Cloud, Microsoft Azure and Oracle Cloud marketplaces. Specifically, NIM microservices will be available in Amazon SageMaker, Google Kubernetes Engine and Microsoft Azure AI as well as popular AI frameworks.

When those software announcements are rolled up, it's clear Nvidia CEO Jensen Huang is all about enabling enterprise use cases for generative AI. During his keynote at GTC 2024, Huang acknowledged the difficulties enterprises face and laid out Nvidia's inferencing story.


Huang said:

"There are a whole bunch of models. These models are groundbreaking, but it's hard for companies to use. How would you integrate it into your workflow? How would you package it up and run it? Inference is an extraordinary computational problem. How would you do the optimization for each and every one of these models and put together the computing stack necessary? We're going to invent a new way for you to receive and operate software. This software comes in a digital box, and it's packaged and optimized to run across Nvidia's installed base."

With NIMs, Nvidia packages up all the dependencies (versions, models and GPU targets) and serves them via APIs. Huang walked through how Nvidia has scaled up chatbots, including one for chip designers that leveraged Llama plus internal proprietary languages and libraries.

"Inside our company, the vast majority of our data is not in the cloud. It's inside our company. It's been sitting there, being used all the time and it's Nvidia's intelligence," explained Huang. "We would like to take that data, learn its meaning, and then re-index that knowledge into a new type of database called the vector database. And so, you essentially take structured data or unstructured data, you learn its meaning, you encode its meaning. So now this becomes an AI database, and that AI database, in the future, once you create it, you can talk to it."
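The workflow Huang describes can be sketched in a few lines: encode each piece of text as a vector, index the vectors, then answer a query by finding the nearest stored vector. This is a minimal illustrative sketch only; the toy bag-of-words `embed()` function stands in for a real embedding model, and the document strings are invented examples.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a normalized bag-of-words vector.
    A stand-in for a real embedding model such as a transformer encoder."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {tok: c / norm for tok, c in counts.items()}

def cosine(a, b):
    """Cosine similarity between two sparse unit vectors."""
    return sum(w * b.get(tok, 0.0) for tok, w in a.items())

class VectorDB:
    """Minimal in-memory 'vector database': store embeddings, query by similarity."""
    def __init__(self):
        self.items = []  # (text, vector) pairs

    def add(self, text):
        self.items.append((text, embed(text)))

    def query(self, question, k=1):
        q = embed(question)
        ranked = sorted(self.items, key=lambda item: cosine(q, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

db = VectorDB()
db.add("GPU inventory report for Q1")
db.add("Employee onboarding checklist")
print(db.query("Q1 GPU inventory")[0])  # retrieves the inventory document
```

A production system would swap the toy embedding for a learned model and the linear scan for an approximate-nearest-neighbor index, but the "encode meaning, index it, talk to it" flow is the same.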

Huang then ran a use case of a digital human running NIMs. Enterprises will likely consume NIMs at first via SAP's Joule copilot or ServiceNow's army of virtual assistants. Snowflake will also build out NIMs enabled copilots as will Dell, which is building AI factories based on Nvidia.

The vision here is that NIMs will be hooked up to real-world data sources and continually improve digital twins of factories, warehouses, cities and anything else physical. The physical world will be software defined.

There are a host of GTC sessions outlining NIM, as well as deployments already in the books, that are worth a watch.

The money game

Nvidia GTC 2024 was an inflection point for the company in that the interest in Huang's talk went well beyond the core developer base. Wall Street analysts will quickly pivot to gauging software potential.

On Nvidia's recent earnings conference call, Huang talked about the software business, which is now on a $1 billion run rate. Relative to the GPU growth, Nvidia's software business is an afterthought. I'll wager that in five years, Nvidia's software business will garner a lot more focus, much as Apple's financial results came to be as much about services and subscriptions as about iPhone sales.

Huang said:

"NVIDIA AI Enterprise is a run time like an operating system, it's an operating system for artificial intelligence.

And we charge $4,500 per GPU per year. My guess is that every enterprise in the world, every software enterprise company that is deploying software in all the clouds and private clouds and on-prem, will run on NVIDIA AI Enterprise, especially for our GPUs. This is likely to be a very significant business over time. We're off to a great start. It's already at $1 billion run rate and we're really just getting started."

Now that Nvidia has fleshed out its software strategy, it's safe to say the business has moved well beyond the starting line.


Dell Technologies preps new AI servers with Nvidia’s B100, B200, GB200 SuperChip

Dell Technologies said it will support Retrieval-Augmented Generation (RAG) with Nvidia systems as it rolled out a set of Dell PowerEdge servers with HGX H200, HGX B100, HGX B200 and GB200 SuperChip.

The servers, which include Dell's first liquid-cooling system, headlined a slate of additions for the IT giant. Dell's most recent earnings call highlighted strength in AI-optimized systems and a strong backlog. Dell, HPE and Supermicro have all seen stock gains as AI-optimized systems start to sell well. Dell outlined its additions following Nvidia CEO Jensen Huang's keynote.

In addition, generative AI workloads appear to be moving on-premises due to cost, latency and data security.

Varun Chhabra, SVP of Dell's Infrastructure Solutions Group and Telecom units, said: "One of the things that has been coming out loud and clear as we talk to customers is bringing GenAI to the enterprise, where customer data is, continues to be challenging." Chhabra cited data silos, governance, compliance and policies as trouble spots, adding that customers "want turnkey solutions that package up storage, compute, networking, GPUs and a software stack that's easy to understand and consume."

Here's the roundup of what Dell announced in support of Nvidia's GTC launches, covering training to inference.

  • PowerEdge XE9680 with 2x performance with Nvidia HGX H200.
  • PowerEdge XE9680 also has options for next-generation AI acceleration with the air-cooled Nvidia HGX B100 and Dell's first liquid-cooled 8-way GPU with Nvidia HGX B200.
  • Dell will support Nvidia's GB200 SuperChip, which will feature real-time inferencing with multi-trillion parameter models, 40x lower total cost of ownership compared to Nvidia's 8-way HGX H100 and 20x better processing performance.
  • Systems will support InfiniBand BlueField-3 SuperNIC options as well as the Spectrum-X Ethernet AI fabric. Dell has supported InfiniBand, but is adding Spectrum-X to the mix.
  • PowerEdge R760xa will support Nvidia's Omniverse OVX 3.0 platform.
  • Dell's RAG system design will have Nvidia microservices via NeMo and an embedding framework in PowerEdge, PowerScale and PowerSwitch gear.
  • PowerScale Ethernet storage will have Nvidia DGX SuperPOD validation.
  • Dell Data Lakehouse, which will have an analytics engine powered by Starburst.
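The RAG pattern Dell is supporting here is straightforward at its core: retrieve the documents most relevant to a question, then hand them to a model as grounding context. The sketch below is illustrative only; `generate()` is a placeholder for a hosted LLM endpoint, the word-overlap retriever stands in for the embedding-based retrieval a real stack would use, and the document strings are invented examples.

```python
def retrieve(question, documents, k=2):
    """Score documents by word overlap with the question and return the top k.
    A real deployment would use embedding similarity over a vector index."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt):
    """Placeholder for an LLM call; in practice this would POST the prompt
    to a model serving endpoint."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

def rag_answer(question, documents):
    # Retrieval step: pull relevant context from local data.
    context = "\n".join(retrieve(question, documents))
    # Augmentation step: fold the context into the prompt before generation.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

docs = [
    "PowerEdge XE9680 supports eight-way GPU configurations.",
    "The cafeteria opens at 8am.",
]
print(rag_answer("Which server supports eight-way GPUs?", docs))
```

The appeal for enterprises is that the retrieval step runs against data that never leaves their infrastructure, which is exactly the on-premises argument Dell is making.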

With the Dell Data Lakehouse, Greg Findlen, senior vice president of AI and data management solutions in Dell's Infrastructure Solutions Group (ISG), said the effort is designed to connect clusters on-premises and in the cloud. "We want to make it easy to scale on-premises and control cost," said Findlen. "GenAI bills are getting more expensive, and enterprises want to leverage local processing wherever that is."

Ihab Tarazi, CTO of AI and Compute for Dell ISG, said enterprises have experimented with models in the public cloud but are looking on-premises for some workloads. The Dell Data Lakehouse effort also plays into the company's partnership with Hugging Face.

To tie together the AI stack and software, Dell has also launched services.


Nvidia Huang lays out big picture: Blackwell GPU platform, NVLink Switch Chip, software, genAI, simulation, ecosystem

Nvidia CEO Jensen Huang said "accelerated computing has reached the tipping point" across multiple industries as the company launched new GPUs including a new platform called Blackwell, NVLink Switch Chip, applications and a developer stack that blends virtual simulations, generative AI, robotics and multiple computing fronts.

Huang laid out Nvidia's lofty goal during his Nvidia GTC 2024 keynote. "The industry is using simulation tools to create products, and it's not about driving down the cost of computing. It's about driving up the scale of computing. We would like to be able to simulate the entire product that we do completely in full fidelity, completely digitally. Essentially what we call digital twins. We would like to design it, build it, simulate it, operate it completely digitally," said Huang.

A big theme of Huang's talk was Nvidia as a software provider and ecosystem that sits in the middle of generative AI and multiple key categories. And yes, to do what Nvidia wants, it's going to take much bigger GPUs. "We need much, much bigger GPUs. We recognized this early on. And we realized that the answer is to put a whole bunch of GPUs together. And of course, innovate a whole bunch of things along the way," said Huang. "We're trying to help the world build things. And in order to help the world build things, we gotta go first. We build the chips, the systems, networking, all of the software necessary to do this."

Huang laid out more powerful GPUs and systems in a cadence. The upshot is Nvidia wants to network GPUs together so they can operate as one. "In the future, data centers are going to be thought of as an AI factory. An AI factory's goal in life is to generate revenues and intelligence," he said.

Nvidia's GTC conference used to be for developers (and still is, by the way), but given the GPU giant's recent run, Wall Street and the tech industry were closely watching Huang's talk for signs of continuing demand, the roadmap ahead and indicators for generative AI growth. Constellation Research CEO Ray Wang noted that GTC is now the Davos of AI.

There's a good reason why analysts were closely evaluating everything Huang said. To date, most of the spoils from the generative AI boom have gone to Nvidia, with Supermicro being an exception. Huang knew about the newfound interest in GTC. He took the stage and joked to the audience that he hoped they realized they weren't at a concert, warning that folks would hear a lot of science and wonky topics.

Nvidia's Blackwell platform is an AI superchip with 208 billion transistors, a second-generation transformer engine, fifth-generation NVLink that scales to 576 GPUs and other features.

The game here is to build up to full data centers, powered by Nvidia of course.

Key items about the Blackwell platform and supporting cast:

  • The Blackwell compute node has two Grace CPUs and four Blackwell GPUs.
  • 880 petaFLOPs of AI performance.
  • 32TB/s of memory bandwidth.
  • Liquid-cooled MGX design.
  • Blackwell is ramping to launch with cloud service providers including AWS, which will leverage CUDA for SageMaker and Bedrock. Google Cloud will use Blackwell, as will Oracle and Microsoft.

"Blackwell will be the most successful product launch in our history," said Huang.

Other launches include the GB200 Grace Blackwell Superchip with 864GB of fast memory and 40 petaFLOPs of AI performance. The Blackwell GPU has 20 petaFLOPs of AI performance. Relative to Nvidia's Hopper, Blackwell has 5x the AI performance and 4x the on-die memory.

To complement those processors, Nvidia has a stack of networking enhancements, accelerations and models to synchronize GPUs and have them work together.

Huang said:

"We have to synchronize and update each other. And every so often, we have to reduce the partial products and then rebroadcast out the partial products, the sum of the partial products, back to everybody else. So, there's a lot of what is called all-reduce and all-to-all and all-gather. It's all part of this area of synchronization and collectives so that we can have GPUs working with each other. Having extraordinarily fast links and being able to do mathematics right in the network allows us to essentially amplify even further."

That amplification will come via the NVLink Switch chip, which will have 50 billion transistors and the bandwidth to make GPUs connect and operate as one.

Huang made a few jokes about pricing but did note that the company is focused on quality of service and balancing the cost of tokens. Nvidia outlined inference vs. training and data center throughput. Huang argued that Nvidia's software stack, led by CUDA, can optimize model inference and training. Blackwell has optimization built in, and "the inference capability of Blackwell is off the charts."

    Other items:

    • Nvidia said it will also package software by workloads and purpose. Nvidia Inferencing Micro Service (NIMS) will aim to assemble chatbots in an optimized fashion without starting from scratch. Huang said these micro services can hand off to enterprise software platforms such as SAP and ServiceNow and optimize AI applications.
    • Nvidia is working with semiconductor design partners to create new processors, digital twins connected to Omniverse. Ansys is reengineering its stack and ecosystem on Nvidia's CUDA. Nvidia will "CUDA accelerate" Synopsys. Cadence is also partnering with Nvidia.
    • "We need even larger models. We're going to train it with multimodality data, not just text on the internet, but we're going to train it on texts and images and graphs and charts."
• Huang said he even simulated his keynote. "I hope it's going to turn out as well as I had it in my head."
    • Nvidia press releases on switches, Blackwell SuperPod, Blackwell, AWS partnership, Earth Climate Digital Twin, partnership with Microsoft and more.
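The microservice packaging model in the first bullet above can be sketched as a small HTTP client; note that the endpoint URL, model name, and payload shape below are illustrative assumptions for the sketch, not Nvidia's documented API.

```python
import json
import urllib.request

def build_request(prompt,
                  url="http://localhost:8000/v1/completions",  # hypothetical endpoint
                  model="example-model"):                      # hypothetical model name
    """Build an HTTP request for a packaged inference microservice.
    A real deployment would use the service's documented endpoint and schema."""
    payload = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})

def ask(prompt):
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)
```

The appeal of the approach is that the application only speaks a simple request/response protocol like this one, while the microservice handles model loading and GPU optimization behind the endpoint.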


ServiceNow adds more tuck-in acquisitions to build out industry, operational technology offerings

ServiceNow continues to go shopping. The company said it has acquired 4Industry, a Netherlands-based company and ServiceNow partner focused on manufacturing, and Smart Daily Management, a connected worker application from EY.

    The most recent deals bolster ServiceNow's current portfolio of operational technology offerings that are focused on manufacturing, energy and transportation and logistics.

    In February, ServiceNow said it acquired Atrinet's NetACE technology to better focus on telecommunications companies. In December, ServiceNow acquired UltimateSuite for task mining tools.

    ServiceNow's broader strategy with these deals is to take apps that are critical to industries and replatform them on its Now platform. For instance, UltimateSuite will be added to its platform. Atrinet NetACE will be replatformed to ServiceNow. And 4Industry, which provides an app with tasks, knowledge base and data for field workers, was already on ServiceNow's platform.

    By adding these smaller acquisitions, ServiceNow can integrate them and then launch new products. 4Industry and the Smart Daily Management application will be used to build a new Connected Worker service on the ServiceNow platform in 2025.

    Terms of the aforementioned deals weren't disclosed.


Cisco closes Splunk purchase, previews integrations ahead

    Cisco closed its $28 billion purchase of Splunk and outlined what'll be a steady drumbeat of product integrations in the months ahead.

    In a blog post outlining what's next for customers, Cisco CEO Chuck Robbins and Splunk Executive Vice President and General Manager Gary Steele said the unified company will be heavily focused on security and observability.

    With the deal closed, Cisco becomes one of the largest enterprise software companies and retools its business mix. Here's how the product integrations will play out:

    • Cisco's Talos threat intelligence will be embedded into Splunk's cybersecurity offerings.
    • Cisco and Splunk will unify AI assistants for security so there's a common experience across the combined portfolio.
    • Splunk's SIEM and SOAR platforms will leverage Cisco's cloud, network and endpoint analytics.
    • The companies will combine for a full-stack observability platform to work across clouds. The integration will start with a common experience and workflow optimizations across Cisco and Splunk observability offerings.
    • Over time, the combined Cisco and Splunk observability products will include AI-driven root cause enhancements and assistants including Splunk IT Service Intelligence.

    Splunk and Cisco will also combine data for networking and AI deployments. More details about the Cisco integration of Splunk will land in June. Cisco Live is June 2-4 and Splunk's .conf24 is June 11-14.


New Analysis: Nvidia GTC 2024 Is The Davos of AI

    New Chips, New Business Models Ahead In Accelerated Computing

Nvidia stock is up 267% YoY. With over 10,000 people attending in person and up to 100,000 attending virtually, this could be the largest AI conference yet. All eyes are on the next wave of AI, with CEO Jensen Huang set to keynote at 1:00 pm PT at the SAP Center in San Jose. The key things to look for in the keynote:

• New B100 chip, the H100's replacement. Built by TSMC, the new chips are expected to use an advanced 3nm design. Nvidia will take advantage of two dies and TSMC's CoWoS-L (Chip-on-Wafer-on-Substrate-L) advanced 2.5D packaging technology to get larger processors. Should Nvidia move from its monolithic design to a multi-chip module (MCM) approach, the chip maker could create faster derivatives and shorten its product cycle.
• CUDA advancements. This is the software layer that allows software companies to harness the power of the GPUs.
• Key vertical expansion. Healthcare, pharma, defense, and the public sector consume massive workloads.
• Partnerships across the AI ecosystem. Expect announcements from partners such as OpenAI, Microsoft, Amazon, Google, Meta, Micron, Oracle, Super Micro, Dell, Intel, and more.
    • Sovereign AI growth.  Nvidia is focusing beyond data centers and serving governments for sovereign AI.

    Chips Are The Foundational First Inning Of A Nine Inning Age Of AI

The winners of the internet weren’t the early infrastructure companies. They were the companies that built their business models on the internet as a key distribution channel. In AI, it will be the companies that take advantage of the full stack of AI. Software companies will drive the second inning, but the third and fourth innings will belong to the innovative companies that use AI as their core business model. For example, in the war for refrigeration, it wasn't the appliance manufacturers that won; it was Coca-Cola, which innovated a new market for cold beverages.


    Your POV

    Will you be ready for Nvidia's new AI chip? Can competitors catch up?

    Add your comments to the blog or reach me via email: R (at) ConstellationR (dot) com or R (at) SoftwareInsider (dot) org. Please let us know if you need help with your strategy efforts. Here’s how we can assist:

    • Developing your metaverse and digital business strategy
    • Connecting with other pioneers
    • Sharing best practices
    • Vendor selection
    • Implementation partner selection
    • Providing contract negotiations and software licensing support
    • Demystifying software licensing

    Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact Sales.

    Disclosures

Although we work closely with many mega software vendors, we want you to trust us. For the full disclosure policy, stay tuned for the full client list on the Constellation Research website. * Not responsible for any factual errors or omissions. However, happy to correct any errors upon email receipt.

Constellation Research recommends that readers consult a stock professional for their investment guidance. Investors should understand the potential conflicts of interest analysts might face. Constellation does not underwrite or own the securities of the companies the analysts cover. Analysts themselves sometimes own stocks in the companies they cover—either directly or indirectly, such as through employee stock-purchase pools in which they and their colleagues participate. As a general matter, investors should not rely solely on an analyst’s recommendation when deciding whether to buy, hold, or sell a stock. Instead, they should also do their own research—such as reading the prospectus for new companies or, for public companies, the quarterly and annual reports filed with the SEC—to confirm whether a particular investment is appropriate for them in light of their individual financial circumstances.

    Copyright © 2001 – 2024 R Wang and Insider Associates, LLC All rights reserved.

Contact the Sales team to purchase this report on an à la carte basis or join the Constellation Executive Network.



Multi-cloud computing isn't 'a bunch of separate clouds'

    This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly and is brought to you by Hitachi Vantara.

    Oracle CTO Larry Ellison can play multiple roles: Enterprise technology troll, provocateur, visionary and swashbuckling billionaire who will make big bets even when believers are few and far between.

    Given that backdrop, Oracle's earnings conference calls are usually worth a listen. Ellison, along with Oracle CEO Safra Catz, did a victory lap as infrastructure-as-a-service revenue was up 49% in the third quarter. Oracle Cloud Infrastructure (OCI) has GPUs from Nvidia and is only lacking capacity to grow faster. To that end, Oracle is building data centers as fast as it can.

    Many of those data centers are being built to run OCI Database Services within Microsoft Azure data centers. Oracle is also building out OCI data centers for countries that want to keep their data and large language models (LLMs) in country.

    Toward the end of Oracle's conference call, Ellison dropped some knowledge on how multi-cloud environments should work. Apparently, Oracle Database@Azure can be a model for other hyperscalers. I chuckled at the idea since the only reason the Oracle-Microsoft deal works is because both have mutual enemies in Amazon Web Services and Google Cloud and mutual customers.

    Ellison's take on multi-cloud is that Oracle's Autonomous Database would be the reason multiple hyperscalers would partner with the company. Ellison said (emphasis mine):

"We expect the multi-cloud initiative to continue to expand amongst other hyperscalers where we build OCI regions inside of and coexisting with their existing cloud infrastructure. We think the era of walled gardens is coming to an end. What customers really want is the ability to use multiple clouds that talk to one another. It is really called cloud computing. It's not called a bunch of separate clouds. We expect multi-cloud to become the norm and Oracle DB to be available everywhere. We think that will preserve our franchise in database because the autonomous database is a unique piece of technology, and there's nothing like it in the world. No one else is working on anything like that. No one else is even trying to duplicate the autonomous database. We think it will become a very successful product. In every cloud."

    A few takeaways from Ellison's comments.

    • Walled gardens do need to end, but that doesn't mean they will.
    • Interoperability between all the hyperscalers would be swell.
    • Interoperability would also help blend private cloud infrastructure, which will play a role in AI workloads. Constellation Research analyst Dion Hinchcliffe has argued that AI workloads will fundamentally change cloud economic models.
    • Oracle can prod the likes of AWS and Google Cloud to do Oracle DB deals because the company could always make support more difficult on those clouds.
    • Sure, Ellison is talking up Oracle Autonomous Database, but the technology is unique enough to make strange bedfellows.
    • Customers may demand more interoperability to make it easier to mix and match hyperscale compute.
• The Oracle Database@Azure model isn't much different than retail store-within-a-store partnerships (Kohl's-Amazon, Kohl's-Sephora, Apple in Best Buy, etc.).
• Oracle's co-location strategy within other cloud data centers has lower capital expenses and gives the company more coverage. Simply put, Oracle Database@Azure, @AWS and @Google Cloud is damn good strategy, and joint customers are everywhere.

    Today, multi-cloud really just means there are two or three providers operating in silos. For instance, Equifax uses AWS for Oracle mission critical workloads and Google Cloud for its data layer. Despite "all-in" press releases, most enterprises prefer to have a second cloud provider to keep the primary one honest.

    Regulators may also push these hyperscale cloud partnerships between archenemies. AWS recently said it will offer free data transfer for enterprises leaving the cloud. That move follows a similar announcement by Google Cloud. Microsoft Azure matched those announcements. You should check out the fine print on all those announcements.

    The Federal Trade Commission has launched an inquiry into cloud computing business practices including data egress fees, software licensing and minimum usage contracts.

    It's unclear whether regulators and Ellison will spur hyperscale cloud harmony, but stranger things have happened.


Chirag Mehta on the intersection of cybersecurity, design thinking and AI

    New Constellation Research analyst Chirag Mehta outlined his approach to cybersecurity on DisrupTV. Chirag is the former Chief Product Officer at SaaS vendors Zipline and iCIMS and held various leadership roles at Google and SAP.

    Here are a few takeaways from his DisrupTV appearance.

The variables in cybersecurity. Mehta said there are three key parts to cybersecurity. First, data-driven signals and what a company can infer from them and how it responds. The second part is the human story. "We human beings are inherently trustworthy," he said. And third, what does the response look like? "We're all going to get breached. How can I respond to the test? What does that incident response kind of system look like?"

    Design thinking and cybersecurity. Mehta's previous stints revolved around developing applications and products at enterprise software companies. He focuses on design thinking to humanize cybersecurity for CXOs. "You need to be proactive, more outcome based, and risk based," said Mehta. "I'm passionate about helping CXOs find their way and make their organizations more secure."

     

    AI and cybersecurity. Perimeter and network-based approaches to cybersecurity are often flawed because "your employees are everywhere, and your data is everywhere." As a result, "AI has a role in creating a dynamic perimeter and what's going on in my environment," said Mehta. "The dynamic AI perimeter will happen, and the reason is the rise of AI means all of these problems are not solvable by human beings."

    "Sophisticated, AI-driven attacks need a sophisticated response, which is AI-driven," said Mehta.

    "You're going to have access to a vast amount of data telemetry, all the signals that you can analyze, and you can actually defend, including the behavior of your end users. You can defend against these attacks," he said.

What is emerging are cybersecurity platforms that use AI to become a cyber operating system.

    ROI and cybersecurity. CXOs have said that cybersecurity budgets have been poached in the last year for AI projects. Mehta said that focus on returns is misplaced. He said:

    "Security is cost of doing business. If you don't have security, you won't have business, and then you won't have ROI on anything else. It's not very clear to most leaders that they need to invest into foundational technology so that they can actually have a business, invest in generative AI and everything else in the digital transformation journey. Cybersecurity is not an optional thing."

    Cybersecurity is a risk-based investment since one mistake can hit your stock price, result in SEC disclosure and harm the business, said Mehta. Enterprises will use AI to model various threat vectors and have a defense for any given situation.

    By nature, defenses will have to become more autonomous. "If the idea is that a human has to get involved when there's an attack you're not going to scale," said Mehta.

     


Ethical Marketing: Navigating the Dark Arts of Paid Marketing & Bot-Driven Metrics in an Age of AI

    CRTV SPECIAL EDITION: Ethical Marketing - Navigating The Dark Arts And Temptations Of Paid Marketing And Bot-Driven Metrics In an Age of AI

    In this special ConstellationTV live episode, CEO and founder R "Ray" Wang moderates a live panel of industry experts for a complimentary workshop on operating with trust and integrity in today’s #marketing landscape. You won't want to miss insights from the following experts:

    • John Furrier, Cofounder & CEO of SiliconANGLE & theCUBE
    • Crystal G., VP of CX and Ops, ARInsights
    • Molly Lauck, Director of Communications, CMTA
• Ludovic Leforestier, Founder, Starsight Communications, and IIAR (Institute of Influencer & Analyst Relations) Board Member
    • Larry Dignan, Editor in Chief, Constellation Research, Inc. Insights
    • Liz Miller, VP & Principal Analyst, Constellation Research, Inc.

    Topics covered in today's session include:

    • The impact of paid #media vs organic audiences
    • What are sustainable approaches to building influence and audience
    • Where you can go to support independent #analysts and media who participate with integrity
    • What are the leading practices to building an audience with both paid media and organic approaches
    • What are the key marketing metrics in an Age of AI

ConstellationTV is a bi-weekly web series hosted by Constellation analysts. Tune in live at 9 AM PT / 12 PM ET every other Wednesday! Subscribe to our YouTube channel: https://lnkd.in/gSw27hBU

On ConstellationTV: https://www.youtube.com/embed/Bs8fanTgzOY