Cybersecurity, Nvidia GTC, CEO Mandate for AI | ConstellationTV Episode 126

March 25, 2026

AI Is Now the CEO’s Job: Key Trends from Enterprise Tech, Nvidia GTC, and Constellation’s CFF/AIF

Artificial intelligence has moved from experimental projects at the edge of the business to the center of boardroom strategy. Recent conversations across enterprise technology, Nvidia’s GTC conference, and Constellation Research’s own Future Forum (CFF) and AI Forum (AIF) show the same pattern: AI is no longer just an IT concern. It is a structural force reshaping security, infrastructure, operating models, and even geopolitics.

This blog highlights three major trend areas reflected in CRTV Episode 126 and the surrounding events:

  1. Enterprise technology and security news
  2. Nvidia GTC and the next phase of AI infrastructure
  3. CEO- and board-level themes from Constellation’s CFF and AIF

Enterprise Tech & Security: “Security Is a Data Problem”

The platformization of cybersecurity

A major trend in enterprise technology is the rapid convergence of security and data platforms. Traditional cybersecurity vendors have long focused on endpoints and perimeter defense. Now, data-native platforms are moving aggressively into security:

  • Databricks is positioning itself as a security player by building security capabilities directly on top of its data platform, reducing the need for costly data ingestion into separate SIEM tools.
  • Elastic is embedding security natively into its search and observability stack, again emphasizing proximity to data.
  • ServiceNow has expanded into security through acquisitions, bringing a workflow-centric approach that ties incident response and security operations into broader business processes.

The emerging pattern: “security is a data problem.” Rather than shipping telemetry to specialized tools and paying for duplicated storage, enterprises are increasingly looking to secure data where it already lives and use AI on that data to detect and respond to threats.
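As a minimal illustration of the "secure data where it lives" idea, a detection rule can run as an ordinary query against the data platform itself rather than against telemetry copied into a separate SIEM. The table, column names, and threshold below are hypothetical, and sqlite3 stands in for a lakehouse table purely to keep the sketch self-contained:

```python
import sqlite3

# Stand-in for a data-platform table that already holds auth telemetry.
# In practice this would live in the lakehouse; sqlite3 keeps the sketch runnable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_events (user TEXT, outcome TEXT, ts INTEGER)")
events = [
    ("alice", "success", 1), ("alice", "success", 2),
    ("mallory", "failure", 1), ("mallory", "failure", 2),
    ("mallory", "failure", 3), ("mallory", "failure", 4),
    ("bob", "failure", 1), ("bob", "success", 2),
]
conn.executemany("INSERT INTO auth_events VALUES (?, ?, ?)", events)

# Detection rule expressed as a query over data in place:
# flag users with more than three failed logins (an illustrative threshold).
flagged = conn.execute(
    """
    SELECT user, COUNT(*) AS failures
    FROM auth_events
    WHERE outcome = 'failure'
    GROUP BY user
    HAVING COUNT(*) > 3
    """
).fetchall()

print(flagged)  # → [('mallory', 4)]
```

The point is not the SQL itself but the architecture: detection logic goes to the data, so there is no duplicated ingestion pipeline or second copy of the telemetry to pay for and govern.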

Agentic AI and the next security challenge

As agentic AI systems proliferate—autonomous or semi-autonomous agents acting on behalf of users and applications—security thinking is shifting again. At events like RSA, this is showing up as:

  • A focus on securing agents and their toolchains rather than just securing static applications.
  • Increased emphasis on governance of data flows into and out of agents, including prompt injection defenses, data leakage controls, and policy enforcement at the data layer.
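One way to picture "policy enforcement at the data layer" for agents is a guard that every agent-issued tool call must pass before it reaches a tool. The sketch below is hypothetical (the tool names, allowlist, and secret pattern are illustrative, not any vendor's API), but it shows the shape of the control:

```python
import re

# Hypothetical policy: the agent may call only known tools, and payloads
# containing obvious secrets are blocked as a crude data-leakage control.
ALLOWED_TOOLS = {"search_docs", "summarize"}
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)", re.IGNORECASE)

def check_tool_call(tool: str, payload: str) -> bool:
    """Return True if the agent's tool call passes policy, False if blocked."""
    if tool not in ALLOWED_TOOLS:
        return False  # unknown tool: deny by default
    if SECRET_PATTERN.search(payload):
        return False  # payload carries a credential-like string
    return True

print(check_tool_call("search_docs", "quarterly revenue"))  # → True
print(check_tool_call("delete_table", "customers"))         # → False (tool not allowed)
print(check_tool_call("summarize", "api_key=abc123"))       # → False (secret detected)
```

Real deployments layer far more on top (prompt-injection classifiers, per-user entitlements, audit logging), but the deny-by-default posture shown here is the common starting point.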

Non-traditional security vendors with strong data and AI capabilities are using this moment to undercut endpoint-centric incumbents by promising:

  • Less data movement
  • Lower ingestion and storage costs
  • Tighter integration between analytics, AI, and security controls

Advertising and AI: from “digital” to “answer engines”

On the marketing and media side, enterprises are preparing for an era of “answer engine marketing” or “generative engine marketing”—where AI systems surface answers, not pages, and where ads may be injected directly into those answers.

Key trends:

  • Generative AI in discovery: Platforms like Apple Music (in partnership with Ticketmaster), Google TV (with Gemini), and Amazon’s ad stack are using AI to drive content discovery and personalization across connected devices.
  • CTV and data-driven targeting: Connected TV has become a prime advertising surface, powered by granular audience data and predictive models.
  • Resurgence of “old” channels: Direct mail (e.g., Valpak) and outdoor advertising are benefiting from precise data-driven segmentation and dynamic content, contradicting past assumptions that these channels were “dead.”

The core driver behind all these developments is the same: better use of audience and behavioral data, appropriately secured and governed, to place the right content in front of the right person at the right moment—across digital, physical, and hybrid surfaces.


Nvidia GTC: The AI Factory Era

Pivot to inference and the Vera Rubin platform

Nvidia’s GTC conference, which some have called the “Woodstock of AI,” highlighted a clear shift: from a narrow focus on training large models to a broader emphasis on inference at industrial scale.

A central symbol of this shift is the Vera Rubin system, designed as a next-generation AI “factory”:

  • It is optimized around inference workloads, not just training.
  • It introduces specialized components such as the Language Processing Unit (LPU) to improve efficiency and responsiveness for language-heavy inference and agentic systems.
  • It combines GPUs, CPUs, storage, and ultra-fast networking (e.g., NVLink and associated interconnects) in a tightly integrated fabric.

The direction is clear: enterprises are expected to build or rent AI factories—clusters of systems like Vera Rubin—to serve as the backbone for running fleets of agents and AI applications in production.

Software, open models, and vertical LLMs

Nvidia continues to consolidate its position not only as a hardware provider, but as a software and ecosystem orchestrator:

  • A growing catalog of vertical LLMs and domain-specific models spans areas such as healthcare, robotics, simulation, synthetic data generation, and financial services.
  • Many of these models are made available through NeMo and related initiatives that emphasize open or open-weight approaches, helping enterprises jumpstart projects without building everything from scratch.
  • The Nemotron model family and similar collaborations are creating an ecosystem of startups aligned around Nvidia-centric tooling, hardware, and deployment patterns.

While Nvidia does not position itself as an enterprise software vendor in the traditional sense, it is building a de facto AI operating environment that reduces friction for adopting its hardware. In this model, “software is free” in the sense that it helps drive demand for infrastructure.

Energy, scale, and the move toward space-based AI

A more experimental but increasingly serious thread involves energy constraints and new compute geographies:

  • There is growing recognition that large-scale AI will be constrained by power availability as much as by chips or models.
  • Concepts such as space-based data centers powered by solar have moved from science fiction to early-stage architectural thinking, with renderings and prototypes being discussed publicly.
  • Ambitions around custom fabs (e.g., “Terra fab”) underscore a trend toward vertical integration: from chip design to fabrication to deployment in specialized environments (including space, automotive, robotics, and beyond).

While these scenarios may be years away from mainstream enterprise adoption, they reflect a larger reality: AI strategy is increasingly inseparable from energy strategy and geopolitics.


CFF & AIF: AI as a CEO and Board Mandate

Constellation Research’s Future Forum (CFF) and AI Forum (AIF) brought together CEOs, board members, CIOs, and technology leaders. Across these events, a single message cut through the noise:

AI is the CEO’s job.

From “digital transformation” déjà vu to real accountability

The notion that “AI is a CEO issue” echoes earlier eras when customer experience (CX) and digital transformation were framed as top-of-the-house responsibilities. In many organizations, those prior mandates never fully materialized:

  • CX initiatives remained fragmented across marketing, sales, and service, hampered by siloed data and misaligned incentives.
  • Digital transformation often turned into incremental digitization rather than true business model reinvention.

The AI moment feels similar—but with higher stakes. This time:

  • Boards are directly asking CEOs, “What is our AI strategy?”
  • CEOs are expected to own enterprise-wide alignment, not delegate AI entirely to IT or innovation labs.
  • Leaders who fail to steer AI in a coherent way may face personal accountability faster than in past transformation waves.

Culture, operating models, and the pace of change

Participants at CFF and AIF consistently highlighted culture and operating model as the real bottlenecks, not technology:

  • Organizations with entrenched functional silos and conflicting incentives (e.g., sales, marketing, and service) struggle to align on data and AI.
  • AI adoption requires rethinking processes and “sacred cows”—from compensation models to decision rights.
  • There is a growing knowledge gap between boards, CEOs, and AI practitioners, turning basic AI discussions into a game of telephone.

Simultaneously, the pace of change in AI models and platforms makes strategic planning more complex:

  • A decision to standardize on one model provider (e.g., OpenAI) can look outdated within months as competitors like Anthropic or open-source ecosystems catch up or leap ahead.
  • Enterprises are wary of becoming locked into any single foundation model, vendor, or cloud platform, driving interest in model-agnostic architectures and multi-model strategies.

Open source, “claws,” and safe experimentation

At the AI Forum in particular, there was an animated discussion around open AI stacks, including open-weight models and the various “claw” frameworks that allow organizations to orchestrate and govern multiple models and agents.

Key enterprise concerns include:

  • How to experiment safely with open and local models without exposing sensitive data or critical systems.
  • How to define governance boundaries for internal innovators who will inevitably bring in new tools and frameworks.
  • How to leverage open ecosystems to avoid vendor lock-in, while still meeting requirements around support, compliance, and security.

Many leaders are adopting a tiered experimentation strategy: using isolated, low-risk environments (e.g., dedicated machines or sandbox infrastructure) for early trials before integrating new tools into core data and application stacks.

Geopolitics, power, and sovereignty

Finally, CFF and AIF discussions reinforced that AI strategy is now entangled with geopolitics:

  • Differences in energy costs, regulatory regimes, and data sovereignty are shaping where and how AI can be deployed at scale.
  • Regions such as China and India are developing their own models, data practices, and infrastructure approaches, which may diverge meaningfully from North American and European patterns.
  • Executives are increasingly aware that power availability, chip supply, and national policy can materially impact AI roadmaps, even though these factors are largely outside their direct control.

For many enterprises, this means AI planning must include scenarios for supply chain volatility, regulatory shifts, and regional model ecosystems, not just technology capabilities.


From Hype to Hard Choices

Across enterprise news, Nvidia’s GTC announcements, and Constellation’s CFF and AIF, a consistent picture emerges:

  • Security is converging with data and AI, pushing enterprises toward platforms that can analyze and protect information in place.
  • AI infrastructure is industrializing, with “AI factories” and specialized hardware/software stacks designed for always-on inference and agents.
  • AI has become a CEO and board mandate, forcing organizations to confront culture, operating models, vendor lock-in, power constraints, and geopolitics—not just tools and models.

The organizations that navigate this moment successfully will treat AI not as a set of disconnected pilots, but as a strategic, cross-functional transformation that redefines how they create value, manage risk, and compete.

Your Hosts