Palantir and Nvidia build sovereign, on-premises AI reference architecture

Published March 12, 2026

Palantir has launched a sovereign AI reference architecture that pairs its software portfolio with Nvidia GPUs, Nvidia AI Enterprise, CUDA, and Nemotron models. The effort targets on-premises deployments.

The architecture is designed as a turnkey deployment for AI datacenters; Palantir and Nvidia are betting that enterprise AI will include a heavy dose of on-premises deployments. The Palantir AI OS Reference Architecture (AIOS-RA) includes the following:

  • Palantir applications, including AIP, Foundry, Apollo, Rubix, and AIP Hub, tested and qualified to run on Nvidia's enterprise reference architectures.
  • Nvidia AI infrastructure built on Blackwell Ultra systems with eight Nvidia Blackwell Ultra GPUs and Nvidia Spectrum-X Ethernet networking.
  • Palantir compute infrastructure with hardened Kubernetes running Foundry services.
  • A unified management plane combining Rubix zero-trust Kubernetes and Palantir's Apollo autonomous deployment and management.
  • Palantir's AIP platform to connect enterprise models to data and systems.
  • Nvidia software, including Nvidia AI Enterprise, CUDA-X libraries, Nemotron open models, and Magnum IO. See: Nvidia Nemotron: Much needed open-source model champion in US

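As a rough sketch of what "keeping inference on premises" means in practice, the snippet below builds a request to a locally hosted, OpenAI-compatible chat endpoint of the kind Nvidia's inference microservices typically expose. The endpoint URL and model id are illustrative assumptions, not values documented in the AIOS-RA:

```python
# Hypothetical sketch: querying a locally hosted Nemotron model via an
# OpenAI-compatible chat-completions endpoint. The URL and model id are
# placeholders, not documented AIOS-RA values.
import json
import urllib.request

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local endpoint
MODEL = "nvidia/nemotron-example"                       # placeholder model id

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completion request that never leaves the local network."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarize today's maintenance logs.")
print(req.full_url)
```

Because the model is served inside the customer's own datacenter, prompts and responses stay behind the firewall, which is the data-sovereignty property the architecture emphasizes.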
According to Palantir, the AI OS reference architecture is aimed at customers with existing GPU infrastructure who need low latency and data sovereignty. Palantir's commercial business has surged for multiple quarters.

Here's a look at the deployment cadence from Palantir documentation.

Palantir, Nvidia architecture cadence