CoreWeave launches latest Nvidia instances, adds Weights & Biases tools
CoreWeave said its Nvidia HGX B300 instances, part of the Nvidia Blackwell Ultra platform, are generally available on its cloud infrastructure. The company also said it will be among the first to deploy the Nvidia Vera Rubin NVL72 platform and Nvidia Vera CPU rack in production in the second half of 2026.
The cloud provider also said it has added new tools to its Weights & Biases software to streamline reinforcement learning and agent development workflows. CoreWeave's software platforms are becoming a bigger part of the company and promise to improve profit margins over time, and Weights & Biases is key to its developer story. CoreWeave's moves come as Nvidia shifts emphasis to AI inference and moves AI agents and physical AI workloads into production.
CoreWeave, which announced the news at Nvidia GTC 2026, has been known to be among the first to bring the latest Nvidia infrastructure to customers. It has also recently added flexible pricing that can give enterprises new options to try out Nvidia's latest AI infrastructure. CoreWeave is positioning itself as a purpose‑built AI cloud with an integrated stack (infrastructure, storage, runtime, observability, and human expertise), not just raw GPUs.
CEO Michael Intrator said CoreWeave is looking to enable enterprises "to build and refine autonomous agents faster and more reliably than ever before." Nvidia CEO Jensen Huang said during his keynote that AI workloads are "shifting from training to operating agents at scale."
Here's a look at what CoreWeave announced at Nvidia GTC 2026.
- Nvidia HGX B300 instances. CoreWeave said HGX B300 is available on its cloud for AI reasoning and inference with a 50% increase in memory over B200 instances. The instances include 2.1 TB of HBM3e memory, Nvidia Quantum-X800 InfiniBand networking, and liquid cooling.
- Second-half 2026 availability of the Nvidia Vera Rubin NVL72 platform and Nvidia Vera CPU racks for inference and agentic AI.
- Training for agents with environment-free reinforcement learning. Post-training LLMs for agentic tasks typically relies on simulated environments. CoreWeave's Weights & Biases unit is offering environment-free training through its Serverless RL offering. CoreWeave said this approach amounts to "on-the-job" training for AI agents and said Serverless RL provides 1.4x faster training, with inference at up to 5x lower cost and 60x lower latency.
- Agent evaluations in Weights & Biases Weave, which connects research and production and enables learning from real user interactions. This feedback loop is designed for agent self-improvement and accelerated development, with failure alerts and continuous monitoring.
- Weights & Biases for robotics AI. CoreWeave's Weights & Biases is releasing two blueprints with Nvidia focused on reinforcement learning and vision-language action models using Nvidia Isaac Lab simulations.
- Training monitoring via Weights & Biases mobile app.
Leading up to Nvidia GTC 2026, CoreWeave launched two new consumption models designed for AI workloads that may differentiate the neocloud from the pack.
The first model is Flex Reservations, which guarantees peak capacity for workloads that ramp or scale unevenly. Customers buy a capacity ceiling with a lower 24/7 holding fee and only pay full usage rates when instances are active. Flex Reservations are in preview.
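To make the economics concrete, here is a minimal sketch of how that billing structure could play out. The rates, the utilization figure, and the exact formula (holding fee on idle reserved capacity, full usage rate on active hours) are illustrative assumptions, not CoreWeave's published pricing.

```python
def flex_reservation_cost(ceiling_gpus, hours_in_period, active_gpu_hours,
                          holding_rate, full_rate):
    """Estimate the bill for a Flex-style reservation, assuming the lower
    24/7 holding fee applies to idle reserved capacity and active instances
    are billed at the full usage rate instead. Rates are per GPU-hour."""
    total_capacity_hours = ceiling_gpus * hours_in_period
    idle_gpu_hours = total_capacity_hours - active_gpu_hours
    return idle_gpu_hours * holding_rate + active_gpu_hours * full_rate

# Hypothetical example: a 64-GPU ceiling over a 720-hour month,
# active for 13,824 GPU-hours (~30% utilization).
cost = flex_reservation_cost(64, 720, 13824,
                             holding_rate=0.50, full_rate=2.00)
print(cost)  # 43776.0
```

The appeal of such a model is that a bursty workload pays the full rate only for the hours it actually runs, while the holding fee keeps the peak capacity guaranteed.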
In addition, CoreWeave launched Spot, which is a lower-cost option for batch analytics and backfills that can tolerate interruptions. Spot is generally available.
Separately, CoreWeave announced a preview of Dedicated Inference, which gives customers the ability to run custom models in production on GPUs of their choice.