NVIDIA Rubin Platform, Open Models, Autonomous Driving: NVIDIA Presents Blueprint for the Future at CES

Extreme co-design — designing all these components together — is essential because scaling AI to gigascale requires tightly integrated innovation across chips, trays, racks, networking, storage and software to eliminate bottlenecks and dramatically reduce the costs of training and inference, Huang explained.

He also introduced AI-native storage with the NVIDIA Inference Context Memory Storage Platform — an AI‑native KV‑cache tier that boosts long‑context inference with 5x higher tokens per second, 5x better performance per dollar of total cost of ownership and 5x better power efficiency.

Put it all together and the Rubin platform promises to dramatically accelerate AI innovation, delivering AI tokens at one-tenth the cost. “The faster you train AI models, the faster you can get the next frontier out to the world,” Huang said. “This is your time to market. This is technology leadership.”

NVIDIA’s open models — trained on NVIDIA’s own supercomputers — are powering breakthroughs across healthcare, climate science, robotics, embodied intelligence and autonomous driving.

“Now on top of this platform, NVIDIA is a frontier AI model builder, and we build it in a very special way. We build it completely in the open so that we can enable every company, every industry, every country, to be part of this AI revolution.”

The portfolio spans six domains — Clara for healthcare, Earth-2 for climate science, Nemotron for reasoning and multimodal AI, Cosmos for robotics and simulation, GR00T for embodied intelligence and Alpamayo for autonomous driving — creating a foundation for innovation across industries.

“These models are open to the world,” Huang said, underscoring NVIDIA’s role as a frontier AI builder with world-class models topping leaderboards. “You can create the model, evaluate it, guardrail it and deploy it.”

Huang emphasized that AI’s future is not only about supercomputers — it’s personal.

Huang showed a demo featuring a personalized AI agent running locally on the NVIDIA DGX Spark desktop supercomputer and embodied through a Reachy Mini robot using Hugging Face models — showing how open models, model routing and local execution turn agents into responsive, physical collaborators.

“The amazing thing is that is utterly trivial now, but yet, just a couple of years ago, that would have been impossible, absolutely unimaginable,” Huang said.

The world’s leading enterprises are integrating NVIDIA AI to power their products, Huang said, citing companies including Palantir, ServiceNow, Snowflake, CodeRabbit, CrowdStrike, NetApp and Symantec.

“Whether it’s Palantir or ServiceNow or Snowflake — and many other companies that we’re working with — the agentic system is the interface,” Huang said.

At CES, NVIDIA also announced that DGX Spark delivers up to 2.6x higher performance for large models, with new support for Lightricks LTX‑2 and FLUX image models, and upcoming NVIDIA AI Enterprise availability.

AI is now grounded in the physical world through NVIDIA’s technologies for training, inference and edge computing.

These systems can be trained on synthetic data in virtual worlds long before interacting with the real world.

Huang showcased NVIDIA Cosmos open world foundation models trained on videos, robotics data and simulation. Cosmos generates realistic videos from a single image.
