Nvidia presented its updated data center product roadmap at its GPU Technology Conference this week. The presentation held a few surprises but mostly reaffirmed that the company remains on track to introduce a brand-new GPU architecture every two years and to refresh its AI GPU family every year. As it turns out, Nvidia intends to use die stacking and custom HBM memory with its Feynman GPUs, which will also be accompanied by its Rosa CPUs, a product that had never previously appeared on the roadmap.
As expected, Nvidia plans to roll out its Vera Rubin platform this year, built around the Vera CPU and the Rubin GPU. The platform will be accompanied by five additional processors: the Groq LP30 low-latency inference accelerator, the BlueField-4 data processing unit (DPU), the NVLink-6 switch, Spectrum-X Ethernet with co-packaged optics, and the ConnectX-9 1600G SuperNIC.
The Vera Rubin platform is interesting not only because of the new CPU and GPU architectures, but also because Nvidia is integrating Groq's LPUs into its hardware portfolio. Furthermore, the company appears to favor LPUs over its own Rubin CPX processors, to the point that the latter no longer appear on the roadmap.
Next year, the company plans to update its lineup with the Rubin Ultra AI accelerators, which will feature four compute chiplets and 1 TB of HBM4E memory, dramatically increasing performance over this year's Rubin. These GPU accelerators will be paired with the Groq LP35 LPU, which will add support for the NVFP4 data format and should therefore improve performance further.
Another tangible performance improvement for Nvidia's AI platforms is the introduction of the company's Kyber NVL144 rack-scale solution, which will pack 144 Rubin Ultra GPU packages (linked by an NVLink-7 switch) and should therefore offer at least a 4X performance improvement over Oberon NVL72 racks with 72 Blackwell GPU packages.
In short, Nvidia's data center portfolio will improve in 2027 by increasing the number of GPUs per rack (a quantitative improvement) and by introducing a new LPU with NVFP4 support. The company's 2028 data center products, by contrast, will be based on all-new architectures that bring qualitative improvements.
"The next generation from here is Feynman," said Jensen Huang, chief executive of Nvidia, at the GTC. "Feynman has a new GPU, of course; it also has a new LPU LP40 […] now uniting the scale of Nvidia and the Groq building together LP40, it is going to be incredible. A brand-new CPU called Ros, short for Rosalyn, Bluefield-5, which connects the next CPU with the next SuperNIC CX10. We will have Kyber, which is copper scale up, and we will have Kyber CPO scale-up. So, for the first time we will scale up with both copper and co-packaged optics."