
(Image credit: Nvidia/YouTube) This move promises to shorten the ramp for VR200, as Nvidia's partners will not have to design everything in-house, and it could lower production costs thanks to the economies of scale ensured by a direct contract between Nvidia and an EMS (most likely Foxconn as the primary supplier, followed by Quanta and Wistron, though that is speculation). For example, a Vera Rubin Superchip board recently demonstrated by Jensen Huang uses a very complex design, a very thick PCB, and only solid-state components. Designing such a board takes time and costs a lot of money, so using select EMS provider(s) to build it makes a lot of sense.
J.P. Morgan reportedly cites the increase in power consumption of a single Rubin GPU — from 1.4 kW (Blackwell Ultra) to 1.8 kW (R200) and even 2.3 kW (a previously unannounced TDP for an allegedly unannounced SKU; Nvidia declined a Tom's Hardware request for comment on the matter) — along with increased cooling requirements as one of the motivations for supplying the whole tray instead of individual components. However, we know from reported supply chain sources that various OEMs and ODMs, as well as hyperscalers like Microsoft, are experimenting with very advanced cooling systems, including immersion and embedded cooling, which underscores their expertise in this area.
However, Nvidia's partners will shift from being system designers to becoming system integrators, installers, and support providers. They will retain enterprise features, service contracts, firmware ecosystem work, and deployment logistics, but the 'heart' of the server — the compute engine — will now be fixed, standardized, and produced by Nvidia rather than by the OEMs or ODMs themselves.
Also, we can only wonder what will happen with Nvidia's Kyber NVL576 rack-scale solution based on the Rubin Ultra platform, which is set to launch alongside the emergence of 800V data center architecture meant to enable megawatt-class racks and beyond. The only remaining question is whether Nvidia will further increase its share in the supply chain to, say, rack-level integration.