Cooling system for a single Nvidia Blackwell Ultra NVL72 rack costs a staggering $50,000 — set to increase to $56,000 with next-generation NVL144 racks

Each Blackwell Ultra data center GPU consumes 1,400W, each Grace CPU consumes 300W, and SOCAMM memory consumes 200W per socket. Liquid cooling covers the two CPUs and eight GPUs in each tray, while the memory makes do with heat spreaders.
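The per-tray figures above can be sanity-checked with simple arithmetic. The tray layout (two CPUs, eight GPUs) and the per-component wattages come from the article; the number of SOCAMM sockets per tray is not stated, so it is left as a parameter in this rough sketch.

```python
# Per-component power figures quoted in the article:
# 1,400 W per Blackwell Ultra GPU, 300 W per Grace CPU, 200 W per SOCAMM socket.
GPU_W, CPU_W, SOCAMM_W = 1400, 300, 200

def tray_power(gpus=8, cpus=2, socamm_sockets=0):
    """Estimated compute-tray power in watts.

    The socket count is an assumption: the article gives 200 W per
    SOCAMM socket but does not say how many sockets sit in a tray.
    """
    return gpus * GPU_W + cpus * CPU_W + socamm_sockets * SOCAMM_W

print(tray_power())                   # GPUs + CPUs only: 11800 W
print(tray_power(socamm_sockets=8))   # with 8 hypothetical sockets: 13400 W
```

Even ignoring memory, a single tray dissipates on the order of 11.8 kW, which makes clear why the liquid-cooling loop alone runs to tens of thousands of dollars per rack.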


Anton Shilov, Contributing Writer — Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

Krieger-San: I have to wonder just how much of this hardware is actually getting used to its full potential. Is there some level of wastage due to silicon sitting idle? Also, from my experience with LLMs, higher parameter counts give diminishing returns on quality. If that is truly the case, IMO these companies are sinking millions into hardware that may eventually be useless if the AI bubble really does pop hard.

Stomx (replying to Krieger-San): This is why it is important to add even more "waste" into the chip in the form of FP32 and FP64. Bulldozers at the city dumps will at least dig for the GPUs with HPC extensions when the AI bubble flops.

Bumstead: Fortunately, all these cooling issues and expenses disappear when the servers are launched into space. /s

SaltyFry: Can we distribute this a little better? You could put a server in my house, it's cold here 🥶 and I have to pay for the electricity anyway 😭

Vanderlindemedia (replying to Krieger-San): Over time this hardware will become obsolete, just like Bitcoin ASIC miners did. It surprised me that not only Nvidia and AMD are banking big (billions) on the AI hype, but also vendors of cooling equipment, PSUs, and whatever else you can think of. And I'm not even addressing the impact on the planet: roughly 90% of all that power consumption is converted back into heat. You think it's a good idea to run DCs in the oceans and keep pumping heat into them? But all these monstrous devices are not deployed just for your average chat or AI support agent. There's far more behind it than what they are telling you. AI is becoming very capable of replacing current human jobs. Even the article above reads as if it were written with AI, given the "tailored" nonsense in the content.

Pename: Wow. So when the Chinese make their models more efficient, they save not just compute, but a ton on cooling, energy, and rack space?

