
Mark Tyson, News Editor. Mark Tyson is a news editor at Tom's Hardware. He enjoys covering the full breadth of PC tech, from business and semiconductor design to products approaching the edge of reason.
Stomx He had two GPUs with 480GB each on them. How did this huge 480GB of LPDDR5X memory get connected to any NVIDIA GPU, when NVIDIA has never shipped a single GPU with that much memory? GPU memory is always soldered on the board close to the GPU chip, and there is no room around the chip for that amount. Even the upcoming Vera Rubin modules carry barely half as much (HBM4 rather than GDDR, but still).
edzieba Stomx said: He had two GPUs with 480GB each on them. Two CPU+GPU modules, each with 96GB of HBM on the GPU package and 480GB off the package (on the module's board). This isn't the same architecture as consumer devices use.
Stomx edzieba said: Two CPU+GPU modules, each with 96GB of HBM on the GPU package and 480GB off the package. This isn't the same architecture as consumer devices use. I deleted my first post above by mistake. Of course this is not a consumer device, but something is strange here. The 480GB of LPDDR5X was probably attached to the host processors feeding either 2x NVIDIA H100s or 2x GH200 Grace Hopper Superchips. But there were only two HBM3 modules. Does that mean the H100s or GH200s came without HBM3? 2x 96GB is not enough to run even the full DeepSeek R1.
edzieba Stomx said: Does that mean the H100s or GH200s came without HBM3? 2x 96GB is not enough to run even the full DeepSeek R1. The HBM (96GB) is directly connected to the GPU, the LPDDR is directly connected to the CPU, and the GPU can access that memory over the NVLink bus between the two (and vice versa). Since the NVLink bandwidth (900GB/s) exceeds the LPDDR bandwidth (512GB/s), the CPU-attached LPDDR suffers only a minimal bottleneck compared with being wired directly to the GPU.
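edzieba's bandwidth argument can be sketched with back-of-envelope arithmetic. The 900 GB/s NVLink-C2C and ~512 GB/s LPDDR5X figures come from the comment above; the helper function is purely illustrative.

```python
# Back-of-envelope check: is CPU-attached LPDDR5X meaningfully
# bottlenecked when the GPU reads it over NVLink-C2C?
# Bandwidth figures are those quoted in the thread.
NVLINK_C2C_GBPS = 900   # GH200 CPU<->GPU coherent link bandwidth
LPDDR5X_GBPS = 512      # CPU-attached LPDDR5X bandwidth

def effective_gpu_to_cpu_mem_bw(link_gbps: float, mem_gbps: float) -> float:
    """GPU reads of CPU memory are limited by the slower of the two hops."""
    return min(link_gbps, mem_gbps)

bw = effective_gpu_to_cpu_mem_bw(NVLINK_C2C_GBPS, LPDDR5X_GBPS)
print(f"Effective GPU->LPDDR bandwidth: {bw} GB/s")
print(f"NVLink is the bottleneck: {NVLINK_C2C_GBPS < LPDDR5X_GBPS}")
```

Because the link is faster than the memory itself, the GPU sees essentially the full LPDDR5X bandwidth, which is the point edzieba is making.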
teeejay94 "Technically", but in reality a terabyte of RAM isn't worth the cost of a house. It doesn't cost that much to produce, so it can't be worth that much; whatever RAM is sitting at now is a fake, inflated number. Things cost what they cost relative to the cost of production. So no, it doesn't cost $20K, or whatever ludicrous amount it adds up to, to produce the RAM.
DingusDog teeejay94 said: Whatever RAM is sitting at now is a fake, inflated number. Things cost what they cost relative to the cost of production. What are you on about? Nobody said it costs that much to produce. It's called supply and demand: that "fake inflated number" is what people are (begrudgingly) willing to pay.
TheOtherOne teeejay94 said: Things cost what they cost relative to the cost of production. A house "produced" (built) 50 years ago would sell today for probably 10x its build cost. Even after adjusting that "production" cost for inflation, the value of the house is exactly what the market is willing to pay.
Stomx So the confusing part was that the Grace Hopper Superchip is not an additional separate chip but the name of the architecture. The separate chips were 2x Grace processors and 2x Hopper GPUs, combined by NVLink into this unified "superchip", which can also address the LPDDR5X memory: several times slower than HBM3, but far larger. OK, so what's the result? How many tokens per second does it deliver on DeepSeek R1? Was this system dug out of a California dump or landfill? This is what will happen when AI flops.
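The capacity question running through the thread can be checked with rough arithmetic. Assuming DeepSeek R1's roughly 671B parameters stored as 8-bit weights (a common deployment precision), and the per-module capacities quoted above, this illustrative sketch shows why the HBM alone is too small but the unified HBM+LPDDR pool is not; the numbers are estimates, not benchmarks.

```python
# Rough memory-footprint estimate for DeepSeek R1 weights versus
# what a two-module Grace Hopper box offers. Sizes are approximate.
PARAMS_B = 671                 # DeepSeek R1 total parameters, in billions
BYTES_PER_PARAM = 1            # 8-bit (FP8/INT8) weights
HBM_PER_MODULE_GB = 96         # HBM3 on each GH200 package
LPDDR_PER_MODULE_GB = 480      # CPU-attached LPDDR5X per module
MODULES = 2

weights_gb = PARAMS_B * BYTES_PER_PARAM                               # ~671 GB
hbm_total = HBM_PER_MODULE_GB * MODULES                               # 192 GB
unified_total = (HBM_PER_MODULE_GB + LPDDR_PER_MODULE_GB) * MODULES   # 1152 GB

print(f"Weights (8-bit): ~{weights_gb} GB")
print(f"HBM alone: {hbm_total} GB -> fits: {weights_gb <= hbm_total}")
print(f"HBM + LPDDR: {unified_total} GB -> fits: {weights_gb <= unified_total}")
```

This matches the comments: 2x 96GB of HBM cannot hold the full model, but with the CPU-attached LPDDR5X pooled in over NVLink, the system has headroom to spare (KV cache and activations would add further overhead on top of the weights).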
endocine in a few years, these are going to be widely available Reply