Industry preps new ‘cheap’ HBM4 memory spec with narrow interface, but it isn’t a GDDR killer — JEDEC’s new SPHBM4 spec weds HBM4 performance and lower costs to

usertests For consumers, we need "3D DRAM" to increase capacity; stacked DRAM won't do it cheaply. The bandwidth of GDDR7 is good enough for the most part, and rising. GDDR does lose out on efficiency, oh well.

teeejay94 So you literally just said it yourself: it's cheaper for basically everyone if there are higher orders of HBM memory going into GPUs and such. LMFAO. So does the current supply-and-demand model only apply when suppliers feel like it? I'd say yes, 100%. They are cherry-picking what they want to sell for a lot and what they want to sell for less. Somehow higher orders of HBM memory will decrease costs, yet, mind-bogglingly, higher orders of UDIMM RAM have shot costs through the roof. Simply amazing the mental gymnastics you have to do to make this make sense. The truth is that when something is produced on a much larger scale, it costs less to produce, no matter the product; keep that in mind. It's really not hard to figure out you've been scammed and gouged by RAM manufacturers. And some of you actually believe it, which is the craziest part to me. Some of you actually believe it costs more to produce things on a larger scale, when you get products like memory chips cheaper the more you order 💡

thestryker usertests said: "For consumers, we need '3D DRAM' to increase capacity; stacked DRAM won't do it cheaply." I don't really think higher capacity is particularly important yet, until something shifts requirements-wise. CAMM2 in both LPDDR and DDR form can reach 64GB, which is more than the vast majority of people need, and DIMMs and SODIMMs are up to 64GB per module. At this point I'm kind of hoping that all the focus on enterprise sales will mean that, as things normalize, we'll see more high-capacity kits at reasonable prices. I'm also hoping that memory ICs built on newer manufacturing processes will scale better in bandwidth, latency, and capacity. We've seen a little of that with 32Gb ICs, but most of those kits have just been announced rather than hitting the market.
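
As a rough back-of-the-envelope on how IC density maps to module capacity (a sketch, not from the thread: it assumes a conventional non-ECC DDR5 UDIMM built from x8 devices, i.e. 8 chips per rank on a 64-bit bus; chip counts are illustrative):

```python
# Rough sketch: how DRAM IC density translates to module capacity.
# Assumes a plain non-ECC DDR5 UDIMM with a 64-bit data bus built from
# x8 devices (8 chips per rank); these are common layouts, not figures
# taken from the discussion above.

def module_capacity_gb(ic_density_gbit: int, ranks: int, chips_per_rank: int = 8) -> float:
    """Capacity in GB for a module built from ICs of the given density (Gbit)."""
    total_gbit = ic_density_gbit * chips_per_rank * ranks  # total Gbit on the module
    return total_gbit / 8  # 8 bits per byte -> GB

if __name__ == "__main__":
    for density in (16, 24, 32):
        for ranks in (1, 2):
            print(f"{density}Gb ICs, {ranks}-rank: {module_capacity_gb(density, ranks):.0f} GB")
    # 32Gb ICs in a dual-rank layout land at the 64GB-per-module figure
    # mentioned above; 16Gb ICs top out at 32GB in the same layout.
```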

usertests thestryker said: "I don't really think higher capacity is particularly important yet until something shifts requirements-wise. CAMM2 in both LPDDR and DDR form can reach 64GB, which is more than the vast majority of people need. DIMMs and SODIMMs are up to 64GB capacity per module." We're looking at up to 10 years before it hits the market. The most obvious consumer/prosumer application of higher capacities would be larger local LLMs, in the 100-billion to 1-trillion-parameter range. Outside of pure text, I think some of the video generation models can already use over 50 GB of memory. Simply increasing capacity while lowering the cost should open up new non-AI applications, though. I think we feel that we have more than enough because DRAM scaling slowed down so much over the last 15 years that it couldn't be "wasted" at the same rate anymore.
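
To put the 100-billion to 1-trillion-parameter range into memory terms, here is a minimal sizing sketch (an assumption-laden illustration, not from the thread: it counts only weight storage at a given bytes-per-parameter and ignores KV cache, activations, and runtime overhead):

```python
# Minimal sketch: approximate weight-storage footprint of a local LLM.
# Only counts the parameters themselves; KV cache, activations, and
# framework overhead would come on top of this.

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight size in GB (using 1 GB = 1e9 bytes for simplicity)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

if __name__ == "__main__":
    for params in (100, 400, 1000):   # 100B up to 1T parameters
        for bpp in (2.0, 0.5):        # FP16 vs. roughly 4-bit quantization
            print(f"{params}B params @ {bpp} bytes/param: ~{weights_gb(params, bpp):.0f} GB")
    # Even aggressively quantized 100B+ models need tens to hundreds of GB,
    # which is where the capacity argument in this thread comes from.
```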
