
Micron has announced that it has entered high-volume production of its HBM4 36GB 12-Hi memory, designed for Nvidia's Vera Rubin GPU platform. Making the announcement at GTC 2026, the memory giant simultaneously confirmed high-volume production of the industry's first PCIe 6.0 data center SSD and a new SOCAMM2 module, making it the first memory supplier to bring all three products to volume shipment for the Vera Rubin ecosystem at the same time.
The HBM4 36GB 12H stack runs at pin speeds above 11 Gb/s, delivering more than 2.8 TB/s of bandwidth. That is roughly 2.3 times the bandwidth of Micron's HBM3E in the same 36GB 12H configuration, alongside a more than 20% improvement in power efficiency, according to Micron's internal power calculator data.
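The 2.8 TB/s figure follows directly from the per-pin speed. A minimal sanity check, assuming HBM4's 2048-bit-per-stack interface (a JEDEC HBM4 spec detail, not stated in the article):

```python
# Back-of-the-envelope check of the quoted HBM4 stack bandwidth.
# Assumption (not from the article): HBM4 uses a 2048-bit data
# interface per stack, per the JEDEC HBM4 specification.

PINS_PER_STACK = 2048      # data bits per HBM4 stack (assumed, JEDEC HBM4)
PIN_SPEED_GBPS = 11.0      # Gb/s per pin, as quoted by Micron

bandwidth_gb_s = PINS_PER_STACK * PIN_SPEED_GBPS / 8  # bits -> bytes
print(f"{bandwidth_gb_s / 1000:.2f} TB/s per stack")  # ~2.82 TB/s
```

At exactly 11 Gb/s the math gives 2,816 GB/s, so any pin speed "over 11 Gb/s" clears the advertised 2.8 TB/s.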
"The next era of AI will be defined by tightly integrated platforms developed through joint engineering innovations across the ecosystem. Our close collaboration with NVIDIA ensures that compute and memory are designed to scale together from day one," said Sumit Sadana, executive vice president and chief business officer at Micron Technology, in a press release. "With HBM4 36GB 12H, alongside the industry's first SOCAMM2 and Gen6 SSD now in high-volume production, Micron's memory and storage form a core foundation that unlocks the full potential of next-generation AI."
Micron has also shipped samples of a 48GB 16H HBM4 stack to customers. The additional four die layers give the 16H configuration a 33% capacity increase per HBM placement over the 36GB 12H product, a milestone that points toward denser configurations in future AI accelerator generations.
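The 33% figure is simply the capacity ratio of the two stack heights:

```python
# The quoted 33% gain per HBM placement: four extra die layers take
# a 36GB 12-high stack to a 48GB 16-high stack of the same dies.
capacity_12h_gb = 36
capacity_16h_gb = 48
gain = capacity_16h_gb / capacity_12h_gb - 1
print(f"{gain:.0%} more capacity per placement")  # 33%
```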
Last month, the company announced that its 9650 SSD had entered mass production, the first PCIe 6.0 SSD to reach that stage. The drive supports up to 28 GB/s of sequential read throughput and 5.5 million random read IOPS, doubling PCIe 5.0 read performance while delivering 100% higher performance per watt. Unsurprisingly, it targets AI inference, training, and agentic workloads in liquid-cooled environments and is optimized for Nvidia's BlueField-4 STX reference architecture.
Meanwhile, the 192GB SOCAMM2 module is designed for Nvidia's Vera Rubin NVL72 systems and standalone Vera CPU platforms, with Micron's SOCAMM2 portfolio spanning capacities from 48GB to 256GB. Using these modules, the Vera Rubin platform supports up to 2TB of memory and 1.2 TB/s of bandwidth per CPU.
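The per-CPU figures can be reconciled with the module capacities quoted above. A rough sketch, assuming the 2TB ceiling is reached with the largest 256GB modules (the eight-module count is an inference, not something the article states):

```python
# Rough sizing of a Vera CPU memory configuration from the article's
# figures. The module count is an assumption: it shows one way the
# 256GB top-end SOCAMM2 capacity reaches the 2TB per-CPU ceiling.

MODULE_CAPACITY_GB = 256   # largest SOCAMM2 capacity in Micron's range
CPU_MEMORY_TB = 2          # per-CPU memory ceiling quoted for Vera Rubin
CPU_BANDWIDTH_TBS = 1.2    # per-CPU bandwidth quoted

modules = CPU_MEMORY_TB * 1024 // MODULE_CAPACITY_GB
per_module_gbs = CPU_BANDWIDTH_TBS * 1000 / modules
print(f"{modules} modules, ~{per_module_gbs:.0f} GB/s each")  # 8 modules, ~150 GB/s each
```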