
But while Nvidia and other chipmakers make huge profits on AI, the same cannot be said of actual system suppliers: Nvidia reportedly intends to supply its partners with fully assembled Level-10 (L10) compute trays, effectively taking over most of the server's core design and manufacturing.
Instead of shipping individual components, Nvidia reportedly plans to deliver pre-built tray modules that include the Vera CPU, Rubin GPUs, memory, networking, power delivery, and liquid cooling. This goes beyond its earlier GB200 approach (L7–L8 integration) and would standardize nearly the entire compute subsystem. Since these trays could account for roughly 90% of a server's cost, ODMs and hyperscalers would no longer design motherboards or cooling systems themselves, but would focus only on rack-level integration, power infrastructure, cooling distribution (e.g., CDUs), and management software. While this certainly reduces their engineering burden, it also reduces their margins, which is naturally not something they would welcome. Beyond margins, ODMs lose the ability to differentiate and innovate beyond what Nvidia offers them, which means the only way for them to compete is to cut prices at the expense of their profits.
Shipping pre-assembled trays to server makers could accelerate deployment timelines and reduce development costs, as Nvidia would rely on large EMS partners (likely Foxconn, Quanta, or Wistron) to mass-produce these complex systems. This is particularly relevant given the increasing engineering difficulty of next-gen hardware, including very thick PCBs, high-density designs, as well as rising GPU power consumption — reportedly scaling from 1.4 kW (Blackwell Ultra) to 1.8 kW and potentially higher, which drives the need for tightly integrated cooling solutions.
However, the shift would also redefine the role of ODMs, turning them from system designers into integrators and service providers, while increasing Nvidia's control over the value chain as well as its margins.
Going forward, this strategy could extend further into rack-scale systems like Kyber NVL576 (the Kyber chassis doubles the number of GPU packages while also doubling the number of compute chiplets per package), especially as the industry moves toward 800V data center architectures and megawatt-class racks. That raises the likelihood that Nvidia eventually expands its control beyond trays to full rack-level integration, reducing the role of ODMs even further and shrinking their margins even more.
Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.