
While Nvidia and other chipmakers earn huge profits on AI, the same cannot be said of the companies that actually build the systems: Nvidia reportedly intends to supply its partners with fully assembled Level-10 (L10) compute trays, effectively taking over most of the server's core design and manufacturing.
Instead of shipping individual components, Nvidia reportedly plans to deliver pre-built tray modules that include the Vera CPU, Rubin GPUs, memory, networking, power delivery, and liquid cooling. This goes beyond its earlier GB200 approach (L7–L8 integration) and would standardize nearly the entire compute subsystem. Since these trays could account for roughly 90% of a server's cost, ODMs and hyperscalers would no longer design motherboards or cooling systems themselves, focusing instead on rack-level integration, power infrastructure, cooling distribution (e.g., CDUs), and management software. While this reduces their engineering burden, it also reduces their margins, which they are naturally unhappy about. Beyond margins, ODMs lose the ability to differentiate and innovate beyond what Nvidia offers, leaving price cuts at the expense of their own profits as their only way to compete.
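To see why the margin squeeze follows from the ~90% tray share, consider a back-of-envelope sketch. The 90% figure comes from the report; the system cost, the pre-tray cost split, and the markup percentage below are illustrative assumptions, not reported numbers:

```python
# Illustrative ODM margin math when Nvidia supplies pre-built L10 trays.
# Assumption: the ODM can only mark up the portion of the system it builds
# itself (rack integration, power, cooling distribution, software), not the
# Nvidia-supplied tray, which it passes through at cost.

def odm_gross_margin(system_cost, tray_share, markup_on_own_work):
    """Return ODM gross profit as a fraction of total selling price.

    system_cost        -- total bill-of-materials cost of the server (assumed)
    tray_share         -- fraction of cost that is the Nvidia tray (~0.9 reported)
    markup_on_own_work -- markup the ODM applies to its own value-add (assumed)
    """
    own_work = system_cost * (1 - tray_share)
    profit = own_work * markup_on_own_work
    selling_price = system_cost + profit
    return profit / selling_price

# Before: the ODM designs the board and cooling stack itself,
# so a larger share of the cost (assumed 40%) is its own work.
before = odm_gross_margin(100_000, tray_share=0.6, markup_on_own_work=0.15)
# After: the tray is ~90% of cost, so the ODM marks up only the remaining 10%.
after = odm_gross_margin(100_000, tray_share=0.9, markup_on_own_work=0.15)
print(f"margin before: {before:.1%}, after: {after:.1%}")
```

With these placeholder numbers, the ODM's blended margin falls from roughly 5.7% to roughly 1.5%, even though its markup on its own work is unchanged: the markup simply applies to a much smaller slice of the system.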
Shipping pre-assembled trays to server makers could accelerate deployment timelines and reduce development costs, as Nvidia would rely on large EMS partners (likely Foxconn, Quanta, or Wistron) to mass-produce these complex systems. This is particularly relevant given the increasing engineering difficulty of next-gen hardware, including very thick PCBs, high-density designs, as well as rising GPU power consumption — reportedly scaling from 1.4 kW (Blackwell Ultra) to 1.8 kW and potentially higher, which drives the need for tightly integrated cooling solutions.
However, the shift would also redefine the role of ODMs, turning them from system designers into integrators and service providers, while increasing Nvidia's control over the value chain as well as its margins.
Going forward, this strategy could extend further into rack-scale systems like Kyber NVL576 (the Kyber chassis doubles the number of GPU packages while doubling the number of compute chiplets per package), especially as the industry moves toward 800V data center architectures and megawatt-class racks. That raises the likelihood that Nvidia eventually expands its control beyond trays to full rack-level integration, reducing the role of ODMs even further and shrinking their margins even more.
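The reported per-GPU power figures make it easy to see why such racks land in megawatt territory and why 800V distribution matters. A rough sketch, where the 576-GPU count and ~1.8 kW per-GPU draw are from the report, and the overhead factor for CPUs, networking, pumps, and conversion losses is an assumption:

```python
# Rough rack-power estimate for a dense next-gen GPU rack.
# Reported: Rubin-class GPUs may draw ~1.8 kW each ("potentially higher"),
# and a Kyber NVL576 rack hosts 576 GPU packages.
# Assumed: a 1.3x overhead factor covering CPUs, NICs, fans/pumps, VR losses.

GPU_COUNT = 576          # Kyber NVL576 (reported)
GPU_POWER_KW = 1.8       # per-GPU draw in kW (reported)
OVERHEAD_FACTOR = 1.3    # assumption for non-GPU power and losses

rack_kw = GPU_COUNT * GPU_POWER_KW * OVERHEAD_FACTOR
print(f"~{rack_kw / 1000:.2f} MW per rack")  # ~1.35 MW: megawatt-class

# Bus current at different distribution voltages: higher voltage means
# far less copper and loss for the same power, the case for 800V designs.
for volts in (48, 800):
    amps = rack_kw * 1000 / volts
    print(f"{volts} V bus -> ~{amps:,.0f} A")
```

Under these assumptions a single rack draws about 1.35 MW, and moving from a 48 V to an 800 V bus cuts the distribution current from roughly 28,000 A to under 1,700 A, which is the practical motivation behind the 800V architecture push mentioned above.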
Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.
alan.campbell99: Nvidia shafting their partners, surprise surprise. Who the eff is going to afford to deploy these at scale outside of perhaps hyperscalers without taking on massive debt? Will this add to the pile of allegedly shipped GPUs that are waiting for 'warm shells'? Got to wonder if these new systems can be accommodated in any data center construction that's in progress or fully planned out too.
ekio: I think it's time some people start to say NO to ngreedia. They are abusing market from their position way too far. I can't wait the AI bubble bursts. That will probably happen when OpenAI will fail next time to bullshit their way convincing they deserve extra fundings.