
"We are trying to use AI wherever we can in our design process," Dally told Google's Jeff Dean . "I would love to have the end-to-end stage where I could simply say, 'design me the new GPU,' but I think we are a long way from that."
Nvidia is already using AI across multiple stages of chip design, from circuit-level optimizations to system-level exploration, achieving orders-of-magnitude productivity gains and, in some cases, better-than-human results, according to Dally.
At the lowest level, AI has already transformed standard cell development, one of the most time-consuming steps when transitioning to a new fabrication process. Porting a standard cell library of roughly 2,500–3,000 cells previously required a team of eight engineers working for about 10 months, according to Dally. Nvidia has replaced this work with a reinforcement learning system called NVCell, which can now complete the same task overnight on a single GPU.
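Dally did not describe how the system works internally, but the trial-and-error framing maps onto a standard reinforcement learning loop. The toy sketch below only illustrates that loop: a tabular Q-learning agent learns to place a handful of devices into tracks so the resulting cell stays narrow. The grid model, reward, and hyperparameters are invented for illustration and do not reflect NVCell's actual formulation.

```python
# Toy sketch only: tabular Q-learning on a drastically simplified
# "standard cell layout" task. NVCell itself is not public; the grid
# model and reward here are illustrative assumptions.
import random

N_DEVICES = 6          # devices to place, one per step
N_TRACKS = 4           # candidate tracks inside the cell
EPISODES = 2000
ALPHA, GAMMA, EPS = 0.2, 0.95, 0.1

# Q[state][action]: state = index of the device being placed, action = track.
Q = [[0.0] * N_TRACKS for _ in range(N_DEVICES)]

def episode_reward(placements):
    # Reward favours narrow cells: penalise the most crowded track.
    widest = max(placements.count(t) for t in range(N_TRACKS))
    return -float(widest)

for _ in range(EPISODES):
    placements = []
    for step in range(N_DEVICES):
        if random.random() < EPS:
            action = random.randrange(N_TRACKS)                       # explore
        else:
            action = max(range(N_TRACKS), key=lambda a: Q[step][a])   # exploit
        placements.append(action)
        # Terminal reward only, propagated back through the Q-table.
        if step == N_DEVICES - 1:
            target = episode_reward(placements)
        else:
            target = GAMMA * max(Q[step + 1])
        Q[step][action] += ALPHA * (target - Q[step][action])

best = [max(range(N_TRACKS), key=lambda a: Q[s][a]) for s in range(N_DEVICES)]
print("greedy placement per device:", best)
```

In the real task, the reward would come from design-rule checks and extracted cell area rather than a crowding count, which is where the overnight GPU runs go.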
At a higher level, Nvidia has developed internal large language models, Chip Nemo and Bug Nemo, trained on proprietary architecture documentation covering every GPU Nvidia has ever developed. These LLMs act as engineering assistants that can explain to junior designers how complex hardware blocks work, so senior engineers no longer need to field questions an LLM can answer.
"We had a series of LLMs that we called Chip Nemo and Bug Nemo. We took a generic LLM, and then we fine-tuned it by feeding it all of the design documents proprietary to Nvidia," Dally said. "So this is stuff you cannot get outside the company, it is all of the RTL, hardware design docs, all of the RTL for every GPU ever designed at Nvidia, all of the architecture specs for those. Now you have this LLM that is actually very smart about GPU design. […] When you have a junior designer, they can ask Chip Nemo, and Chip Nemo will explain [how GPUs work]. It improves productivity that way; it is a very patient mentor."
Beyond cell libraries and engineering assistance, Nvidia is applying reinforcement learning to classical circuit design problems. RL-based systems explore design options through trial and error and produce circuits that beat human designs on area, power, and performance, and they do so faster than human engineers can.
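Dally does not name these systems or their reward functions, so the sketch below stands in with a simple random-search loop over candidate gate sizings, scored by an invented weighted reward over area, power, and delay. It shows the trial-and-error shape of the approach, not the actual RL agent.

```python
# Minimal sketch, not Nvidia's tool: a trial-and-error loop that scores random
# candidate gate sizings on area, power, and delay with a weighted reward.
# The cost model and weights are invented for illustration.
import random

def evaluate(drive_strengths):
    # Crude stand-in for synthesis/timing feedback on a 5-stage path.
    area  = sum(drive_strengths)
    power = sum(s ** 1.5 for s in drive_strengths)
    delay = sum(1.0 / s for s in drive_strengths)
    return area, power, delay

def reward(area, power, delay, w=(1.0, 0.5, 4.0)):
    # Higher is better: penalise area, power, and delay with fixed weights.
    return -(w[0] * area + w[1] * power + w[2] * delay)

best, best_r = None, float("-inf")
for _ in range(10_000):
    candidate = [random.choice([1, 2, 4, 8]) for _ in range(5)]
    r = reward(*evaluate(candidate))
    if r > best_r:
        best, best_r = candidate, r

print("best drive strengths:", best, "reward:", round(best_r, 2))
```

A production system would replace the random sampler with a learned policy and the toy cost model with real synthesis, timing, and power analysis in the loop.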