
From a carmaker's point of view, automotive requirements arguably make this cadence easier, not harder: long lifecycles, determinism, and ISO 26262 functional safety push designs toward very conservative evolution and locked interfaces. Given overlapping development (multiple generations in flight at once), vertical integration, and a single internal customer, Tesla could plausibly sustain this cadence.
Meanwhile, the 'highest-volume AI chips' claim suggests we are dealing with processors meant for deployment across millions of vehicles, a far higher unit volume than data-center AI accelerators.
Assuming that Musk's Tesla has enough chip designers (which it probably does not, given the calls for applicants in the post), the real bottleneck for the assumed nine-month cycle will be verification, safety cases, and software stability, not silicon design itself.
Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.
valthuer
This actually makes a lot more sense than people think, if you read it as an iteration cadence rather than a clean-sheet race against Nvidia. A nine-month cycle is realistic when you control the full stack, reuse a stable architecture, and operate under automotive constraints that force conservative evolution. ISO 26262, determinism, long lifecycles, and locked interfaces don't slow iteration; they shape it into predictable, platform-based progress. Nvidia wins by optimizing for peak performance and software dominance in data centers. Tesla is optimizing for deployment at scale, safety-certified silicon, and vertical integration across millions of endpoints. Those are fundamentally different goals, metrics, and bottlenecks.
Also worth noting: "highest-volume AI chips" isn't marketing fluff, it's math. Shipping a chip into millions of vehicles dwarfs data-center volumes by orders of magnitude, even if each unit is less exotic. The real challenge here isn't design speed; it's verification, safety cases, and long-term software stability. If Tesla can keep those pipelines parallelized, this cadence isn't reckless at all. It's simply playing a different game on a different axis. Whether it beats Nvidia is the wrong question. The real question is whether anyone else can even attempt this model.
bit_user
How does it even make sense to release new hardware generations faster than TSMC or Samsung can bring up new nodes?
bit_user
valthuer said: "if you read it as an iteration cadence, not a clean-sheet race against Nvidia"
Nvidia doesn't start from a clean sheet, either.
valthuer said: "Tesla is optimizing for deployment at scale"
Nvidia deploys at even larger scale!
valthuer said: "Shipping a chip into millions of vehicles dwarfs data-center volumes by orders of magnitude, even if each unit is less exotic."
Nvidia's 2025 revenues were $130.5B, and more than 90% of that revenue is from data-center products. If the average selling price of their data-center GPUs is $30k, then that's nearly 4M data-center GPUs shipped last year, about double Tesla's vehicle sales volume. So, I'd say Nvidia is easily deploying at the same or greater volume.
valthuer said: "The real challenge here isn't design speed; it's verification, safety cases, and long-term software stability."
I disagree. You develop a test suite to perform that verification on one chip. Then, it should be usable, with relatively few changes, on the next generation.
valthuer said: "The real question is whether anyone else can even attempt this model."
Not sure why you worship Elon, but it feels to me like you take his words and then try to spin them as some kind of divine wisdom. I wish you'd cite even one source to back up any of your claims.
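The volume estimate in the comment above is easy to sanity-check. The sketch below uses the figures quoted in the comment itself (FY2025 revenue of $130.5B, ~90% data-center share, an assumed $30k average selling price) plus a rough Tesla annual-delivery figure of about 1.8M vehicles, which is an assumption for illustration, not a verified number:

```python
# Back-of-envelope check of the data-center GPU volume estimate.
# All inputs are the figures quoted in the comment, plus an assumed
# Tesla delivery count; none are independently verified here.
revenue = 130.5e9        # Nvidia FY2025 revenue, USD (as quoted)
dc_share = 0.90          # data-center share of revenue (as quoted)
asp = 30_000.0           # assumed average selling price per GPU, USD

dc_gpus = revenue * dc_share / asp     # implied unit volume
tesla_vehicles = 1.8e6                 # rough annual Tesla deliveries (assumption)

print(f"Implied data-center GPUs shipped: {dc_gpus / 1e6:.2f}M")
print(f"Ratio vs. Tesla vehicle volume:   {dc_gpus / tesla_vehicles:.1f}x")
```

Under these assumptions the arithmetic comes out to just under 4M units, roughly twice the assumed Tesla vehicle volume, which is what the comment is arguing.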
valthuer
Fair points. Let's break it down:
Iteration vs. clean sheet: You're correct that Nvidia doesn't start from a fully clean sheet each generation. The distinction I was trying to make is that Tesla's nine-month cadence relies on leveraging a single architecture across multiple vehicle-focused generations, which is a different operational constraint than Nvidia's data-center-focused cadence. The difference isn't "clean sheet or not"; it's how overlapping development, vertical integration, and safety requirements shape the iteration.
Deployment scale: While Nvidia ships millions of GPUs, Tesla's scale is different in nature. Each vehicle's chip must pass automotive functional safety (ISO 26262) and scenario-based testing, which dramatically increases verification and software complexity compared to a data-center GPU. So "scale" here isn't just units; it's units with extremely strict safety-critical guarantees.
Verification and software: You're right that test suites can be reused, but safety-critical systems are less forgiving than data-center workloads. Even small architectural changes often require full re-certification and scenario-based validation. That's why the bottleneck shifts from design to verification and safety assurance, not silicon tape-out.
Context over hype: I'm not "worshipping" Elon. The point is to highlight that Tesla's approach is fundamentally different from Nvidia's, not to claim divine insight. Sources for this reasoning include the ISO 26262 functional safety standard, standard ADAS/autonomous-vehicle development practices, and industry reporting on Tesla's chip design cycles (as in the Tom's Hardware article).
In short: it's not about being bigger or better, it's about different design constraints and goals. Tesla aims for high-volume, safety-critical, vertically integrated deployment; Nvidia optimizes for peak performance in data-center environments. Apples and oranges, but both impressive in their own domain.
bit_user
valthuer said: "Fair points..."
You still didn't cite any sources, much less credible ones. Why should anyone believe a single one of your points?
thisisaname
It is easy to say, but the proof is in the doing.