
Luke James, Contributor. Luke James is a freelance writer and journalist. Although his background is in legal, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
IntelUser2000 A GPU based on open source? I'm all for it. But don't expect a 12nm chip to be significantly faster just because it moves to, say, 5nm. When people say Moore's Law is dying, they mean that the era of simply porting a design to a new node and reaping the benefits is gone. It takes significant work to get the full advantage of a new node. Nvidia and AMD have already done that hard work and know this. It's very unlikely a startup can exploit newer nodes the way established players do. Even if you have a potential winner, the actual winner is determined by the details, such as optimizing and fine-tuning the design. And of course timing is important, since fine-tuning and optimization is potentially half the work and can throw off your schedule. Often the "traditional" players keep winning because they ship new products like clockwork, while a startup's advantages are lost to delays, which Bolt Graphics is already suffering with the slip to 2027. Reply
bit_user The article seems to lack a source link. I'm guessing it's based on this press release: https://www.prnewswire.com/news-releases/bolt-graphics-completes-tape-out-of-test-chip-for-its-high-performance-zeus-gpu-a-major-milestone-in-reducing-computing-costs-by-17x-302750442.html IntelUser2000 said: GPU based on open source? I'm all for it. Huh? Specifically what is the "open source" claim and where did you see it? IntelUser2000 said: But don't expect a 12nm chip to be significantly faster if it moves to say 5nm. Nvidia's RTX 2000 series was built on a 12nm node; the RTX 4000 series was built on an N5-family node. Want to guess which one is faster? It's obviously not going to be the exact same chip on N5 that they're taping out on 12 nm. The 5 nm version will certainly be scaled up and run at higher clock speeds. IntelUser2000 said: Even if you have a potential winner, the actual winner is determined by the details, such as optimizing and fine tuning the thing. And of course timing is important as fine tuning and optimization is potentially half the work, and can throw off your schedule. Oftentimes the "traditional" ones continue on because they keep making new products like clockwork, whereas a startup's advantages are lost due to delays, which Bolt graphics is already suffering with the delay to 2027. Yeah, time-to-market is a serial killer of promising chip startups. By the time Bolt gets a chip to market on a remotely competitive node, they'll have missed their market window. Even if their current tech is competitive against an RTX 5090, their final production silicon will have to face an RTX 6090 (or later). The same story repeats itself over and over. That's why it's so hard to displace the big CPU and GPU makers. Reply
thestryker bit_user said: Yeah, time-to-market is a serial killer of promising chip startups. By the time Bolt gets a chip to market that's on a remotely competitive node, they'll have missed their market window. Even if their current tech is competitive against a RTX 5090, their final production silicon will have to face a RTX 6090 (or later). Honestly, they're fine (with this timeline) for the market they're aiming at. There's no chance Nvidia is going to design anything to compete as long as the AI money is flowing. I think the biggest problem they're facing (at least in the non-HPC markets) is going to be the software side of things. For example, with rendering it wouldn't matter if they were double the speed of whatever is out at the time unless their software is rock solid. None of this is to say that sooner isn't better, because it absolutely is, just that their niche is pretty safe as long as they don't stumble. Reply
bit_user thestryker said: Honestly they're fine for the market they're aiming at. There's no chance nvidia is going to design anything to compete as long as the ai money is flowing. Nvidia will eventually release RTX 6090 (and the corresponding workstation cards). Contrary to popular belief, Nvidia hasn't stopped development on non-AI software. Last year, they updated their non-realtime raytracing library to take advantage of the Shader Execution Reordering (SER) in the RTX 5090: https://developer.nvidia.com/rtx/ray-tracing/optix Reply
thestryker bit_user said: Nvidia will eventually release RTX 6090 (and the corresponding workstation cards). Do you really think the 6090 is going to have at minimum double the RT capability of the 5090? I sure don't. There are too many other things Nvidia needs its parts to do that Bolt isn't even looking at. Reply
bit_user thestryker said: Do you really think the 6090 is going to have a minimum double the RT capability of the 5090? I sure don't. Sure, I could see them pulling out another 2x in RT performance between efficiency enhancements, frequency, and more SMs. Don't forget that they'll be on a new node. Also, they're continuing to enhance ReSTIR: https://research.nvidia.com/labs/rtr/publication/lin2026restirptenhanced/ thestryker said: There's too many other things that nvidia needs their parts to do that Bolt isn't even looking at. They're a big company. They can walk and chew gum at the same time. Now that they have Groq, they aren't as dependent on their client GPUs for inference acceleration. https://www.nvidia.com/en-us/data-center/lpx/ Reply
thestryker bit_user said: Also, they're continuing to enhance ReSTIR Yeah, I know; that's why I said 2x instead of more. bit_user said: Sure, I could see they pulling out another 2x in RT performance, between efficiency enhancements, frequency, and more SMs. Don't forget that they'll be on a new node. That would be a first since Turing to Ampere, but I like your optimism. Reply
bit_user thestryker said: That would be a first since Turing to Ampere, but I like your optimism. No, both Ada and Blackwell had further improvements to the RT cores. That's why Nvidia still leads on RT performance here, in spite of all of RDNA4's improvements. I'm sure they're not done yet. Reply