
hotaru251 Yes, the future Nvidia wants, just like they wanted RTX to kick off, and how long did that take to actually get a handful of titles? Or DLSS 1.0, which was absolute trash? I live in the present, and the future has no impact on the now we live in. Reply
thestryker This huge push for shifting the rendering pipeline has everything to do with margins, and that's about it. Potentially being able to use less silicon and lower memory capacities on client video cards not only increases margins, but lowers the chance of them being used in place of enterprise parts. It seems to me that if they'd used a fixed-size tensor core setup for client parts, they'd have more room for raster/RT. Of course, this would most likely increase costs, and as we've seen over the last couple of generations, Nvidia is hyper-focused on margins. Reply
mynamehear The future is going to be wearing a barrel as your 401k and retirement fund is tapped out by the biggest fraud the world has ever seen: Huang and the AI grift. Reply
ekio This guy will say anything that raises the stock price and cash of his company. “Stop learning how to code, AI will do it for you.” “Stop rendering with CGI, AI will do it better.” “Stop buying decently priced gaming GPUs, it is only good margin; I love extremely huge margins only, so I prefer to sell them to Microsoft, who doesn’t have enough electricity to even turn them on.” Ok, the last one he didn’t actually say, he just thought it. Reply
bit_user hotaru251 said: I live in the present and the future has no impact on the now we live in. People buy GPUs with the expectation of keeping them for several years. So, one way to read his comments is that he's trying to sell people on buying Nvidia GPUs, given their dominance in AI. If you believe that AI will become so fundamental to games & game performance, then buying an Nvidia card would be the logical outcome. Reply
bit_user thestryker said: This huge push for shifting the rendering pipeline has everything to do with margins, and that's about it. Eh, but the DPI on monitors is higher than it used to be. Even going from a 27" 1440p monitor to a 32" 4k monitor left me with much smaller pixels. I absolutely don't need them rendered with the same degree of fidelity, especially at high framerates that are way beyond what monitors were capable of back when every pixel was lovingly rendered as a special and unique snowflake! The way I look at DLSS is basically just better scaling. I don't need PS5 games rendered at 4k for my TV, but I sure don't want to see conventional scaling artifacts, either. thestryker said: It seems to me that if they'd used a fixed-size tensor core setup for client parts, they'd have more room for raster/RT. Of course, this would most likely increase costs, and as we've seen over the last couple of generations, Nvidia is hyper-focused on margins. I don't understand this point. Reply
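For context on the pixel-density point in the post above, here's a minimal Python sketch that just computes pixels per inch (PPI) for the two monitors mentioned. The formula (diagonal pixel count divided by diagonal size in inches) is standard; the printed values are approximate.

```python
# Pixels per inch for the two monitors mentioned above:
# PPI = sqrt(width_px^2 + height_px^2) / diagonal_inches
from math import hypot

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return hypot(width_px, height_px) / diagonal_in

print(f'27" 1440p (2560x1440): {ppi(2560, 1440, 27):.0f} PPI')  # ~109 PPI
print(f'32" 4K (3840x2160):    {ppi(3840, 2160, 32):.0f} PPI')  # ~138 PPI
```

The jump from roughly 109 to roughly 138 PPI is why individual pixels get noticeably smaller even though the panel itself is larger.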
thestryker bit_user said: I don't understand this point. Tensor cores take up die space and unlike RT cores don't scale on client workloads. A 5090 has over 5x the Tensor cores of a 5060 which doesn't really do anything for it when it comes to gaming. I just assume a design with a fixed Tensor core count would, at the very least, end up being more expensive design-wise than just scaling up the same SM across the board. Reply
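As a rough illustration of the scaling thestryker is describing: on recent Nvidia designs, Tensor cores come as a fixed number per SM, so the total grows with SM count rather than being a separately sized block. The per-SM count and the SM counts in this sketch are assumptions used purely for illustration, not spec-sheet values.

```python
# Sketch: Tensor core count scales with SM count because each SM carries a fixed
# number of Tensor cores. All numbers here are illustrative assumptions.
TENSOR_CORES_PER_SM = 4  # assumed per-SM count for recent architectures

def tensor_cores(sm_count: int) -> int:
    return sm_count * TENSOR_CORES_PER_SM

for name, sms in [("flagship-class die", 170), ("midrange die", 50), ("entry die", 30)]:
    print(f"{name}: {sms} SMs -> {tensor_cores(sms)} Tensor cores")
```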
bit_user thestryker said: Tensor cores take up die space and unlike RT cores don't scale on client workloads. A 5090 has over 5x the Tensor cores of a 5060 which doesn't really do anything for it when it comes to gaming. But more tensor cores should mean you can run DLSS at higher resolutions & frame rates, in less time? Also, the more they lean on neural rendering (e.g. neural texture compression), the more fundamental Tensor cores will become for gaming performance. Finally I believe Tensor cores are now used in their raytracing pipeline. So, you also want their number to scale with RT cores. Hence, their current architecture pretty much makes sense to me. Sure, the ratio of CUDA to Tensor cores might not be optimal, but I think it's not off by a factor that would massively impact other GPU functions. Reply
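One way to see why faster Tensor cores could matter for the resolution/framerate ceiling, as bit_user argues above: the upscaling pass is roughly a fixed cost per frame, so shaving it down raises the maximum achievable frame rate. A minimal sketch, with made-up render and DLSS times purely for illustration:

```python
# Rough frame-budget arithmetic: render time + upscale time bounds the frame rate.
# All millisecond figures are assumptions, not measurements.
def max_fps(render_ms: float, upscale_ms: float) -> float:
    return 1000.0 / (render_ms + upscale_ms)

render_ms = 8.0  # assumed shading/RT cost at the internal resolution
for upscale_ms in (2.0, 1.0, 0.5):  # assumed DLSS cost on slower -> faster Tensor hardware
    print(f"upscale {upscale_ms:.1f} ms -> at most {max_fps(render_ms, upscale_ms):.0f} fps")
```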
thestryker bit_user said: But more tensor cores should mean you can run DLSS at higher resolutions & frame rates, in less time? Maybe? There's no uplift difference between any card, so it would be hard to say unless one could read the specific load. bit_user said: Finally I believe Tensor cores are now used in their raytracing pipeline. So, you also want their number to scale with RT cores. It's not part of the RT pipeline itself, but it's used for ray reconstruction if that's turned on. This isn't a particularly heavy workload and has only been around for a couple of years or so. bit_user said: Also, the more they lean on neural rendering (e.g. neural texture compression), the more fundamental Tensor cores will become for gaming performance. Which is of questionable benefit to people playing games. The only reason they'd keep putting more and more on Tensor cores is if they're not being fully utilized. The Tensor to shader ratio has been the same since the 30 series. Reply
bit_user thestryker said: Maybe? There's no uplift difference between any card, so it would be hard to say unless one could read the specific load. You might find this interesting. It shows the difference in DLSS execution time for specific resolutions, on different GPUs of the same and different generations. https://forums.developer.nvidia.com/t/frame-generation-performance-execution-time/263821 There are marked improvements between tiers within a given generation (as well as between the same tier of different generations). I think that cements the idea that more/better Tensor cores reduce DLSS execution time, thereby enabling higher resolution/framerate combinations. thestryker said: It's not part of the RT pipeline itself, but it's used for ray reconstruction if that's turned on. This isn't a particularly heavy workload and has only been around for a couple of years or so. How do you know how heavy it is? FWIW, AMD made it clear how much AI can aid not only ray sampling but also denoising. Source: https://www.techpowerup.com/review/amd-radeon-rx-9070-series-technical-deep-dive/3.html thestryker said: Which is of questionable benefit to people playing games. The only reason they'd keep putting more and more on Tensor cores is if they're not being fully utilized. The Tensor to shader ratio has been the same since the 30 series. I'll admit I was skeptical of neural texture compression when Nvidia first started talking about it. However, upon digging into the research, I was quickly convinced of the benefits in reducing the memory footprint & bandwidth requirements of textures. You might be right that Nvidia is only doing it because they have these otherwise idle resources at hand, but that doesn't mean there's no value in harnessing them. While you can argue over Nvidia's die allocation between CUDA cores and Tensor cores, what's a lot harder to argue with is the economics of increasing memory capacity and bandwidth. If neural texture compression helps there (and I think it's clear that it does), then it's another way the Tensor cores are adding real value in gaming. I actually liked AMD's approach of trying to harness as much of the shading pipeline as possible in its WMMA implementation. However, we're at a crossroads where even AMD can't deny how critical AI will be to the future of gaming. Hence its collaboration with Sony, which we know has already resulted in the creation of a new class of compute unit for both the PS6 and next-gen AMD GPUs: Neural Arrays. https://www.tomshardware.com/video-games/console-gaming/sony-and-amd-tease-likely-playstation-6-gpu-upgrades-radiance-cores-and-a-new-interconnect-for-boosting-ai-rendering-performance Reply
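To make the memory-economics point above concrete, here's a back-of-the-envelope sketch of texture footprint with and without neural compression. The bits-per-texel figures are assumptions chosen only to illustrate the direction of the saving; real ratios depend on the material and quality target.

```python
# Approximate VRAM footprint of a single 4K texture layer under two assumed
# compression rates. Figures are illustrative assumptions, not NTC benchmarks.
def texture_mib(width: int, height: int, bits_per_texel: float) -> float:
    return width * height * bits_per_texel / 8 / (1024 * 1024)

w = h = 4096
print(f"block-compressed (~8 bits/texel assumed):  {texture_mib(w, h, 8.0):.1f} MiB")
print(f"neural-compressed (~2 bits/texel assumed): {texture_mib(w, h, 2.0):.1f} MiB")
```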