
In Intel's own testing, the company compared variant A and variant B of its texture compression, both built on BC1, against an industry-standard baseline that packs four textures as three BC1 textures plus one BC3 texture. Variant A achieved over a 9x compression ratio and variant B an 18x ratio, while the industry-standard format managed only 4.8x.
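For context, the baseline figure follows from the formats' fixed block rates. Here's a minimal sketch assuming the four source textures are 24-bit RGB (3 bytes per texel), with BC1 at its fixed 4 bits per texel and BC3 at 8 bits per texel; the choice of RGB source textures is our assumption, not something Intel stated:

```python
# Back-of-the-envelope check of the 4.8x baseline ratio quoted above.
# Assumed input: four 24-bit RGB source textures (3 bytes per texel).
RGB8_BYTES = 3.0      # uncompressed bytes per texel (assumption)
BC1_BYTES = 4 / 8     # BC1 is a fixed 4 bits per texel
BC3_BYTES = 8 / 8     # BC3 is a fixed 8 bits per texel

uncompressed = 4 * RGB8_BYTES               # 12.0 bytes per texel across four textures
compressed = 3 * BC1_BYTES + 1 * BC3_BYTES  # 2.5 bytes per texel for 3xBC1 + 1xBC3

ratio = uncompressed / compressed
print(f"baseline compression ratio: {ratio:.1f}x")  # prints 4.8x
```

Under those assumptions the arithmetic lands exactly on the 4.8x figure; by the same accounting, Intel's 9x and 18x variants would correspond to roughly 1.33 and 0.67 bytes per texel for the same four-texture set.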
Intel's new texture compression tech achieves almost the same compression ratios as Nvidia's neural texture compression when using variant B. It remains to be seen whether Nvidia's or Intel's solution delivers better quality, but Intel is the only one of the three major Western GPU manufacturers with a solution that works on graphics cards besides its own (for now).
Aaron Klotz is a contributing writer for Tom's Hardware, covering news related to computer hardware such as CPUs and graphics cards.
JarredWaltonGPU This is all well and good, but what we really need is for games to actually use these technologies. And considering market share, that's a tough row to hoe for Intel with game devs. We also need to see the resulting quality and performance. Anyway, I keep waiting to see widespread use of NTC or cooperative vectors… and I've been waiting for over a year now. The fact is, there are many GPUs that could have benefited greatly from better texture compression YEARS ago (looking at you, all those NVIDIA 8GB GPUs…) The pessimist in me says we'll finally start to see it as a relatively common feature in game engines and games in about five years. Reply
usertests JarredWaltonGPU said: This is all well and good, but what we really need is for games to actually use these technologies. And considering market share, that's a tough row to hoe for Intel with game devs. We also need to see the resulting quality and performance. Anyway, I keep waiting to see widespread use of NTC or cooperative vectors… and I've been waiting for over a year now. The fact is, there are many GPUs that could have benefited greatly from better texture compression YEARS ago (looking at you, all those NVIDIA 8GB GPUs…) The pessimist in me says we'll finally start to see it as a relatively common feature in game engines and games in about five years. That Xbox Helix presentation advertising neural texture compression associates it with next-gen consoles. PS6 should be using same/similar RDNA5 as Helix, and almost guaranteed to be using NTC. Doing this will free up memory to spend on things like NPC LLM dialogue, if they so choose. So it could appear in games starting as soon as new consoles launch around 2027-2028. But it won't become mandatory in most new AAA titles for a while, maybe your 5 year pessimistic guess. Though if Intel has figured out how to make it work on older GPUs without tensor/XMX, that could help. It could prolong the lifespan of RTX 5060, 9060 XT 8 GB and similar cards even as the industry moves on from 8 GB. It could also lower game install sizes, which would be helpful now that SSD prices are skyrocketing. It just won't be helping anyone in 2026. Reply
rluker5 JarredWaltonGPU said: This is all well and good, but what we really need is for games to actually use these technologies. And considering market share, that's a tough row to hoe for Intel with game devs. We also need to see the resulting quality and performance. Anyway, I keep waiting to see widespread use of NTC or cooperative vectors… and I've been waiting for over a year now. The fact is, there are many GPUs that could have benefited greatly from better texture compression YEARS ago (looking at you, all those NVIDIA 8GB GPUs…) The pessimist in me says we'll finally start to see it as a relatively common feature in game engines and games in about five years. That's why: " The GPU maker also announced it will have two versions of the tech for different hardware, similar to XeSS. One will be tuned for its XMX engine while the other will be designed to run on traditional CPU and GPU cores at the expense of performance." Intel will be making 2 versions. One for Arc, and one for GPUs like the 3080, 5090, 6800, 9070XT that aren't getting the bespoke latest software packages artificially restricted to the newest architectures. Kind of like XeSS Dp4A, FSR, FSR frame gen. If Intel gets it in a couple games Nvidia won't sit by being one upped by some little peanut, it will be in more games. Hopefully the techniques are similar enough that swapping methods like the ones we have for upscalers are possible. Reply
JarredWaltonGPU rluker5 said: That's why: " The GPU maker also announced it will have two versions of the tech for different hardware, similar to XeSS. One will be tuned for its XMX engine while the other will be designed to run on traditional CPU and GPU cores at the expense of performance." Intel will be making 2 versions. One for Arc, and one for GPUs like the 3080, 5090, 6800, 9070XT that aren't getting the bespoke latest software packages artificially restricted to the newest architectures. Kind of like XeSS Dp4A, FSR, FSR frame gen. If Intel gets it in a couple games Nvidia won't sit by being one upped by some little peanut, it will be in more games. Hopefully the techniques are similar enough that swapping methods like the ones we have for upscalers are possible. Yes, but it's still Intel tech. Why would the ~90% of users with NVIDIA GPUs want to use a potentially inferior algorithm that runs worse on their GPUs than NVIDIA's NTC? That's the mindset most game devs will have, mind you, not my own take on things. It's why DLSS gets used more than FSR and XeSS combined. We see the same thing with existing XeSS, incidentally. DLSS looks best for upscaling, then maybe FSR4, then XeSS 2.x (possibly 1.3), then XeSS 2/1.3 running in DP4a mode, then earlier XeSS (maybe), then FSR3/2 (again, maybe), then FSR1, and then junk like Unreal Engine's temporal upscaling — seriously, it looks terrible and ghosts everywhere. XeSS in DP4a does work on non-Intel, but DLSS works better on NVIDIA and FSR4 works better on AMD. Being a distant third in market share (still less than 1%, I think?) makes Intel's GPU tech a very difficult sell to developers. To gain market share and technology adoption, you can't just be similar to NVIDIA; you have to be provably better than NVIDIA, by a large amount (meaning, not just ~10% better). Otherwise, the game devs will pick NVIDIA first (unless paid to adopt competing techs from AMD/Intel). It's the natural result of having a virtual monopoly. 
And unfortunately, AMD and Intel haven't proven to be capable of beating NVIDIA; at best, they might match NVIDIA. Thus the cycle continues…. Reply
Gururu Why are we complaining that Intel is offering something to Arc users? Sounds good to me and adds value to my purchase. Reply
usertests JarredWaltonGPU said: To gain market share and technology adoption, you can't just be similar to NVIDIA; you have to be provably better than NVIDIA, by a large amount (meaning, not just ~10% better). Otherwise, the game devs will pick NVIDIA first (unless paid to adopt competing techs from AMD/Intel). It's the natural result of having a virtual monopoly. And unfortunately, AMD and Intel haven't proven to be capable of beating NVIDIA; at best, they might match NVIDIA. Thus the cycle continues…. In this case Nvidia has bucked their usual trend and open sourced their implementation: https://github.com/NVIDIA-RTX/RTXNTC The oldest GPUs that the NTC SDK functionality has been validated on are NVIDIA GTX 1000 series, AMD Radeon RX 6000 series, Intel Arc A series. The Neural Texture Compression train is full steam ahead with no vendor lock-in required. Reply
JarredWaltonGPU usertests said: In this case Nvidia has bucked their usual trend and open sourced their implementation: https://github.com/NVIDIA-RTX/RTXNTC The Neural Texture Compression train is full steam ahead with no vendor lock-in required. Yeah, that's the weird bit. NTC uses cooperative vectors and should in theory be part of DirectX now. So why are no games even bothering to try using it? Like, not a single one so far. Makes me wonder if there are some unexpected issues, or if there's something else that's slowing the attempted adoption. Reply
rluker5 JarredWaltonGPU said: Yes, but it's still Intel tech. Why would the ~90% of users with NVIDIA GPUs want to use a potentially inferior algorithm that runs worse on their GPUs than NVIDIA's NTC? That's the mindset most game devs will have, mind you, not my own take on things. It's why DLSS gets used more than FSR and XeSS combined. What do you think the odds are that neural compression makes it back to the 5000, 4000, 3000, 2000 series, respectively? 50%? When frame gen came out for Nvidia GPUs, 90% of them didn't get it. When multi frame gen came out, again 90% didn't get it. How about FSR4? I have a 3080, 9070 XT and a B580, and I'm guessing my likeliest scenario of using neural compression with the AMD and Nvidia cards is through something Intel has put in there. I'll be happier if Nvidia and AMD have their own implementations for my cards because they would probably be better, but I'm not counting on it. Those companies have money to make. And DLSS was used more than FSR and XeSS combined before those other two were widespread, but not last year: https://youtu.be/oKbYwg3qLJo?t=555 JarredWaltonGPU said: To gain market share and technology adoption, you can't just be similar to NVIDIA; you have to be provably better than NVIDIA, by a large amount (meaning, not just ~10% better). Otherwise, the game devs will pick NVIDIA first (unless paid to adopt competing techs from AMD/Intel). It's the natural result of having a virtual monopoly. And unfortunately, AMD and Intel haven't proven to be capable of beating NVIDIA; at best, they might match NVIDIA. Thus the cycle continues…. Intel also has to appease its large laptop base. With the limited bandwidth of iGPUs, Intel has more to gain by implementing neural compression; it has a reasonably sized userbase, and Nvidia is currently locked out of the iGPU market.
Intel has really made a move there over the last couple of years: https://youtu.be/gkEiRcxA2kM?t=887 It would be nice to know what percentage of game sales and Game Pass subscriptions goes to iGPU vs. dGPU laptops, to determine whether it's worth developers' while to get the Intel implementations into their games. And I'm just using those videos for the frames at the timestamps I selected, but they are informative. Reply
bit_user The article said: … GPUs without dedicated AI cores … Uh, Tensor cores are only "cores" in the very loosest sense. They're even less decoupled than the integer & floating point pipelines of an x86 CPU! Just like the x86-64 floating point unit, Tensor "cores" get instructions from the same stream that feeds the regular CUDA cores. In the x86-64 case, the FPU has its own registers, but Tensor cores operate on the CUDA registers, which is why I say they're less decoupled. I don't know how Intel's XMX cores compare in this sense. I think they probably still get instruction dispatch from a shared instruction stream, but I don't know if they have a dedicated register file. Anyway, without hardware acceleration of matrix or tensor operations, it's probably impractical to use neural texture compression as a drop-in replacement for standard textures. As the article said, you might be limited to using it just at install or loading time. Reply
bit_user JarredWaltonGPU said: This is all well and good, but what we really need is for games to actually use these technologies. And considering market share, that's a tough row to hoe for Intel with game devs. We need Direct3D to abstract neural texture compression in a similar way to how it's done for the conventional block-based methods. JarredWaltonGPU said: I keep waiting to see widespread use of NTC or cooperative vectors… Cooperative vectors is just a low-level mechanism, inside HLSL, that's needed for doing tensor or matrix ops in generic shader code. It's only a prerequisite for game devs doing tensor ops in their own shaders, not really for pre-packaged libraries. Otherwise, Nvidia wouldn't have been able to do DLSS or use tensor cores in various stages of ray tracing. usertests said: It could prolong the lifespan of RTX 5060, 9060 XT 8 GB and similar cards even as the industry moves on from 8 GB. If I understand RDNA 4 correctly, the problem with using NTC on something like a RX 9060 XT might be that it ties up too many shader compute resources. So, using it might not be worth the tradeoff in frame rates. For as closely coupled as Tensor and CUDA cores are, I think RDNA 4's WMMA hardware is even more closely intertwined – to the point of really not being viewable as a separate engine. In other words, I think execution time of WMMA and normal vector operations is zero sum. JarredWaltonGPU said: So why are no games even bothering to try using it? Like, not a single one so far. Makes me wonder if there are some unexpected issues, or if there's something else that's slowing the attempted adoption. It affects their development pipeline and assets. To use it, game developers have to update their development workflow and add NTC-compressed textures and models to their distributable packages. Furthermore, it's only practical on leading-edge hardware, and it requires a separate QA pass just to make sure it doesn't break anything or tank your 1% lows. Reply