
In Intel's own testing, the company compared its variant A and variant B texture compression, using BC1, against an industry-standard format that uses three BC1 textures plus one BC3 texture. Variant A achieved over a 9x compression ratio and variant B an 18x ratio, whereas the industry-standard format managed only 4.8x.
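To put those ratios in perspective, here's a quick back-of-the-envelope sketch in Python. The 1 GiB uncompressed texture set is an assumed starting point for illustration, not a figure from Intel's testing:

```python
# Back-of-the-envelope footprint math using the ratios from Intel's testing.
# The 1 GiB (1024 MiB) uncompressed texture set is an assumption, chosen
# only to make the relative savings concrete.
UNCOMPRESSED_MIB = 1024.0

ratios = {
    "3xBC1 + 1xBC3 (industry standard)": 4.8,
    "Intel variant A ('over 9x', so a floor)": 9.0,
    "Intel variant B": 18.0,
}

for name, ratio in ratios.items():
    print(f"{name:40s} -> ~{UNCOMPRESSED_MIB / ratio:6.1f} MiB")
```

In other words, for the same source data, variant B would need roughly a quarter of the memory the standard BC formats do.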
With variant B, Intel's new texture compression tech achieves almost the same compression ratios as Nvidia's own neural texture compression. It remains to be seen whether Nvidia's or Intel's solution provides better quality, but Intel is the only one of the three major Western GPU manufacturers with a solution that works on graphics cards besides its own (for now).
Aaron Klotz is a contributing writer for Tom's Hardware, covering news related to computer hardware such as CPUs and graphics cards.
JarredWaltonGPU

This is all well and good, but what we really need is for games to actually use these technologies. And considering market share, that's a tough row to hoe for Intel with game devs. We also need to see the resulting quality and performance. Anyway, I keep waiting to see widespread use of NTC or cooperative vectors… and I've been waiting for over a year now. The fact is, there are many GPUs that could have benefited greatly from better texture compression YEARS ago (looking at you, all those NVIDIA 8GB GPUs…). The pessimist in me says we'll finally start to see it as a relatively common feature in game engines and games in about five years.
usertests

JarredWaltonGPU said: This is all well and good, but what we really need is for games to actually use these technologies. […]

That Xbox Helix presentation advertising neural texture compression associates it with next-gen consoles. The PS6 should use the same or similar RDNA 5 as Helix, and is almost guaranteed to use NTC. Doing so will free up memory to spend on things like NPC LLM dialogue, if developers so choose. So it could appear in games as soon as the new consoles launch around 2027-2028, but it won't become mandatory in most new AAA titles for a while, maybe matching your pessimistic five-year guess. Though if Intel has figured out how to make it work on older GPUs without tensor/XMX units, that could help. It could prolong the lifespan of the RTX 5060, the 9060 XT 8GB, and similar cards even as the industry moves on from 8GB. It could also lower game install sizes, which would be helpful now that SSD prices are skyrocketing. It just won't be helping anyone in 2026.
rluker5

JarredWaltonGPU said: This is all well and good, but what we really need is for games to actually use these technologies. […]

That's why: "The GPU maker also announced it will have two versions of the tech for different hardware, similar to XeSS. One will be tuned for its XMX engine while the other will be designed to run on traditional CPU and GPU cores at the expense of performance." Intel will be making two versions: one for Arc, and one for GPUs like the 3080, 5090, 6800, and 9070 XT that aren't getting the bespoke latest software packages artificially restricted to the newest architectures. Kind of like XeSS DP4a, FSR, and FSR frame gen. If Intel gets it into a couple of games, Nvidia won't sit by being one-upped by some little peanut; it will be in more games. Hopefully the techniques are similar enough that swapping methods, like the ones we have for upscalers, is possible.
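As an aside, here's a minimal sketch of how such an XMX-versus-fallback split could be dispatched at runtime. Everything in it (GpuCaps, pick_decompression_path, the path names) is hypothetical for illustration; Intel hasn't published its actual API:

```python
# Hypothetical dispatch sketch, NOT Intel's real API. It only illustrates
# the XeSS-style dual-path design described above: an XMX-accelerated
# kernel on Arc, and a generic fallback everywhere else.
from dataclasses import dataclass

@dataclass
class GpuCaps:
    has_xmx: bool    # Intel XMX matrix engines (Arc GPUs)
    has_dp4a: bool   # packed int8 dot products (most modern GPUs)

def pick_decompression_path(caps: GpuCaps) -> str:
    """Choose the fastest available decode path."""
    if caps.has_xmx:
        return "xmx"          # fast path tuned for XMX engines
    if caps.has_dp4a:
        return "generic-gpu"  # fallback on ordinary shader cores, slower
    return "cpu"              # Intel says the fallback also covers CPU cores

# e.g. an RTX 3080: no XMX, but DP4a-capable shader cores
print(pick_decompression_path(GpuCaps(has_xmx=False, has_dp4a=True)))
```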
JarredWaltonGPU

rluker5 said: That's why: "The GPU maker also announced it will have two versions of the tech for different hardware, similar to XeSS." […]

Yes, but it's still Intel tech. Why would the ~90% of users with NVIDIA GPUs want to use a potentially inferior algorithm that runs worse on their GPUs than NVIDIA's NTC? That's the mindset most game devs will have, mind you, not my own take on things. It's why DLSS gets used more than FSR and XeSS combined. We see the same thing with existing XeSS, incidentally. DLSS looks best for upscaling, then maybe FSR4, then XeSS 2.x (possibly 1.3), then XeSS 2/1.3 running in DP4a mode, then earlier XeSS (maybe), then FSR3/2 (again, maybe), then FSR1, and then junk like Unreal Engine's temporal upscaling, which seriously looks terrible and ghosts everywhere. XeSS in DP4a mode does work on non-Intel cards, but DLSS works better on NVIDIA and FSR4 works better on AMD. Being a distant third in market share (still less than 1%, I think?) makes Intel's GPU tech a very difficult sell to developers. To gain market share and technology adoption, you can't just be similar to NVIDIA; you have to be provably better than NVIDIA by a large amount (meaning, not just ~10% better). Otherwise, game devs will pick NVIDIA first (unless paid to adopt competing techs from AMD/Intel). It's the natural result of having a virtual monopoly. And unfortunately, AMD and Intel haven't proven capable of beating NVIDIA; at best, they might match it. Thus the cycle continues…
Gururu

Why are we complaining that Intel is offering something to Arc users? Sounds good to me and adds value to my purchase.
usertests

JarredWaltonGPU said: To gain market share and technology adoption, you can't just be similar to NVIDIA; you have to be provably better […]

In this case Nvidia has bucked its usual trend and open-sourced its implementation: https://github.com/NVIDIA-RTX/RTXNTC The oldest GPUs that the NTC SDK functionality has been validated on are the NVIDIA GTX 1000 series, AMD Radeon RX 6000 series, and Intel Arc A series. The Neural Texture Compression train is full steam ahead with no vendor lock-in required.
JarredWaltonGPU

usertests said: In this case Nvidia has bucked its usual trend and open-sourced its implementation […]

Yeah, that's the weird bit. NTC uses cooperative vectors and should, in theory, be part of DirectX now. So why are no games even bothering to try using it? Like, not a single one so far. Makes me wonder if there are some unexpected issues, or if there's something else slowing the attempted adoption.
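For readers unfamiliar with what cooperative vectors actually accelerate: NTC-style decoding runs a small neural network per texture sample, which boils down to a few tiny matrix multiplies. Here's a conceptual NumPy sketch under that assumption; the random weights stand in for a trained network, and the shapes are purely illustrative, not the SDK's real ones:

```python
# Conceptual sketch of a neural texture decode step: a low-res latent grid
# plus a tiny MLP evaluated per sample. Weights are random stand-ins for a
# trained network; sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

latents = rng.standard_normal((64, 64, 8)).astype(np.float32)   # 8 features/texel
w1 = (rng.standard_normal((8, 16)) * 0.1).astype(np.float32)    # hidden layer
w2 = (rng.standard_normal((16, 4)) * 0.1).astype(np.float32)    # -> RGBA

def decode_texel(u: float, v: float) -> np.ndarray:
    """Fetch the nearest latent and run the tiny MLP for one sample."""
    x = latents[int(v * 63), int(u * 63)]   # nearest-neighbor latent fetch
    h = np.maximum(x @ w1, 0.0)             # ReLU hidden layer
    return h @ w2                           # decoded RGBA for this sample

print(decode_texel(0.5, 0.5))
```

Cooperative vectors exist to map exactly those small matrix products onto tensor/XMX-style hardware from inside a shader, which is why fallback paths that run the same math on plain shader cores pay a performance penalty.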
rluker5

JarredWaltonGPU said: Yes, but it's still Intel tech. Why would the ~90% of users with NVIDIA GPUs want to use a potentially inferior algorithm […]

What do you think the odds are that neural compression makes it back to the 5000, 4000, 3000, and 2000 series, respectively? 50%? When frame gen came out for Nvidia GPUs, 90% of them didn't get it. When multi frame gen came out, again 90% didn't get it. How about FSR4? I have a 3080, a 9070 XT, and a B580, and I'm guessing my likeliest scenario for using neural compression with the AMD and Nvidia cards is through something Intel has put in there. I'll be happier if Nvidia and AMD have their own implementations for my cards, because those would probably be better, but I'm not counting on it. Those companies have money to make. And DLSS was used more than FSR and XeSS combined before those other two were widespread, but not last year: https://youtu.be/oKbYwg3qLJo?t=555

JarredWaltonGPU said: To gain market share and technology adoption, you can't just be similar to NVIDIA […]

Intel also has to appease its large laptop base. With the limited bandwidth of iGPUs, Intel has more to gain by implementing neural compression, it has a reasonably sized userbase, and Nvidia is currently locked out of the iGPU market. Intel has really made a move there over the last couple of years: https://youtu.be/gkEiRcxA2kM?t=887 It would be nice to know what percentage of game sales and Game Pass subscriptions goes to iGPU vs. dGPU laptops, to determine whether it's worth developers' while to get the Intel implementations into their games. I'm just using those videos for the pictures at the timestamps I selected, but they are informative.