
That said, if a card has an extremely low baseline frame rate in the GPU Hierarchy, upscaling isn’t going to magically transform it into a speed demon. Doubling or tripling a low frame rate can still result in only a borderline level of performance on the other end. Really old or really slow cards might not even have enough spare compute resources to run an upscaler in addition to the basic render loop at all.
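To make the point concrete, here’s a toy calculation; the baseline frame rate and speedup factor are hypothetical round figures, not measurements of any particular card or upscaler.

```python
# Illustrative arithmetic only: the baseline frame rate and speedup factor are
# hypothetical round numbers, not measured upscaler gains for any real card.
def upscaled_fps(baseline_fps, speedup):
    """Project an output frame rate from a native baseline and an assumed speedup."""
    return baseline_fps * speedup

# Even a generous 2x gain turns 15 fps into just 30 fps, and the upscaler
# itself still needs spare compute headroom to run at all.
projected = upscaled_fps(15.0, 2.0)
```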
Frame generation is the other modern marvel of gaming performance, but we’re also excluding it from our hierarchy data. Unlike upscaling, turning on framegen has real costs. It usually introduces a large input latency penalty, and if that penalty exceeds an acceptable threshold, it has to be compensated for elsewhere, whether by adjusting upscaling or quality settings, and that in turn can compromise image quality.
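A back-of-the-envelope model shows why raw framegen output numbers can mislead. The buffering assumption here (one extra base frame of delay for 2x interpolation) is our simplification for illustration, not a vendor-published figure:

```python
# A simplified latency model: an assumption for illustration, not vendor data.
# Premise: 2x frame interpolation must hold back one fully rendered frame to
# interpolate toward it, adding roughly one base frame-time of display delay.
def framegen_stats(base_fps, multiplier=2):
    """Return (displayed fps, added latency in ms) under the model above."""
    base_frametime_ms = 1000.0 / base_fps
    output_fps = base_fps * multiplier
    added_latency_ms = base_frametime_ms  # one buffered real frame
    return output_fps, added_latency_ms

# A 30 fps baseline shows 60 fps with 2x framegen, but in this model the player
# still pays an extra ~33 ms of delay on top of the usual render pipeline.
shown_fps, extra_ms = framegen_stats(30.0)
```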
In short, just because a card is producing a large number of output frames with framegen enabled, it doesn’t mean it’s providing a playable or enjoyable experience. We view frame generation as a cherry on top of an already solid gaming experience, not a fundamental method of achieving good baseline performance, and so it has no place in our hierarchy testing.
With limited exceptions, we rely on our own custom benchmark sequences captured directly from gameplay using Nvidia’s FrameView utility rather than scripted benchmarks. Sitting back and watching a non-interactive, disembodied camera float through a scene at a fixed rate of motion might be perfectly repeatable, but it doesn’t capture how it “feels” to play a given game on a given graphics card and system. That feel is a function of low input latency and smooth frame delivery. To meaningfully comment on those matters requires trained eyes on a monitor and hands on the mouse and keyboard, full stop.
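The post-processing that turns a frame-time capture into headline numbers can be sketched in a few lines. This assumes the per-frame present intervals have already been pulled out of the capture tool’s log; FrameView’s actual CSV layout isn’t reproduced here:

```python
import statistics

# A minimal sketch of the post-processing behind headline numbers. Input is a
# list of per-frame present intervals in milliseconds; this assumes they have
# already been extracted from the capture tool's CSV log.
def summarize_frametimes(frametimes_ms):
    """Return (average fps, 1%-low fps) for a capture."""
    total_seconds = sum(frametimes_ms) / 1000.0
    avg_fps = len(frametimes_ms) / total_seconds
    # "1% low": the frame rate implied by the slowest 1% of frames.
    slowest = sorted(frametimes_ms, reverse=True)
    count = max(1, len(slowest) // 100)
    low_1pct_fps = 1000.0 / statistics.mean(slowest[:count])
    return avg_fps, low_1pct_fps

# 99 smooth frames plus one 50 ms hitch: the average barely moves, but the
# 1% low exposes the stutter.
avg_fps, low_fps = summarize_frametimes([16.7] * 99 + [50.0])
```

This is why averages alone don't tell the story: a single long hitch barely dents the mean but shows up immediately in the 1% low.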
Furthermore, a scripted benchmark might not even be representative of performance in a title’s core gameplay activity, whether that’s running across a battlefield and shooting bad guys, driving around the Nurburgring, or scrolling across a map in a 4X title. Those activities might be more boring than a free camera swooping through a scripted battle, but if that’s what the player is going to experience directly, that’s what we want to measure.
Limiting ourselves to games with built-in benchmarks also ties our hands in the event that a major title doesn’t have one. We don’t want to let that stand in the way of commenting on performance from a hit or influential title.
This is by far the most time- and labor-intensive way to benchmark gaming performance, but it means you can trust that all of the output of our cards under test has been evaluated by expert human eyes, not just generated blindly from an automated run, transferred from a log file into a spreadsheet, and regurgitated without further inquiry. When we say a graphics card is fast, smooth, and responsive, we know it and mean it.
We choose in-game benchmark sequences of about 60 seconds in length based on our years of experience as members of the media and as part of game testing labs at large GPU companies. We want a scene to show as many elements of a game’s eye candy as possible, from shadows and reflections to complex geometry to objects and terrain near and far. Blank walls occupying the entire viewport need not apply.
We try to spend enough time playing each game we choose to test to understand what constitutes a light, average, and demanding scene for performance, and choose scenes that are representative of the key experience a player is likely to see, rather than a worst-case scenario that might only represent a small portion of a game’s playtime.
In the event we find a performance or rendering issue with a popular game on certain hardware, we can also hold GPU vendors’ feet to the fire to make sure that it’s flagged and fixed. This used to be a rare occurrence, but as more and more corporate resources get dedicated to AI accelerators and software development at GPU vendors that might have formerly been dedicated to gaming drivers and QA, we want to keep an eagle eye out.
Choosing the games that make up the overall performance picture for our hierarchy involves a lot of trade-offs. We’d love to test every single game on the market on every graphics card that still works with modern PCs, but we only have so much time.
First and foremost, we want to make sure we’re testing titles that gamers are actually playing right now, and that would likely motivate a purchase or upgrade.
To guide our title choices, we first turn to publicly available statistics like Steam Charts to see which games have the largest player bases and which ones are sustaining their popularity over time. We also consider the general buzz from the games press and gaming community.
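As a sketch of that filtering step, here’s a hypothetical helper; the titles, player counts, and threshold are made-up illustrations, not our real criteria or data:

```python
import statistics

# A hypothetical shortlisting helper: the titles, counts, and threshold below
# are illustrative stand-ins, not actual criteria or Steam Charts data. The
# idea is to favor sustained popularity over a one-week launch spike.
def sustained_titles(weekly_peaks, floor):
    """Keep titles whose median weekly peak player count meets the floor."""
    return sorted(
        title
        for title, peaks in weekly_peaks.items()
        if statistics.median(peaks) >= floor
    )

picks = sustained_titles(
    {
        "Launch Spike": [900_000, 40_000, 20_000, 15_000],
        "Evergreen": [250_000, 240_000, 260_000, 255_000],
    },
    floor=100_000,
)
```

Using the median rather than the peak is the point: a game that briefly hit 900,000 players but cratered afterward drops out, while a steady quarter-million-player title stays in.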
If a game is a technical tour-de-force that helps us exercise particular architectural features or resources of a graphics card, whether that’s a particularly demanding ray-tracing implementation or a hunger for VRAM, we might include it regardless of its relative popularity, but we try not to let those editors’ picks dominate our lineup.
Most of today’s games are built atop engines that support DirectX 12. A handful of popular titles still rely on DirectX 11 and Vulkan, but we don’t go out of our way to include a disproportionate number of those titles compared to how frequently game studios choose to target those APIs with their projects.
Similarly, more and more of today’s biggest games are built on Unreal Engine 5, but as long as player stats suggest it makes sense to do so, we try to include a diverse set of engines to see whether certain GPU architectures handle the demands of one engine better than another. Overall, Unreal Engine 5 games make up a little less than half of our test suite, and we feel like that’s a fair mix given the current state of the market.
We’re continuing to split our performance results between raster-only tests and those with RT enabled. The bulk of our data will continue to come from those raster-only tests, but we’ve already gotten a glimpse at some 2026 releases, such as Pragmata, that deploy RT to gorgeous effect, and we’ll likely rotate out some older RT titles and include new ones as the year progresses.
Our first-half 2026 results for the GPU Hierarchy will include data from the following raster games, at a minimum: