There's been plenty going on behind the scenes here at GD, and one of the latest additions is GD's own Memory and Shader Performance Meters. Here I'm going to run through what they are, how they work, and how they're going to benefit your PC gaming experience.
The new Memory Performance Meter gives you an idea of how increasing the screen resolution affects the gaming performance and VRAM demands of your graphics card, while the Shader Performance Meter tells you how good your graphics card is at processing frames, independently of the resolution at which you are gaming.
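To give a rough sense of why VRAM demand grows with resolution, here's a back-of-the-envelope sketch (not GD's actual scoring formula, just simple arithmetic) of how the frame buffer alone scales with pixel count, assuming 4 bytes per pixel for 32-bit colour:

```python
# Back-of-the-envelope frame buffer sizes at common gaming resolutions.
# Real VRAM usage is dominated by textures and render targets, so treat
# these numbers as a lower bound, not a prediction.

RESOLUTIONS = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

def framebuffer_mb(width, height, bytes_per_pixel=4):
    """Size of a single 32-bit colour buffer, in megabytes."""
    return width * height * bytes_per_pixel / 1024 ** 2

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name:>6}: {framebuffer_mb(w, h):6.1f} MB per buffer")
```

The key takeaway: 4K pushes exactly 4x the pixels of 1080p, which is why a card that's comfortable at 1080p can run short of both memory bandwidth and VRAM at 4K.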
Generally, you'll see that AMD's graphics cards have a better Memory Performance Score, as they almost always offer a superior memory configuration (additional VRAM), while NVIDIA's graphics cards shine mostly on the Shader Performance Meter.
The difference between these two manufacturers has been growing over the years and looks set to keep growing. AMD has been putting its money into graphics cards with superior memory configurations, to boost image-improving techniques and support higher gaming resolutions, at the cost of greater power consumption. NVIDIA has been continuously redesigning its GPU shaders, making them both more powerful and more energy efficient.
The more important of these two meters is the Shader Performance Meter. The better the Shader Performance, the better the graphics card is at rendering all the eye candy. The Memory Performance just needs to be adequate to make that possible, at any given resolution.
What this means is that a graphics card with very high Memory Performance and low Shader Performance performs worse than a graphics card with the opposite configuration, particularly at lower resolutions.
A graphics card's overall performance is only as good as the GPU itself. So having a very large frame buffer or a very wide memory bus is useless if the GPU is not up to the task. This is one of the main reasons why AMD's Radeon R9 Fury X, equipped with HBM memory and delivering 512GB/s of Memory Bandwidth, is slower than NVIDIA's GeForce GTX 980 Ti at anything below 4K.
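Those bandwidth figures follow directly from the memory specs of each card. A quick sketch, using the commonly quoted per-pin data rates for each card's memory (illustrative, not a benchmark):

```python
def memory_bandwidth_gbps(rate_gbps_per_pin, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width,
    divided by 8 to convert bits to bytes."""
    return rate_gbps_per_pin * bus_width_bits / 8

# Radeon R9 Fury X: first-gen HBM at 1 Gbps per pin on a 4096-bit bus
fury_x = memory_bandwidth_gbps(1.0, 4096)     # -> 512.0 GB/s

# GeForce GTX 980 Ti: GDDR5 at 7 Gbps per pin on a 384-bit bus
gtx_980_ti = memory_bandwidth_gbps(7.0, 384)  # -> 336.0 GB/s
```

Despite a roughly 50% bandwidth advantage, the Fury X is still the slower card below 4K: the extra bandwidth only pays off once the shaders are fast enough to make use of it.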
Below are two good examples of what this means:
GeForce GTX 950 VS Radeon R7 370
The GeForce GTX 950 is significantly faster than the Radeon R7 370 because its Shader Performance is superior, even though the latter shines much brighter in terms of Memory Performance.
The Radeon R7 370 having better Memory Performance basically means that as the resolution increases, its performance decreases at a lesser rate than the GeForce GTX 950's, although this doesn't necessarily mean the Radeon R7 370 is faster at 4K.
GeForce GTX 660 VS GeForce GTX 295
We couldn't leave this example out. Obviously 2009's GeForce GTX 295 has an ageing architecture and doesn't benefit from all the driver updates that the GeForce GTX 660 still does, but it's clear the GeForce GTX 295's 50% superior Memory Bandwidth proves useless in most of today's gaming scenarios because of its weak Shader Performance. Let's take a look at the Resolution Performance Scores:
Notice how the GeForce GTX 660's scores decrease at a higher rate than the GeForce GTX 295's do, although they both offer similar performance at 2560x1440. However, that is a resolution to which neither graphics card is suited - both should only be used, at most, for 1080p gaming, and in that case the GeForce GTX 660 clearly wins. As the resolution decreases, the GeForce GTX 660's winning margin widens, and at 720p they both should provide Ultra visuals easily.
We are also developing two other meters: the Operating Temperature and Fan Noise Meters. Whereas the Operating Temperature Meter will give you an idea of how cool your graphics card runs compared to others, the Fan Noise Meter will let you know just how silent your graphics card is. We'll keep you posted when those new tools are ready.
Excited about it? What do you think of the Memory and Shader Performance Meters? Does your graphics card have better Memory or Shader Performance? Let us know below!