New GD Feature: Graphics Card Memory and Shader Performance Meters

Written by Paulo Proenca on Wed, Oct 21, 2015 3:00 PM

There's been plenty going on behind the scenes here at GD, and one of those things is GD's own Memory and Shader Performance Meters. Here I'm going to run through what they are, how they work, and how they're going to benefit your PC gaming experience.

The new Memory Performance Meter gives you an idea of how increasing the screen resolution affects the gaming performance and VRAM demands of your graphics card, while the Shader Performance Meter tells you how good your graphics card is at processing frames, regardless of the resolution at which you are gaming.
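
For the curious, here's a minimal sketch of how two independent scores like these could be derived, assuming Memory Performance is driven by bandwidth and VRAM capacity and Shader Performance by measured low-resolution frame rates. This is not GD's actual formula; the weights, reference values, and example numbers below are purely illustrative assumptions.

```python
# A minimal sketch of two independent "meters", under stated assumptions:
#  - Memory Performance is approximated from memory bandwidth and VRAM size.
#  - Shader Performance is approximated from a measured average FPS at a low,
#    memory-light resolution (e.g. 720p), relative to a reference card, since
#    raw shader counts are not comparable across AMD and NVIDIA architectures.
# This is NOT GD's actual formula - just an illustration of scoring each axis
# separately.

def memory_score(bandwidth_gbs: float, vram_gb: float,
                 ref_bandwidth_gbs: float = 336.0, ref_vram_gb: float = 6.0) -> float:
    """Score 0-100, weighting bandwidth more heavily than capacity."""
    score = 100.0 * (0.7 * bandwidth_gbs / ref_bandwidth_gbs
                     + 0.3 * vram_gb / ref_vram_gb)
    return round(min(score, 100.0), 1)

def shader_score(avg_fps_720p: float, ref_avg_fps_720p: float = 140.0) -> float:
    """Score 0-100 from measured low-resolution frame rates (hypothetical data)."""
    return round(min(100.0 * avg_fps_720p / ref_avg_fps_720p, 100.0), 1)

# Hypothetical example values, not real benchmark results:
print("Card A:", memory_score(179.2, 4.0), "% memory,", shader_score(62.0), "% shader")
print("Card B:", memory_score(105.6, 2.0), "% memory,", shader_score(78.0), "% shader")
```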

Generally, you'll see that AMD's graphics cards have a better Memory Performance score, as they almost always offer a superior memory configuration (additional VRAM), while NVIDIA's graphics cards will mostly shine on the Shader Performance Meter.

The difference between these two manufacturers has been increasing over the years and will continue to do so. AMD has been putting its money into graphics cards with superior memory configurations, to boost image-improving techniques and support higher gaming resolutions, at the cost of greater power consumption. NVIDIA, meanwhile, has been continuously redesigning its GPU shaders, making them both more powerful and more energy efficient.

The more important of the two meters is the Shader Performance Meter. The better the Shader Performance, the better the graphics card is at rendering all the eye candy. The Memory Performance just needs to be adequate to make that possible at any given resolution.

What this means is that a graphics card with very high Memory Performance and low Shader Performance performs worse than a graphics card with the opposite configuration, particularly at lower resolutions.

A graphics card's overall performance is only as good as the GPU itself. So having a very large frame buffer or a very large memory channel is useless if the GPU is not up to the task. This is one of the main reasons why AMD's Radeon R9 Fury X, equipped with HBM Memory and delivering 512GB/s of Memory Bandwidth, is slower than NVIDIA's GeForce GTX 980 Ti at anything below 4K.
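
For reference, memory bandwidth follows directly from the bus width and the effective per-pin data rate; the quick check below uses the two cards' public specifications to reproduce the figures above.

```python
# Memory bandwidth = (bus width in bits / 8 bytes) * effective data rate per pin.
# Quick check of the two cards mentioned above, using their public specs.

def bandwidth_gbs(bus_width_bits: int, effective_rate_gbps: float) -> float:
    return bus_width_bits / 8 * effective_rate_gbps

# Radeon R9 Fury X: 4096-bit HBM at an effective 1.0 Gbps per pin
print("Fury X    :", bandwidth_gbs(4096, 1.0), "GB/s")   # 512.0
# GeForce GTX 980 Ti: 384-bit GDDR5 at an effective 7.0 Gbps per pin
print("GTX 980 Ti:", bandwidth_gbs(384, 7.0), "GB/s")    # 336.0
```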

Below are two good examples of what this means:

GeForce GTX 950 VS Radeon R7 370

The GeForce GTX 950 is significantly faster than the Radeon R7 370 because its Shader Performance is superior to the Radeon R7 370's, even though the latter shines much brighter in terms of Memory Performance.

The Radeon R7 370 having better Memory Performance basically means that, as the resolution increases, its performance decreases at a lower rate than the GeForce GTX 950's, although this doesn't necessarily mean the Radeon R7 370 is faster at 4K.

GeForce GTX 660 VS GeForce GTX 295

 

We couldn't leave this example out. Obviously 2009's GeForce GTX 295 has an ageing architecture and doesn't benefit from all the driver updates that the GeForce GTX 660 still does, but it's clear the GeForce GTX 295's 50% greater Memory Bandwidth proves useless in most of today's gaming scenarios because of its weak Shader Performance. Let's take a look at the Resolution Performance Scores:

[Chart: GeForce GTX 660 vs GeForce GTX 295 Resolution Performance Scores]

Notice how the GeForce GTX 660's scores decrease at a higher rate than the GeForce GTX 295's do, although they both offer similar performance at 2560x1440. However, that is a resolution to which neither graphics card is suited: both should only be used, at most, for 1080p gaming, and in that case the GeForce GTX 660 clearly wins. As the resolution decreases, the GeForce GTX 660's winning margin widens, and at 720p both should easily provide Ultra visuals.
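
To put those resolution steps in perspective, raw pixel count is a rough proxy for how per-frame shading and frame-buffer load grow with resolution; the short snippet below is plain arithmetic over the standard resolutions, nothing more.

```python
# Pixel counts for common resolutions, relative to 1080p - a rough proxy for
# how the per-frame shading and frame-buffer load grows as resolution rises.

resolutions = {
    "720p (1280x720)":   (1280, 720),
    "1080p (1920x1080)": (1920, 1080),
    "1440p (2560x1440)": (2560, 1440),
    "4K (3840x2160)":    (3840, 2160),
}

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    print(f"{name}: {w*h:,} pixels ({w*h/base:.2f}x 1080p)")
```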

We are also developing two other meters: the Operating Temperature and Fan Noise Meters. Whereas the Operating Temperature Meter will give you an idea of how cool your graphics card runs compared to others, the Fan Noise Meter will let you know just how silent your graphics card is. We'll keep you posted when those new tools are ready.

Excited about it? What do you think of the Memory and Shader Performance Meters? Does your graphics card have better Memory or Shader Performance? Let us know below!

Does Your Graphics Card Have Better Memory or Shader Performance?


Rep
4
Offline
16:31 Oct-25-2015

Please add the same kind of feature for CPUs! That would be awesome.

0
Rep
51
Offline
10:08 Oct-24-2015

Mine: shader is 49% and memory 47% :D So mine is the same

0
Rep
212
Offline
admin approved badge
09:28 Oct-23-2015

Mine is 63% Shader and 50% Memory. Not bad :D

0
Rep
14
Offline
04:14 Oct-23-2015

You guys are amazing for doing this :D please extend this to your CPUs as well!

1
Rep
58
Offline
admin approved badge
20:49 Oct-22-2015

60% and 67% respectively (not counting my OC). Seems really balanced for a 3-year-old card!

0
Rep
74
Offline
10:26 Oct-22-2015

Well, I'm quite content with my 280X; it's never let me down so far. My friend has got a 970, and my card outperforms it in some titles, whilst his does better in some others.

-4
Rep
354
Offline
admin approved badge
16:28 Oct-22-2015

And his GTX 970 is plugged into a toaster? :-D

1
Rep
1,471
Offline
admin approved badge
17:03 Oct-22-2015

I am sorry buddy but I can't think of any game in which Radeon R9 280X would beat GeForce GTX 970. Care to give an example?

1
Rep
262
Offline
admin approved badge
17:42 Oct-22-2015

It might be possible that his friend's CPU holds the 970 back significantly.

1
Rep
23
Offline
18:28 Oct-22-2015

Maybe you overclocked your 280X..
The 280X is slightly better than the 960

0
Rep
1,471
Offline
admin approved badge
10:20 Oct-22-2015

Fun Fact: The "Its Equivalent" choice in the poll being the most voted option so far means most of our dear GD Members (or at least those that voted) have a High-End Graphics Card. Something above a GeForce GTX 770, like the GeForce GTX 780, Radeon R9 290X, Radeon R9 390, GeForce GTX 970, etc.

1
Rep
95
Offline
11:32 Oct-22-2015

Pip, can I get your fearless forecast for the next generation(s) of GPUs? Would Nvidia/AMD be working more towards making 4K mainstream (memory) or further improving 1080p gaming (shader)? I would rather it be the latter; but technology like HBM seems to point to the former. Or maybe just a good balance of both?

0
Rep
1,471
Offline
admin approved badge
11:46 Oct-22-2015

I think perhaps AMD is aiming for the 3K/4K market. That's why they are focusing on HBM graphics cards.
I would say most of the gamers out there game at 1080p and below, and currently NVIDIA dominates that market. Their GPUs are very efficient and offer good shader performance. Of course some of them are bottlenecked by the memory. For example, the GeForce 840M could pull off 900p gaming were its bus width 128-bit instead of 64-bit.
AMD needs to move on to a newer and more powerful architecture. Perhaps a smaller manufacturing process will allow that. Currently their graphics cards offer an enormous number of shaders which are not really all that powerful or efficient. The problem lies there.

1
Rep
-17
Offline
02:23 Oct-23-2015

Are any of your graphics cards future-proof? I hate the fact that I switch graphics cards every 2 years just because of lower FPS. Since 4K displays have already been developed, what graphics card should I buy? Single or SLI/Crossfire?

0
Rep
9
Offline
16:19 Oct-22-2015

But mine is not high-end, yet still equivalent :) Does that mean it WAS high-end when it came out?

0
Rep
1,471
Offline
admin approved badge
17:04 Oct-22-2015

No, yours is just balanced lol

1
Rep
47
Offline
09:33 Oct-22-2015

10% both. Funny how I can amp up the res to 1600x900 (native res) from 720p while only losing 10 FPS; it runs at 35. While on The Witcher 3 I lose like 5 FPS after amping it up to 900p.

0
Rep
31
Offline
09:23 Oct-22-2015

48% Shader and 62% Memory

0
Rep
272
Offline
admin approved badge
06:57 Oct-22-2015

says 100% mem 100% shader. Happy with that, lol :D

0
Rep
38
Offline
04:59 Oct-22-2015

17% Shader and 9% memory :)

0
Rep
-25
Offline
01:58 Oct-22-2015

69% Shader 83% Memory

0
Rep
20
Offline
23:11 Oct-21-2015

100% shader, 100% memory.

0
Rep
95
Offline
admin approved badge
22:31 Oct-21-2015

"Having a very large frame buffer or a very large memory channel is useless if the GPU is not up to the task" That's the smartest thing I've ever heard. In terms of gaming, 12gb vram on the Titan is absurd and even the 8gb on the 390 is too much (aside from Shadow of Mordor i suppose)

2
Rep
272
Offline
admin approved badge
06:59 Oct-22-2015

I've personally run out of VRAM working with Photoshop (with other programs open) on a 3GB GTX 780, and I won't even talk about my 3D scenes, so if not for gaming - I'll take a load of memory just for work!

1
Rep
95
Offline
admin approved badge
21:03 Oct-22-2015

Absolutely, if it comes down to work, then more VRAM is certainly welcome! But in terms of just gaming not much is needed yet :)

1
Rep
6
Offline
21:42 Oct-21-2015

Nice new addition!

0
Rep
48
Offline
20:23 Oct-21-2015

Great article, well written, easy to understand and very useful. Keep up the good work GD!

0
Rep
15
Offline
19:37 Oct-21-2015

96% shader, 100% memory. Too bad tessellation in GCN 1.0 is awful, The Witcher 3 is killing my rig.

0
Rep
3
Offline
19:05 Oct-21-2015

51% and 50% I have

0
Rep
3
Offline
18:39 Oct-21-2015

88% on Shader and 52% on Memory here.
Why don't you guys include an after-overclock meter?
My 52% on Memory would be around 60% if a percentage including the overclocks were implemented.

0
Rep
60
Offline
21:37 Oct-21-2015

I think that won't be possible, since they would have to know your exact core clock speed and memory clock speed.

0
Rep
95
Offline
00:22 Oct-22-2015

Well, GD could always allow users to specify the overclock separately for core/memory.
There is obviously still work to be done though, because even the current general overclock feature is not working (it overestimates the benefit of the overclock).

0
Rep
3
Offline
11:25 Oct-22-2015

It would be like this: the user feeds their overclocks to GD, and GD has a pre-programmed calculator to work out the approximate performance increase as a percentage.

0
Rep
95
Offline
18:26 Oct-21-2015

This is pretty neat. Kudos to GD!
My GPU is at 91% shader and 51% memory.

0
