This article will provide our lovely members with some basic knowledge about Graphics Cards. As you may have seen, we have recently launched a Graphics Card Comparison Feature that is accessible around the site.

The following article will focus on the main aspects of Graphics Cards and will give you enough knowledge to help you make the best choice when buying a new graphics card. It will be divided into several parts; this first one focuses on Shader Processing Units...

Graphics Card Technology Part 1: Shader Processing Units

Before getting into the juicy bits, let's start by clearing one thing up. It's a mistake we all make, and one I have made myself: confusing GPU with GFX.

GFX VS GPU

GFX stands for Graphics Card. A Graphics Card/Video Card is the complete product, e.g. the GeForce GTX 680 or the Radeon HD 7970.

GPU stands for Graphics Processing Unit. The GPU is the processor embedded in the Graphics Card. The GeForce GTX 680's GPU is the Kepler-based GK104, and the Radeon HD 7970's is Tahiti XT.

I am better off showing than describing, so take a look for yourself:


Now that's cleared up, let's move on to the most important aspects of a Graphics Card.

Shader Processing Unit

What is a Shader Processing Unit? Also known as a Stream Processor (AMD) or CUDA Core (NVIDIA), the Shader Processing Unit became the most important component on a Graphics Card with the release of unified shader architectures, back in 2006.

Before the release of unified Shaders, GPUs had two types of Shaders: Pixel Shaders and Vertex Shaders. Pixel Shaders were in charge of computing color, while Vertex Shaders allowed control of movement, lighting, position, and color in any scene involving 3D models.

Both are important, and GPUs always had a major problem: an imbalance between Pixel and Vertex Shaders. Example: the Radeon X1900 - 36 Pixel Shaders and 8 Vertex Shaders. The problem? Depending on the needs of the 3D application, either the Pixel or the Vertex Shaders would often be idling. Pretty much like a computer waiting for its printer to finish a print job. Another term for this: a bottleneck.

Unified Shaders, on the other hand, switch between the two roles depending on what work needs to be done. In other words, you don't have one group sitting idle waiting for the other to finish its work, which allows much higher efficiency.
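To picture the idle-unit problem, here is a toy model in Python. It is only an illustration I put together (the function names and work figures are made up), not how real GPU scheduling works, but it shows why a fixed pixel/vertex split wastes units on unbalanced workloads while a unified pool does not:

```python
# Toy model (illustration only): fixed pixel/vertex pools vs. a unified pool.

def utilization_fixed(pixel_units, vertex_units, pixel_work, vertex_work):
    """Fraction of units busy when pixel and vertex units cannot swap roles."""
    busy = min(pixel_units, pixel_work) + min(vertex_units, vertex_work)
    return busy / (pixel_units + vertex_units)

def utilization_unified(total_units, pixel_work, vertex_work):
    """Fraction of units busy when any unit can take either kind of work."""
    busy = min(total_units, pixel_work + vertex_work)
    return busy / total_units

# A vertex-heavy workload on a Radeon X1900-like split (36 pixel / 8 vertex units):
print(utilization_fixed(36, 8, pixel_work=10, vertex_work=30))   # ~0.41 -> most pixel units idle
print(utilization_unified(44, pixel_work=10, vertex_work=30))    # ~0.91 -> almost everything busy
```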

Now, what you want to know is how to compare Graphics Cards - especially from different manufacturers. AMD and NVIDIA have always used different Shader Processing Unit (SPU) designs, which makes it harder to compare Graphics Cards directly. AMD's cards have a higher count of SPUs, but each one is less complex. NVIDIA, on the other hand, has a lower count of SPUs, but they are much more complex/powerful.

A good way to help compare the SPUs of the two manufacturers is to assign them a "power" multiplier.

Until the release of the Kepler architecture, NVIDIA's Shaders had always been clocked higher than the central unit of the GPU.

The first unified shader architecture, used on the GeForce 8000/9000 Series, had the Shaders clocked up to 2.5 times (2.5X) as fast as the central unit. That's why the central unit of the GeForce 8800 GT runs at 600MHz while its Shader/Processor clock is 1500MHz. This allowed tremendous power but generated enormous heat.

With the release of the Fermi architecture, this changed. NVIDIA used a technique known as Hot Clocking: the Shaders were clocked twice as fast as the central unit - 2X. This meant less heat and lower power consumption.

However, the big shift came with the Kepler architecture.

On the Kepler architecture, the Shaders are clocked at the same speed as the central unit, and the concept of a separate "Shader/Processor clock" became obsolete. On top of that, they are up to three times as energy efficient as Fermi's Shaders.
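Put into numbers, the Shader clock is simply the core clock times that generation's multiplier. Here is a quick sketch (the clocks and multipliers are the ones used in this article; the list layout is my own):

```python
# Shader clock = core clock x generation multiplier, using figures from this article.
generations = [
    ("GeForce 8800 GT (first unified architecture)",  600, 2.5),  # -> 1500 MHz Shader clock
    ("GeForce GTX 580 (Fermi)",                        772, 2.0),  # -> 1544 MHz Shader clock
    ("GeForce GTX 680 (Kepler)",                      1006, 1.0),  # -> 1006 MHz, same as the core
]
for name, core_mhz, multiplier in generations:
    print(f"{name}: Shader clock ~ {core_mhz * multiplier:.0f} MHz")
```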

Now, you must be wondering: if the speed of the Shaders has been reduced over the years, how come graphics cards are more powerful now?

The thing is, each architecture has allowed a higher SPU count: less powerful units, but more of them. Add to that the smaller manufacturing process (which I will go over in Part 2), which allows higher clock frequencies.

Here is a simple way to calculate raw shading power (a short script that reproduces these numbers follows after the examples):

GeForce 8800 GT:

128 SPUs, 2.5X Power, 600MHz Core-Clock. Raw Power: 128 X 2.5 X 600 = 192000

GeForce GTX 285:

240 SPUs, 2.3X Power, 648MHz Core-Clock. Raw Power: 240 x 2.3 x 648 = 357696

GeForce GTX 580:

512 SPUs, 2.0X Power, 772MHz Core-Clock. Raw Power: 512 x 2.0 x 772 = 790528

GeForce GTX 680:

1536 SPUs, 1.0X Power, 1006MHz Core-Clock. Raw Power: 1536 x 1.0 x 1006 = 1545216 + Boost Clock = up to 1625856

GeForce GTX Titan:

2688 SPUs, 1.0X Power, 836.5MHz Core-Clock. Raw Power: 2688 x 1.0 x 836.5 = 2248512 + Boost Clock = 2353344 + Max Clock (e.g.: 1GHz, as it depends on Temperature & Voltage) =  up to 2688000

Things are much simpler with AMD. "Shader Power", as I have called it above, has only changed with the Graphics Core Next (GCN) architecture.

Radeon HD 2900 PRO:

320 SPUs, 0.65X Power, 600MHz Core-Clock. Raw Power: 320 X 0.65 X 600 = 124800

Radeon HD 4890:

800 SPUs, 0.65X Power, 850MHz Core-Clock. Raw Power: 800 X 0.65 X 850 = 442000

Radeon HD 6970:

1536 SPUs, 0.65X Power, 880MHz Core-Clock. Raw Power: 1536 X 0.65 X 880 = 878592

Radeon HD 7970:

2048 SPUs, 1X Power, 925MHz Core-Clock. Raw Power: 2048 X 1 X 925 = 1894400

Radeon HD 7970 GHz Edition:

2048 SPUs, 1X Power, 1000MHz Core-Clock. Raw Power: 2048 X 1 X 1000 = 2048000 + Boost Clock = up to 2150400
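To make these calculations easy to re-run, here is a minimal Python sketch of the same formula. The card figures are the ones quoted above; the raw_power function name and the table layout are my own, purely for illustration:

```python
# Raw shading power estimate: SPU count x power multiplier x core clock (MHz).

def raw_power(spus, multiplier, core_clock_mhz):
    """Estimate raw shading power as SPUs x power multiplier x core clock."""
    return spus * multiplier * core_clock_mhz

cards = {
    "GeForce 8800 GT":            (128,  2.5,   600),
    "GeForce GTX 580":            (512,  2.0,   772),
    "GeForce GTX 680":            (1536, 1.0,  1006),
    "Radeon HD 4890":             (800,  0.65,  850),
    "Radeon HD 7970":             (2048, 1.0,   925),
    "Radeon HD 7970 GHz Edition": (2048, 1.0,  1000),
}

for name, (spus, multiplier, clock_mhz) in cards.items():
    print(f"{name}: {raw_power(spus, multiplier, clock_mhz):,.0f}")
```

Note that these figures use the base core clock; for the boosted values, plug in the Boost Clock as in the examples above.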


For both manufacturers, OpenGL and Shader optimizations provided by certified drivers for each architecture can unlock a GFX's potential even further.

Which company does it better?

In my opinion, until the release of the Kepler architecture, AMD had been fitting more SPUs into a lower transistor count compared to NVIDIA's Graphics Cards (e.g. the GTX 580 vs. the HD 6970), which means it was more efficient at allocating SPUs per transistor.

Having a lower count of SPUs and clocking them higher also produced unnecessary heat and increased power consumption.

Therefore, my vote goes to AMD. However, with the release of the Kepler Graphics Cards, things have changed. NVIDIA's Graphics Cards now produce about as much heat as AMD's GFX and consume far less power, to the point that a dual-GPU GeForce GTX 690 consumes about as much as AMD's Radeon HD 7970 GHz Edition.

In the end, how can you compare an NVIDIA SPU to an AMD SPU? You can't.

You can only make rough technical comparisons, which are estimates and don't take driver support into account. The formula above is fairly accurate between NVIDIA-NVIDIA and AMD-AMD Graphics Cards, but might not yield accurate results for NVIDIA-AMD comparisons.
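For example, staying within one vendor, the figures above give a rough relative estimate (a quick sketch reusing the same numbers; the ratio reflects raw shading power only, not overall or real-world performance):

```python
# Within-vendor comparison only, per the caveat above.
gtx_580 = 512 * 2.0 * 772      # 790,528
gtx_680 = 1536 * 1.0 * 1006    # 1,545,216
print(f"GTX 680 vs GTX 580 raw shading power: {gtx_680 / gtx_580:.2f}x")  # ~1.95x
```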

Also, it's important to note that these raw values don't mean a whole lot by themselves, which is why we have a new GFX comparison area that helps you compare one GPU to another across a broad range of criteria. A card has other components that are important as well, which I will go over in the following articles.

In Graphics Card Technology Part 2, I will focus on the manufacturing technology and how it has changed over the years, plus Texture Mapping Units and Render Output Units and how they have affected, and still affect, a Graphics Card's performance.

In Graphics Card Technology Part 3, I will take a look at memory bandwidth and memory size - one of today's most overrated specs, especially in low-end cards.