The RX 6500 XT might be slower than a 5-year-old GPU at the same price

Written by Stuart Thomas on Fri, Jan 7, 2022 3:20 PM

The ongoing COVID-19 pandemic has not been great for PC hardware enthusiasts, as the global chip shortage strains availability, and what little stock does appear is quickly snapped up by scalpers or miners, leaving very little for actual gamers. With the reveal of AMD’s new entry-level GPU though, the specs aren’t looking too great compared to a GPU that launched 5 years ago at the same price.

The RX 6500 XT launches on January 19th for $199. Five years ago the RX 480 also launched at $199, and as some users online have pointed out, it actually has better specs despite the 6500 XT being a much newer card at the same price. Of course, no proper comparisons can be made until we get to test the new card ourselves, but on paper it’s not looking too good…

As Reddit user u/valhalao pointed out, the RX 480 launched in 2016 with 5.83 TFLOPS of raw compute power, whereas the newer RX 6500 XT only has 5.77 TFLOPS. It also lacks some encoding features that the 5-year-old GPU had. But again, this is not fully representative of actual performance until we see some third-party benchmarks.
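
For reference, those raw-compute figures follow directly from shader counts and clock speeds. A rough sketch in Python (the 2304-shader/1266MHz and 1024-shader/2815MHz figures are the two cards' published specs, and peak FP32 assumes the usual 2 ops per shader per clock):

# Peak FP32 throughput: shaders x 2 ops (FMA) x boost clock in MHz
def tflops(shaders, boost_mhz):
    return shaders * 2 * boost_mhz / 1e6

print(tflops(2304, 1266))  # RX 480:     ~5.83 TFLOPS
print(tflops(1024, 2815))  # RX 6500 XT: ~5.77 TFLOPS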


The RX 480 even has higher memory bandwidth, more shader cores, and more texture units. It is possible that the more efficient RDNA 2 cores and the 16MB of Infinity Cache can make up for the difference though, since AMD says the Infinity Cache boosts effective bandwidth most at 1080p, the resolution entry-level cards target. Plus, the 6500 XT has a pretty impressive 2.6GHz clock speed, much higher than the RX 480’s.
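
As for raw memory bandwidth, the gap is easy to quantify (a quick sketch; the 256-bit/8Gbps and 64-bit/18Gbps figures are the two cards' published memory specs):

# Peak memory bandwidth: (bus width in bits / 8) x data rate in Gbps
def bandwidth_gb_s(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(bandwidth_gb_s(256, 8))  # RX 480 8GB (GDDR5): 256.0 GB/s
print(bandwidth_gb_s(64, 18))  # RX 6500 XT (GDDR6): 144.0 GB/s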

One silver lining in all this though could be more availability for gamers. AMD has pressed for a ‘gamer-first’ approach, and as such seems to be delivering a GPU that is not powerful enough for mining but still good enough to play modern games:

"We have really optimized this one to be gaming-first at that target market. And you can see that with the way that we configured the part," said the CVP of Radeon Graphics at AMD, Laura Smith. "Even with the four gigs of frame buffer. That’s a really nice frame buffer size for the majority of AAA games, but it's not particularly attractive if you're doing blockchain type activities, or mining activities.

"And so we've tried to make some real gamer-first transitions for the things that we don’t control but we have influence over to optimize that card to be as accessible as possible to that use of gamers."

What do you think? Is the RX 6500 XT a good deal disregarding inflated prices? Do you think we’ll see better availability? Or will miners still want to buy a card like this? And how do you feel about the specs compared to the old RX 480? Let us know your thoughts!

Do you think the RX 6500 XT will perform better or worse than the RX 480?

Do you think there will be better availability for the RX 6500 XT?


Rep
16
Offline
12:40 Jan-11-2022

I may trade my 1080 Ti, and the two 1080 Tis sitting in the closet as backup, for the new 6500 XT…

0
Rep
12
Offline
15:55 Jan-09-2022

Nvidia: I'm gonna sell the old 2060, but with 12GB of VRAM. No better performance, but a much higher price. Who can make a better joke?


AMD, with a 6500 XT in hand: Hold my beer!

1
Rep
191
Offline
junior admin badge
23:01 Jan-12-2022

Nice burn


0
Rep
97
Offline
admin approved badge
03:08 Jan-09-2022

Suddenly, my RX 580 doesn't feel so old.

6
Rep
1
Online
08:48 Jan-09-2022

It does, because it can't run the latest, most demanding games well even at 1080p.


The RX 6500 XT should perform about 25-30% faster than an RX 580, though.

0
Rep
272
Offline
admin approved badge
15:35 Jan-09-2022

Way to crap on someone's parade...

5
Rep
1
Online
17:51 Jan-09-2022

I didn't think about that. :/


As a person with a GTX 1060 and an RX 570, I'm technically crapping on myself too, but it's just how it is. I'm struggling here as well and know from experience.

0
Rep
97
Offline
admin approved badge
20:37 Jan-09-2022

rattlehead999 If I'm able to run Halo Infinite at 1080p at over 30 FPS out in the open, then I'm fine with it.


And I can.

0
Rep
1
Offline
11:31 Jan-11-2022

That's doubtful, as it was quoted to be slower than an RX 480, and the RX 580 is 10% faster than an RX 480. But then again, architecture may play a role in the performance differences.

0
Rep
1
Online
12:37 Jan-11-2022

Quoted from whom?


The RX 6500 XT is faster than the RX 5500 XT, which is faster than the RX 580. Unless the bandwidth chokes it.

0
Rep
191
Offline
junior admin badge
23:02 Jan-12-2022

We shall see if the benchmarks reflect your optimism for the 6500 XT.

0
Rep
191
Offline
junior admin badge
23:03 Jan-12-2022

As long as you take care of the 580 she'll take care of you.

0
Rep
97
Offline
admin approved badge
23:21 Jan-12-2022

I'm taking good care of the 580

1
Rep
191
Offline
junior admin badge
23:43 Jan-12-2022

Happy to hear. May it last you a long time.


1
Rep
1
Online
11:49 Jan-13-2022

I undervolted both my GTX 1060 and RX 570; now the GTX 1060 consumes 80W and the RX 570 consumes 95W in FurMark, and neither goes over 55C. I lost 1-3% performance on each.

0
Rep
97
Offline
admin approved badge
21:07 Jan-13-2022

I'm honestly not comfortable undervolting.

0
Rep
57
Offline
23:45 Jan-08-2022

This GPU is so bad that it actually might be good. In the current market, when you have to pay like $700 for an RTX 3060, $200 for an RX 580-class GPU might be okay. But everything about this GPU is bottom of the barrel, like vomit-inducingly bad: a 4x PCIe 4.0 interface, 4GB of VRAM on a 64-bit bus, no AV1 decoder, no encoders at all (no ReLive), 5-year-old GPU performance, and all that for a $200 MSRP…

1
Rep
272
Offline
admin approved badge
15:57 Jan-08-2022

No AV1 decode on the new card? What the...?

5
Rep
191
Offline
junior admin badge
01:26 Jan-08-2022

While I detest having 2 graphics adapters (a dedicated GPU and an iGPU, because games sometimes have issues using the dedicated GPU instead of the iGPU), right now, if you're a newcomer to PC gaming and don't have a ton of cash, you're better off buying a decent APU (a CPU with an integrated GPU) for 30-45FPS 1080p gaming and building a system around that, than spending $200 on something that in my mind should cost $100-$120 max.


Sorry for the long-winded post, but this whole situation is quite annoying.


Intel, you'd better not make your GPUs expensive, or you'll get blasted for garbage drivers.

5
Rep
25
Offline
admin approved badge
09:27 Jan-08-2022

It is annoying! And we can't do anything about it (like a boycott) because of miners.

1
Rep
191
Offline
junior admin badge
20:26 Jan-08-2022

While miners are responsible for a portion of the issue, factory closures, supply chain disruptions, and increased chip usage in basically everything have decimated chip availability for GPUs.

1
Rep
1
Online
09:45 Jan-08-2022

GDDR6 is the reason why the MSRP is high for both AMD and Nvidia, though AMD could have made the RX 6500 XT a $175 MSRP GPU and Nvidia a $230 MSRP GPU.


1GB of 16Gbps GDDR6 costs $15, and the RX 6500 XT uses 18Gbps memory, which is new and certainly more expensive, so the VRAM on the RX 6500 XT costs at the very least $60.


Compare that to the RX 580 4GB, whose GDDR5 cost $5.50 per GB, just $22 for the VRAM. Then add in inflation.


And GDDR6 used to cost $13 per GB in 2019 for 16Gbps and $11.70 for 14Gbps; that's why the RX 5500 XT could have a bigger die and the same 4GB of VRAM and still cost $200. Plus, again, inflation.
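

A quick check of that math (a sketch in Python; the per-GB prices are the ones quoted above, not verified market data):

# 4GB VRAM cost estimates from the per-GB prices quoted above
print(4 * 15.0)  # 16Gbps GDDR6 today   -> $60 (the 6500 XT's 18Gbps chips cost more)
print(4 * 5.5)   # GDDR5 (RX 580 era)   -> $22
print(4 * 13.0)  # 16Gbps GDDR6 in 2019 -> $52
print(4 * 11.7)  # 14Gbps GDDR6 in 2019 -> $46.80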


Though AMD doubled their profit margins in 2019 with the release of Zen 2 and RDNA 1, then further increased them with the release of Zen 3 and RDNA 2, while Nvidia has had sky-high profit margins since (and especially including) the GTX 1000 series.

2
Rep
191
Offline
junior admin badge
20:42 Jan-08-2022

Inflation is a B i A t C h, but sadly it's something we as individuals can't control. As for the rest, we'll just have to roll with the punches and be very smart when it comes to buying hardware. Ideally we should have a cut-off point where you just don't buy something that goes past a certain price (but that is next to impossible to implement on a global level, sadly).

0
Rep
1
Online
08:43 Jan-09-2022

The cut-off point is $20 above MSRP, unless it's a special third-party version that has a high ASIC score or some crazy good cooling.


In finance there is a saying that you should never change your expectations and standards during an abnormal market, and I plan to follow that rule, even if many don't. If everyone followed that rule, these GPUs would be selling at or very close to MSRP.

1
Rep
191
Offline
junior admin badge
11:50 Jan-09-2022

When it comes to finances, your stance is very disciplined and I share it. I salute you.

0
Rep
272
Offline
admin approved badge
15:40 Jan-10-2022

That's the thing, for all the crap I get for my own GPU choices - I've never paid above MSRP for any of them. I just flat-out refuse to pay above MSRP - and that includes partner cards (some cheap-looking RGB plastic for £50-400 extra over MSRP doesn't make a card any faster, lol).


Unfortunately, it's not really the likes of MSRP-focused people who set the prices...

1
Rep
57
Offline
23:49 Jan-08-2022

I know it's been a very long time, but from my personal experience, Intel integrated GPUs are a terrible experience. These iGPUs are usually extremely memory-bandwidth-starved, and performance is very inconsistent between games and resolutions. I would consider any iGPU only as a last resort; I would rather get some ancient Nvidia Quadro or an old GTX than an iGPU.

0
Rep
1
Online
08:47 Jan-09-2022

That was long ago; in the last 3 years Intel's drivers have improved a lot. But yes, their iGPUs are not meant for gaming at all. They are for Quick Sync (or whatever Intel calls its accelerated media tasks) and for use as a basic graphics adapter. The Xe graphics were the first gaming-oriented integrated graphics Intel released.


Otherwise, I want an 8-core/16-thread Zen 4 with 16-20 CUs of RDNA 2/3 at 2.4-2.5GHz on 5nm at a 125-130W TDP.


The 6800U already has 12 CUs at 2.2-2.4GHz on 7nm++ (6nm) and 8 Zen 3 cores/16 threads at 25W, and I think that even on 7nm++ (6nm) they could make a desktop chip with 16 RDNA 2 CUs at 2.2-2.4GHz and 6-8 Zen 3 cores under a 130W TDP.

0
Rep
191
Offline
junior admin badge
11:57 Jan-09-2022

While the bandwidth limit is a concern with old APUs (mainly due to slow RAM), with DDR5 you'll have RAM that is 5000MHz or faster. At that speed it comes down to how optimized the APIs and scheduler are to take advantage of the available resources.


Basically, AM5 high-end APUs + FSR + DDR5 RAM = acceptable 1080p gaming.


Here's a link below that goes into it a little more.


https://www.rockpapershotgun.com/ces-2022-amd-confirms-ryzen-7000-cpus-radeon-rx-6500-xt-graphics-card-and-more

0
Rep
1
Online
12:12 Jan-09-2022

The L3 cache in the iGPU (Infinity Cache) is also a huge factor, as AMD says it almost doubles the effective bandwidth on the RX 6500 XT, so it should do the same for the 102GB/s of dual-channel DDR5-6400.


For APUs I'd say dual-channel DDR5-6400 is the minimum I'd recommend.


The Valve Steam Deck already seems to be doing very well with 8 CUs at only 1.0-1.6GHz, so I can only imagine 12 CUs at 2.4GHz would be excellent for 1080p gaming with FSR, delta color compression, and above-average-frequency dual-channel DDR5. I can see myself getting an APU in the future, but I'm not buying a PC this year, as I'm not going above 1080p any time soon.
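

For anyone wanting to check the 102GB/s figure, it follows from channel count, bus width, and transfer rate (a sketch assuming the standard 64 bits per DDR5 channel):

# Peak DRAM bandwidth: channels x (bus bits / 8 bytes) x transfer rate in MT/s
def dram_gb_s(channels, bus_bits, mt_s):
    return channels * (bus_bits / 8) * mt_s / 1000

print(dram_gb_s(2, 64, 6400))  # dual-channel DDR5-6400 -> 102.4 GB/s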

1
Rep
1
Offline
11:40 Jan-11-2022

5700G it is then

0
Rep
191
Offline
junior admin badge
23:06 Jan-12-2022

Umm...


Do you mean for yourself? Because if your specs are legit, I don't think you need the 5700G.

0
Rep
76
Offline
admin approved badge
23:36 Jan-07-2022

Yeah, comparing them like that does indeed paint a bad picture: fewer features, "lower performance", plus it is limited to 4 PCIe lanes, which is fine for PCIe 4.0... I wonder if that one will sting for owners of PCIe 3.0 motherboards. Almost like AMD is trying to tell you you're too poor for the good stuff... :-D Anyway, unless Nvidia does the same gimping, they might be the better choice. But then again, right now, in a market where you buy what you can and not what you want, this might end up selling well. Even if I would say it feels like AMD is taking a piss on everyone not willing to buy their $300+ offerings.


But I would also stress one thing: TFLOPS are a pretty bad way of comparing graphics cards, because they really don't scale well, and the scaling gets especially bad in gaming. Plus, it just isn't apples to apples once you are talking about two different architectures, so I wouldn't take TFLOPS as the ultimate indicator of performance between the two. That being said, looking at the rest of the specs, unless reviews surprise with amazing RX 6500 XT performance, I would take an RX 480 8GB over the RX 6500 XT any time. If there were a choice.

1
Rep
-6
Offline
21:33 Jan-07-2022

"better availability than" what? rx 480 or this year gpus. It might be better than rx480 since the technology and the process node is different, tflops can't really tell the exact performance of the card, but the availability most likely will be lower than rx480 even with 6nm allocation

3
Rep
74
Offline
19:18 Jan-07-2022

Comparing different-generation GPUs does not paint a nice picture. It's said that if a programming messiah came along and wrote an API that was truly optimal and extra fast, you could play 4K games on an entry-level card. Most cards do have the raw performance to run anything; it's just that the APIs are inefficient. Vulkan made some great strides, which can be seen nicely in the Doom games.


Hence, what matters is not bandwidth as a raw number, but the communication paths the GPU has to take to render the colours of the pixels on the screen. Same with GPU clocks. Still, this is all just pure theory, and nobody has really taken the time to properly program or optimize. You can see that in the following:


1) GPUs becoming stronger in raw stats: higher clocks, bandwidth...


2) Games not being optimized (from Fallout 4 all the way to Cyberpunk). Games which look worse and run worse than other games that are years old.


3) No real strides in API development. DX12 at one point promised that games would be able to use the integrated GPU and the discrete GPU at the same time. Of course, that was scrapped given what a massive undertaking it is.


So all in all, microelectronics is reaching its upper ceiling when it comes to raw power/voltage/clocks/temperatures, and game devs are spending less on optimization and more on marketing.

9
Rep
191
Offline
junior admin badge
01:28 Jan-08-2022

It was a pleasure reading your post. I take my hat off to you sir.


🎩

2
Rep
57
Offline
23:57 Jan-08-2022

Well, I've always had a pet peeve about people holding up Doom 2016 and Eternal as "extremely well optimized". It's not that simple, because those Doom games are comparatively simple from a level design and environment perspective; it's still basically a glorified corridor shooter. Game engines and game styles (like open-world RPGs, arcade racing games, simulators, etc.) have a tremendous impact on system requirements, and the API is only one side of the story here.

3
Rep
74
Offline
00:02 Jan-10-2022

Reading comprehension 101: I'm not talking about the FPS itself, but the uplift from DX12 to Vulkan. Try to read with a bit more thought next time. Furthermore, simple level design or not, graphical effects are what dictate how much clock speeds matter. Memory in itself is mostly concerned with loading/unloading, hence why RPGs are usually more memory-intensive, given that they usually have separate cells with their own scripts. But Doom not having the complexity while still looking gorgeous has nothing to do with FPS itself. The DIFFERENCE between DX12 FPS and Vulkan FPS, while the game looks the same, is what I'm talking about...

1
Rep
57
Offline
14:40 Jan-10-2022

Depending on GPU architecture, Vulkan vs OpenGL in Doom 2016 doesn't even make a big difference, ranging from 5-15% on Nvidia cards. Sure, on AMD GPUs it's a significant improvement, but that's because AMD was always dogs#it in OpenGL performance, not because Vulkan is that much more efficient in the end product. Now, Doom Eternal is written only in Vulkan, so it's impossible to compare DX12 vs Vulkan in that game, but in other games there is usually no significant difference between them (I personally regard anything under 15% performance improvement as insignificant). So your Vulkan-in-the-Doom-games point kinda falls flat in this case.

0
Rep
74
Offline
00:30 Jan-11-2022

Are you dense or something? Where did I write that Vulkan is some god-given API? I even said that no real strides have been made in API development. And even your 10-15% is an argument for developing a better generalized API. The point is not that "AMD bad" or "Nvidia bad" at anything; you can't possibly generalize that one architecture is better than the other in OpenGL, something that hasn't been seen in games for ages (more serious ones, at least). AMD being worse in OpenGL is mostly a Windows thing, since on Linux there is no real difference.


Vulkan making great strides in API development (which it did) does not mean "VULKAN DA BEST". Innovating =/= #1 best.


So let's recap. My point is: APIs are pure dog!^%# if we take the raw theoretical performance of GPUs into consideration, whether Vulkan, DX12, or OpenGL. Even low-end GPUs can dish out an enormous number of computations, running at very high clock speeds (try to imagine transistors switching on/off 3,000,000,000 times per second, i.e. 3GHz).


So again, please read carefully and don't get hung up on a single word and/or sentence.


EDIT: When I wrote DX12 -> Vulkan, I meant the uplift from DX12 to Vulkan. As long as it's not a within-margin-of-error improvement (1-3%), it's an uplift. Not a major one, but an uplift nonetheless, which shows that something as "simple" as an API change can have an impact. That's of course logical, since APIs are literally the code doing the communicating. So you should do something about that "pet peeve", because you lose the ability to comprehend once you're triggered.

0
Rep
57
Offline
00:50 Jan-11-2022

It seems you overstated and misunderstood my statement about Nvidia & AMD performance in OpenGL/Vulkan. But there is no use discussing with a condescending, arrogant knucklehead like you. Is it hard to answer politely? No? Then you can start today, but I'm done here...

0
Rep
1
Online
15:22 Jan-07-2022

RDNA 1 and RDNA 2 have 35-40% better gaming performance per TFLOP compared to GCN 4.0 (Polaris). Keep in mind the RX 5500 XT has 5.1 TFLOPS and is slightly faster than the RX 590, and the RX 590 has 7.119 TFLOPS.


That means the RX 5500 XT gets 0-5% better performance than the RX 590 with roughly 28% fewer FLOPS.
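

Put into numbers (a sketch using the TFLOPS figures above; the 0-5% frame-rate lead is an estimate, so this compares at equal frame rates):

# Gaming performance per TFLOP, RX 5500 XT vs RX 590
rx5500xt_tflops, rx590_tflops = 5.1, 7.119

print(1 - rx5500xt_tflops / rx590_tflops)  # ~0.28 -> ~28% fewer FLOPS
print(rx590_tflops / rx5500xt_tflops - 1)  # ~0.40 -> ~40% better per TFLOP at equal fps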

11
Rep
1
Online
15:35 Jan-07-2022

I am, however, worried about the low bandwidth, because with only 16MB of L3 cache it's really hard to imagine that's enough to compensate. But who knows, maybe it is, since there are fewer SPs, TMUs, and ROPs, and it may be the cache's effectiveness relative to the GPU's size that matters rather than its overall capacity.

1
