Up For Debate - Is ray tracing bad for PC gaming?

Written by Jon Sutton on Sun, Oct 20, 2019 4:08 PM

I must confess I somewhat naively assumed that the capacity for developers to make a game look better was, automatically, a good thing. Detractors of real-time ray tracing technology have taught me the error of my ways though. Progress is bad. Old is good. Or so say the proverbial sticks-in-the-mud who would perhaps prefer things stay exactly as they are or, heaven forbid, crank us into reverse and shift our industry backward. Back toward the primordial goop, as two white paddles thwack a white square across a dotted white line. The good old days, when everything was better and we didn't have to worry about what a pesky polygon is. Meanwhile, I'm sitting here wondering how great ray-traced Pong would be.

And so, like much of life, we shuffle off into two camps. Those who adore ray-tracing and the possibilities it brings, and those who abhor it for its demanding ways. 

For those who love it, there's plenty to get excited about. Ray tracing is a huge leap forward for lighting technology in video games. It allows games to achieve a level of realism which was previously unachievable through traditional rasterization methods. Rasterization has served us well, but it falls short in certain areas, particularly reflections, shadows, and ambient occlusion. Rasterization can fake these effects but, as DICE producer David Sirland said, this is "very tricky to get it to play right". Ray tracing simplifies lighting for developers whilst simultaneously looking far better. It's a win all round for game development.
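
The appeal is easy to sketch. At its core, a ray-traced shadow is one honest visibility question per point, "can this point see the light?", which rasterization has to approximate with tricks like shadow maps. Here's a minimal toy illustration in Python (an invented one-sphere scene, purely for flavour, not how any shipping engine is written):

    import math

    def hit_sphere(origin, direction, center, radius):
        """Distance along a (normalized) ray to a sphere, or None if it misses."""
        oc = [o - c for o, c in zip(origin, center)]
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c  # direction is normalized, so the 'a' term is 1
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

    def in_shadow(point, light_pos, blocker_center, blocker_radius):
        """Cast a 'shadow ray' from a surface point toward the light."""
        to_light = [l - p for l, p in zip(light_pos, point)]
        dist = math.sqrt(sum(v * v for v in to_light))
        direction = [v / dist for v in to_light]
        t = hit_sphere(point, direction, blocker_center, blocker_radius)
        return t is not None and t < dist  # something sits between us and the light

    # A sphere hovering between the light and two points on the floor:
    print(in_shadow((0, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))  # True: shadowed
    print(in_shadow((5, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))  # False: lit

Scale that single question up to millions of rays per frame, bouncing through reflections and refractions, and you get both the beauty and the performance bill discussed below.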

A growing number of AAA titles are supporting ray tracing, and support is also beginning to trickle down to smaller games. Control looks absolutely stunning; Cyberpunk 2077 should be a revelation; Quake II RTX breathes new life into one of the most beloved first-person shooters of all time. And this is just the start. Nvidia has opened up its own dedicated Lightspeed Studios specifically geared towards remastering classic games with RTX support. Quake II and Minecraft are the first two on the list, but we don't expect it to end there.

Sony has also finally confirmed the PlayStation 5 will support hardware-based ray tracing. Project Scarlett is expected to follow suit. If you want to pinpoint the moment ray tracing will blow up, it's when those consoles arrive and put a (rudimentary) form of real-time ray tracing in millions of boxes around the world. Then, it's go time.

But, like anything in this glorious Internet Age, real-time ray tracing also has its fair share of detractors who are keen to let everyone know just how much they hate something. The bit that's difficult to dance around is that ray tracing is immensely demanding, and that's before we even start down the road toward full path tracing. Whichever way you slice it, turning on ray tracing features in games is going to dramatically hit your frame rate. Right now, enjoying the feature is prohibitively expensive; you are footing the bill for the bloated die sizes. That's the cost of early adoption, naturally, but year by year, ray tracing will become more mainstream.

On top of this, Nvidia probably didn't overly endear itself to potential customers when CEO Jensen Huang said its customers would have to be crazy to buy a graphics card without ray tracing support. Talking down to your customer base (even indirectly) never goes well, and it probably got folks' backs up from the get-go.

On a similar topic, Nvidia's Morgan McGuire also said earlier this year that he expects the first AAA game to require a ray-tracing GPU will launch in 2023, some four years from now. This is probably the single greatest argument against ray tracing: forced hardware upgrades. With at least four more years to transition, though, it's difficult to envisage this being a genuine problem at that point.

Ordinarily, you'd think this would be a straightforward discussion. Since the inception of gaming we, as gamers, have all been on board with the shift forward in visuals. We love to see games look better and better, and ray tracing provides a genuine generational leap of the type we haven't seen in a long while. But this time there's blowback, and plenty of detractors seem committed to opposing ray tracing, at least in the here and now.

So what are your thoughts? Do you actually think ray tracing is bad for PC gaming? Or is it the ultimate graphical enhancement, one which justifies the purchase of a GeForce RTX 20 series graphics card? Share your thoughts below!


Our favourite comments:

Absolutely in love with it... I wouldn't consider buying a PC without a graphics card that supports it and does its job well FPS-wise. It simply looks too beautiful, and that's just in the few games that support it now. ^^

Razbojnikov

We are PC gamers: if it slows our PC down, we lower the setting or turn it off; if we have a beefier system, we crank up the settings; or we crank it up and then lower the resolution, pull the monitor closer and play in windowed mode. Chill-Out

reddemolisher


Rep 14 · 11:18 Oct-22-2019

Nothing wrong with new tech. What is wrong is being forced to use it, at least so soon after developing it. If a developer doesn't want to bother with the old method due to time, effort, etc. at least make the transition gradual. Not sure 5 years is enough.

Razbojnikov (Rep 23) · 11:08 Oct-22-2019

Absolutely in love with it... I wouldn't consider buying a PC without a graphics card that supports it and does its job well FPS-wise. It simply looks too beautiful, and that's just in the few games that support it now. ^^

reddemolisher (Rep 18) · 09:54 Oct-22-2019

We are PC gamers: if it slows our PC down, we lower the setting or turn it off; if we have a beefier system, we crank up the settings; or we crank it up and then lower the resolution, pull the monitor closer and play in windowed mode. Chill-Out

Rep 386 · 10:01 Oct-22-2019

What happens when ray tracing is no longer an option but a requirement, and we can't turn it off because there's no alternative?

Rep 116 · 10:33 Oct-22-2019

If that happens, it's very likely that ray tracing will be widespread enough that even lower-end GPUs can run it reliably. Rest assured though, no game developer, and definitely not a big one, is going to pick up RT as their only lighting method any time soon, as that would significantly shrink their player base. Real-time RT is still a technology in its early stages, and so far it still relies on traditional methods to fill in the gaps first-gen RT can't handle reliably.

Rep 95 · 11:09 Oct-22-2019

Yeah, these guys want to make money. Or, as others put it, are "greedy". So the only way RT gets shoved down our throats is if Nvidia pays them enough money to do so.

Rep 18 · 12:18-12:40 Oct-22-2019

I have yet to see an actual game implementation (I've only seen gameplay comparison videos on YouTube, nothing else) that makes me go AW $HiT, I NEED an RTX card now. PhysX made me go wow, but we all know how that went. I know, I know, RT is in its infancy, and what we are seeing is basic tweaks to allow some sort of RT implementation: ONLY dynamic shadows, OR global illumination, OR reflections. Purely eye candy that makes me go meh; I won't notice it 5 minutes later and my gaming experience hasn't changed at all. PhysX did affect it to a certain extent when used superbly, as it was in Assassin's Creed Black Flag and the Batman Arkham series, even in Metro Last Light and 2033, but that's about it. It was never really used to its true potential. Now sure, realistic graphics are always going to blow our minds, but at the cost of significant frame rate, while not really looking that different? So until we reach something that blows my mind, I ain't running out to get an RTX card. Heck, if anyone can tell me a game where PhysX is really utilized like a game changer I'd appreciate that, as I have the hardware to run it now. And like TheEmperor96 said, we are a long way from it being the norm, and the day an entire game runs on RT would be a day when we have crazy computing power at our disposal, like 4K 60-90 FPS high or ultra on an entry-level potato machine, because that's the level of computing power needed to run full RT at 1080p 60 FPS low-medium. At the end of the day, what's easier to sell: ray-traced realistic shadows/reflections, or 8K 120 FPS? Just look at the PS5; I've heard 8K mentioned more than ray tracing by the PS5 architect.

Rep 386 · 13:09 Oct-22-2019

8K is just the display output bandwidth; it's not going to be for games, or maybe just a couple of small games.

Rep 386 · 13:17 Oct-22-2019

See, the problem is that low-end GPUs will still have ray tracing cores, which means that instead of more CUDA cores you get ray tracing cores. And ray tracing cores are Application-Specific Integrated Circuits (ASICs); they have very limited usage.

On top of that, ray tracing requires the GPU to render the entire environment around you and keep it in memory at all times for all the lighting, shadows, reflections, refractions, shading and so on to work. So not only is the method more demanding on its own, it is also a lot more demanding due to rendering a ton more for it to work.

Rep 386 · 13:19 Oct-22-2019

And visuals don't make a game ground-breaking; good gameplay, mechanics, level design and a story with a good amount of content, at good quality, do. Crysis is literally a great example of that. Crysis 1 was loved not because of its graphics but because it was genuinely very good: it had unique and innovative gameplay, great level design, great gunplay, great missions, it was long, and in general it was amazing. Both Crysis 2 and especially Crysis 3 had much better graphics, but they weren't nearly as good as Crysis 1, so they sold less, and they were not even close to Crysis 1 in terms of quality and quantity.

Rep 386 · 13:20 Oct-22-2019

If anything, at the time people complained that Crysis 1 couldn't run on most PCs. Not much different from now: when a game doesn't run well on an RX 5700 or RTX 2070 with an R7 3700X or i9 9900K, people just complain and complain.

Rep 18 · 12:44-12:56 Oct-23-2019

This is a dumb question, more like a statement, as it's pure economics and no one would buy it due to the pricing and utility value. But couldn't a standard graphics card design fit MORE CUDA cores at faster clocks over the larger die (as the RTX die is crazy huge), and then have a separate card with the RT cores on it, and the AI cores as well (or maybe skip them, because upscaling, really?), and then essentially give the ability to run it in a dual graphics card mode? And as it's only RT cores, even more cores and much better performance. Heck, make the thing backward compatible and let me plug it in with my GTX 1080, or let it be compatible with AMD cards and hijack their cards like they hijacked AMD FreeSync. Sure, like I said, a card like that would be crazy expensive and no one would really buy it due to the few games that support it and its exorbitant price. But then Nvidia could in theory have a GPU capable of 4K 60-100 FPS and an RTX card capable of providing some killer visuals!! Heck, I know many animation studios would drop tons of money for RAY-TRACING-ONLY CARDS; imagine how much it would lower the cost and time of movie production. I've seen a single 2K frame render for hours; if it could bring the time down by even 30%, that would still be a huge advantage. Sure, it may seem absurd to buy a whole other card just for a single graphics setting, but those who remember Nvidia PhysX will know that people still did it.

Rep 386 · 13:38 Oct-23-2019

Dude, you went all over the place. XD

OK, from what I'm getting, you are asking if ditching the RT cores and Tensor cores for more CUDA cores would result in better performance and scaling? The answer is yes and no. It would give much better general-purpose GPU performance (I say general-purpose GPU performance as GPUs are ASICs too, but much less so than ever before) and would be much better than having RT and Tensor cores.

The thing is, though, CUDA cores consume a bit more power than RT cores and especially Tensor cores, so a 754mm^2 die like the RTX 2080 Ti's would consume more power and put out more heat, probably a 350W TDP card, which is at the upper limit of standard GPU air cooling. But a 650mm^2 die GPU at 12nm should be 250-275W TDP.

Rep 386 · 13:40 Oct-23-2019

And the problem with dedicated RTX cards for companies is that Nvidia are absolute arseholes to work with, and unlike normal GPUs, Tensor cores and ray tracing cores don't have such wide and vague patents as regular GPUs. On top of that, Nvidia is not the first to have ray tracing and neural network ASICs, so they can't monopolize them; many other companies can make their own AI and ray tracing ASICs, and many have.

The biggest example being Tesla. Nvidia tried to milk them with their AI chips, but the folks at Tesla were no fools: they ditched Nvidia altogether, made their own AI chips that are much better than Nvidia's, and are now using them.

Otherwise, for regular consumers, dedicated ray tracing cards are a better solution, but we don't need ray tracing at all.

Rep 386 · 13:44 Oct-23-2019

Designing ray tracing ASICs, and ASICs in general, is much simpler, cheaper and faster than designing GPUs or CPUs. Just like bitcoin ASICs. Ray tracing has very generic instructions, and very few at that, so it's not like it's expensive to develop. AI is harder, but again, the math and logic behind it is not even at the complexity of GPUs, which are themselves much simpler than CPUs.

Back to GPUs (my turn to go all over the place). A 650mm^2, 12nm (hypothetical) GTX 2080 Ti with ONLY CUDA cores (SMs, to be more accurate, as there are more things in them than just CUDA cores) would be at least 40% faster than a GTX 1080 Ti at 417mm^2, considering the GTX 2080 Ti wouldn't clock as high as the GTX 1080 Ti.

Rep 386 · 13:46 Oct-23-2019

Though sadly, due to Amdahl's law, going above ~6000 cores for GPUs starts to show big diminishing returns, as even graphics workloads are not 100% parallel due to overhead, latencies, stalling and, in general, graphics not being truly 100% parallel.

That's why AMD went for higher-transistor-count stream processors (CUs) for Navi, and thus higher IPC, instead of adding more cores, and that's why Nvidia went for higher IPC with Maxwell and higher clock speeds with Pascal and Turing instead of adding more cores.

Also, when it comes to ray tracing, I remember the 5120-core Volta (Titan V) being about as good as the RTX 2080 Ti in ray tracing games, so you can definitely brute-force through ray tracing if needed. XD
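
For anyone who wants to poke at these numbers themselves, Amdahl's law is a one-liner. A small Python sketch; the 95% parallel fraction and the core counts are illustrative assumptions, not measurements of any real GPU:

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
    # fraction of the work that can run in parallel and n the core count.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Even a workload that is 95% parallel tops out quickly:
    for n in [1, 16, 1024, 6000, 12000]:
        print(f"{n:>6} cores -> {amdahl_speedup(0.95, n):5.2f}x")
    # 12000 cores comes out barely faster than 6000: the diminishing
    # returns described above.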

Rep 18 · 08:52-09:12 Oct-24-2019

Interesting. I was always under the impression that graphics scalability was rather simple: number of cores + clock speeds. Sure, I knew that certain types of cores are more jack-of-all-trades while others are much more specialized, super fast at those specific tasks and flat-out bad at the rest, but I wasn't aware of Amdahl's law; I literally had to Google it just now. I was also aware that as architectures improve, certain cores can perform way better through IPC improvements and manufacturing (nm) improvements, but this is news. Is that why multi-GPU sucks so much? I was always under the impression it was simply optimization; sure, you can't get 100% of the second GPU's performance simply down to communication between the cards, but not even 80%? I've seen multi-GPU performance be really meaningless, and as you add more cards the scaling gets even worse. I've seen a 4-GPU system, and as the owner added cards, the gain from each additional GPU kept shrinking. That being said, this was about 2012, I guess; I think AMD had the 7000 series at the time. I wanted to go multi-GPU, the shopkeeper demoed it on his system to show why we should avoid multi-GPU, and I ended up picking the HD 7950, which was later rebranded to the R9 280X (I guess). Is this why we keep seeing so many architectural changes in GPUs, as opposed to CPUs? To try and get an IPC boost?

The next GPU I'd consider would have to do 1080p at 120 FPS ultra, stable, no issues; sure, a dip to 100 is realistic, but that's it. Then again, with next-gen consoles about to register a massive push, I think game engines will make that really difficult to achieve. I really want to get myself a high refresh rate monitor, hopefully a 32:9 super ultrawide, so exactly half of a 55-inch 4K TV. I'm going to need a lot of pixel rendering power for that BEAST.

Rep 386 · 11:26 Oct-24-2019

Multi-GPU sucks for tons of reasons. The first reason is that the GPUs are connected through the PCI-E lanes and chipset. They can't have a direct connection like, for example, Zen has with its CCXs and chiplets: first, that would require the die to extend to the edge of the card with those connections; second, the distance is huge, so the link would be poor too; and third, it would be way too expensive.
The second reason is poor multi-GPU optimization, as it's NOT handled entirely on a hardware level, so software plays a big part in optimizing.
Third, yes, when you reach such huge thread counts you start seeing a lot of diminishing returns.

Rep 386 · 11:30 Oct-24-2019

Overall we need stronger cores for both CPUs and GPUs, NOT more cores. And sadly AMD is at this point selling us CPUs that are made for servers, not for desktops... After the R7 1800X/R7 2700X we needed bigger cores, not more cores... but nope. The current 8-core chiplets are tiny, just 80mm^2, and that's with double the L3 cache, but without the I/O... Zen 2 is a server architecture design that's also competitive on the desktop...

Same with Intel, except they kept 4 cores until 2017. The Sandy Bridge i7 2600K has a 255mm^2 die size; the i3 8100-8350K has a 110-115mm^2 die size.

Rep 18 · 08:28-08:42 Oct-25-2019

Zen 3 is rumored to have 4 threads per core? Could it be possible that with Zen 4 or Zen 5 they improve the quality of the cores, or will we simply have to wait for a whole new architecture? At this point I am quite significantly out of my depth in CPU core knowledge, but I'm guessing what you're saying is that Intel cores are so good because they are huge? I'm guessing they are huge because of their desktop architecture philosophy of more, faster processing rather than more simultaneous processing, which would make sense as to why their performance changes so dramatically with clock speed; their processor size is an effect of that and not a cause. No doubt they have tried to make multi-core CPUs on singular dies. Now, if I understood you correctly: if AMD eventually moves to a smaller manufacturing process (5nm?), would it be possible that AMD, rather than increasing the number of cores like they did in the jump from 14nm to 7nm by going from 4 cores per chiplet to 8 cores per chiplet, would instead keep the 8 cores and try to improve their single-core performance? That being said, isn't AMD's current single-core IPC comparable to that of Intel? That, or have I understood something wrong?

Rep 386 · 11:11 Oct-25-2019

It's not just bigger cores. And IPC is NOT a single metric; it's a combination of how many of each x86-64 instruction the CPU can do per cycle, plus cache bandwidth and latency.

If Zen 1 had as good optimization in compilers, frameworks, engines and so on as Intel's architectures do, it would have higher IPC overall than Coffee Lake, excluding AVX-512, encryption and L1 cache bandwidth.

And Intel's cores have fewer transistors than Zen's; they are just bigger due to Intel's 14nm+++ being bigger than TSMC's 7nm. When AMD was on GloFo's 12/14nm, Zen 1 and Zen+ had about as big cores as Coffee Lake, because they lacked 256-bit FPUs and used more 128-bit FPUs.

Rep 386 · 11:14 Oct-25-2019

More of the same execution units, plus optimization of the architecture to compensate for the extra execution units (branch predictors, schedulers, out-of-order execution, etc., etc.), makes a bigger, better core.
But then there is the other part, adding new instructions, which again increases core sizes, but not necessarily general-purpose performance, as it takes at least 3-5 years for new instructions, or even improved versions of previous instructions, to be utilized by compilers, engines and developers as a whole.

For example, both 256-bit AVX and 512-bit AVX-512 went unused for a number of years outside of synthetics and some custom vendor software.

Overall, bigger cores (as in higher-transistor-count cores) = better cores, when done right and when not introducing new instructions.

Rep 386 · 11:17 Oct-25-2019

AMD's clock speeds are entirely limited by the process node (Global Foundries' 12/14nm and TSMC's 7nm); the transistors of those process nodes are just too slow.
People may bash Intel's 14nm+++, but it's only 20% bigger than TSMC's 7nm and only 15-25% less efficient, while being faster and scaling better.

AMD currently has a big decoder bottleneck in their Zen 1 and Zen 2 CPUs, but instead of solving it and getting 33-50% higher integer IPC, they are just keeping the same transistor count per core and shrinking the die through new process nodes. Doubling the L3 cache was a dramatic increase in transistor count, but that's primarily for server use.

Zen 2 is for all intents and purposes a server CPU first and a desktop/regular consumer CPU second.

Rep 386 · 11:20 Oct-25-2019

And Zen 3 having 4 threads per core just further cements that AMD cares about companies (servers, basically) and not the regular consumer. I'm not surprised; that's where the majority of the money for CPUs is, and soon for GPUs too...

Companies don't need single-core performance nearly as much, if at all, compared to regular consumers.

And I hope that Zen 4 will have at least solved the decoder bottleneck, but I doubt it; that's most likely why they are designing Zen 3 to have 4 threads per core.

Rep 386 · 11:21 Oct-25-2019

And what sucks is that AMD's SMT is fake and castrated. With IBM's SMT, for example, each extra thread gets you 90-100% extra performance, as they have true SMT with true superscalar pipelines that work in parallel, with advanced scheduling to fill the empty pipeline stages.

What AMD and Intel have is just the advanced scheduling that fills the empty pipeline stages. Instead of making it hardware-level, they use a simpler approach where the BIOS and OS kernel see the extra schedulers and scheduling as extra threads. AMD's (fake) SMT gives up to 30% higher performance per core, while Intel's Hyper-Threading (they didn't call it SMT, as they knew it's not) gives up to 25% higher performance per core.

And in general, single-core performance should be tested with the (fake) SMT/HT included.

Rep 386 · 11:25 Oct-25-2019

Now, AMD could surprise us and make Zen 3 have true SMT for at least one of the threads, with the other two being advanced scheduling for the first two, but I doubt it. The way I described it would mean that 1 core has 1 thread that gives 90-100% extra performance and 2 threads that give up to 30% extra each, making one Zen 3 core (with all its threads), assuming the same IPC and clock speeds as Zen 2, some 2.0-2.6x faster than one Zen 2 core (with all its threads).
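
Those multipliers are easy to sanity-check. A rough tally in Python, using the comment's own percentages (the baseline of 1.0 for a single bare thread is an added assumption):

    # Per-core throughput, in units of one bare thread:
    zen2 = 1.0 + 0.30                # 1 thread + 1 scheduling-style SMT thread
    zen3_lo = 1.0 + 0.90 + 2 * 0.30  # 1 thread + true-SMT thread + 2 scheduling threads
    zen3_hi = 1.0 + 1.00 + 2 * 0.30
    print(zen3_lo / zen2, zen3_hi / zen2)  # ~1.92x and ~2.00x
    # Against a Zen 2 core with its SMT thread counted, the gain lands
    # nearer 2.0x; the 2.6x figure needs a Zen 2 baseline with SMT off.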

Rep 386 · 11:27 Oct-25-2019

But again, we have no hardware-level parallelism yet (IBM is working on it, for example, along with some third-party engineering companies that are not fortunate enough to make their own CPUs, but do designs that other companies license or even straight up borrow), so software optimization is needed for those extra threads to be used as if they were extra cores, which means Amdahl's law hits much harder.

Amdahl's law, when applied to CPUs, GPUs and other ICs, assumes perfect hardware-level parallelism, so for CPUs especially, adding the software optimization factor means the diminishing returns are even worse than they should be.

Rep 18 · 07:59-08:01 Oct-30-2019

Thanks mate, this was very educational; I wasn't aware of real and fake MULTITHREADING. But how come Intel, after coming up with Hyper-Threading quite some time ago, hasn't come up with TRUE SMT yet, or something near it, or the next step of Hyper-Threading? Or are they only aiming for incremental updates each generation to get closer to it?

Rep 386 · 08:47 Oct-30-2019

Hyper-Threading and AMD's (fake) SMT are just half of the scheduling of real superscalar SMT.
And I doubt we will see true SMT any time soon; even if we do, we first need hardware-level parallelism. Then again, above 16 threads is pointless for 99.9% of software, and for games absolutely; Amdahl's law can calculate it. Even 16 threads won't give 100% scaling, as the sketch below shows.
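
For a concrete number, the same Amdahl arithmetic as earlier in the thread, with an assumed 90%-parallel game workload (the 90% is illustrative, not measured):

    p, n = 0.90, 16                        # parallel fraction, thread count
    speedup = 1 / ((1 - p) + p / n)
    print(f"{speedup:.1f}x on {n} threads")         # ~6.4x
    print(f"{speedup / n:.0%} scaling efficiency")  # ~40%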

Rep 2 · 07:53 Oct-22-2019

It's not a bad idea. It's one of the few advancements of PC gaming that differentiates it from console gaming, and without those advancements we won't get to see another generation of consoles. It's PC gaming that truly pushes the boundaries that then get applied to the next generation of consoles.

Rep 9 · 21:12 Oct-21-2019

Don't care about it that much... my current GPU can't support it and I can't afford a new one, so I'm just happy that I'm able to play games at all.

Rep 105 · 20:49 Oct-21-2019

I think ray tracing is good; I mean, hardware has to get faster and stronger, and so do the games that should take advantage of that hardware. On the other hand, I think a game requiring ray tracing to even start shouldn't be a problem in 2023, but if it happens in 2021 then I will be mad af, because I don't plan on upgrading my whole PC for the next 4 years, if my PC and I manage to survive xD.

Rep -25 · 17:25 Oct-21-2019

I tried Quake 2 with ray tracing. It was fun for about an hour, comparing with and without the ray tracing, but after that I never cared at all.

Rep 30 · 17:49 Oct-21-2019

That's the thing with improving technology. After a short period, it becomes the norm and you don't notice it as much. It does become VERY noticeable however, when you go back to older tech.

Rep 272 · 19:57 Oct-21-2019

I'd attribute it to...say...having played the game already, possibly many times over the years. I enjoyed the looks too, but Quake is Quake. Now Minecraft raytraced (the SEUS PTGI mod) - that gets a LOT of use on my end and looks amazing.

Rep 14 · 17:01 Oct-21-2019

I believe that as long as exclusive hardware dedicated to ray tracing is necessary, it will not be adopted on a massive scale.

Rep 356 · 10:50 Oct-21-2019

Of course any new technology is welcome on PC, but only as long as we also keep the focus on smoother gameplay. 144Hz is a must!

Rep 17 · 11:21 Oct-21-2019

There is a place inside a game's menu that you might have heard of before. Yes, it's the graphics menu; there you can tweak the visuals to get your 144 FPS. Try it!

Rep 386 · 12:59 Oct-21-2019

Try that in 5 years, when you won't be able to play games without a ray tracing GPU and ray tracing will be the only option.

Rep 105 · 20:50 Oct-21-2019

lol

Rep 272 · 09:35 Oct-21-2019

Ha, I see what you did there with the topic, Jon (following a discussion on Discord) :)


Raytracing is obviously only a good thing. When it's implemented as a toggle - people should really not have anything to be angry about. Use it if you like it - don't if you want your game to run faster. Win-win. So far I've enjoyed it in games like Minecraft and Quake, so there's that :D

Rep 13 · 08:42 Oct-21-2019

I don't think it's bad, but I also don't think it's great where it is now. Right now the performance hit you take is too great to justify the graphical enhancements most of the time. Also, you need a really expensive GPU to make it even a little worth it. I would wait for the next gen or the gen after.

Rep 17 · 03:19 Oct-21-2019

Raytracing itself is neither good nor bad... by itself it is just tech, completely neutral. Depends on the user to make something good with it.


What I don't like is the way nVidia went about it, though.

Rep 272 · 09:37 Oct-21-2019

The way they were the first to introduce the tech that finally brings us real-time raytracing in games..?

Rep 106 · 23:53 Oct-20-2019

It is ahead of its time; we still need the hardware to catch up, but it definitely is a good thing to push for, since we hadn't had many advancements in graphics lately until ray tracing came out.

Rep 76 · 22:10-22:20 Oct-20-2019

With ray tracing, people really need to distinguish between ray tracing and nVidia RTX. Ray tracing is great for games: it really allows them to look great more consistently, without depending on the hacky solutions developers need to build and tune to achieve similar effects with rasterization. And it does remove quite a lot of technical work and allows developers to focus on the creative process of making games. That is good; with ray tracing we open a lot of possibilities in how games are made, or perhaps even elements which weren't possible before. However, the bad one here is nVidia's RTX implementation. Firstly, it is bad because it is fully proprietary, so it only runs on a limited number of cards. Secondly, it tanks your performance, because it is a very early implementation of the technology. Thirdly, it came out as an excuse to hike prices, and I personally don't think it was aimed as much at gamers as nVidia would have you believe, but more towards creators. But since nVidia is a gaming company, they had to put a gaming spin on it, though they are desperately trying to be more than a gaming company. Still, performance is heavily affected, even with the limited amount of ray tracing games do today, and you really need to drop to 720p with a 2060 for it to be viable. The only 1080p 60FPS ray tracing card they have is the 2080 Ti, and I would dare to say you must be insane to buy a 2080 Ti to play at 1080p. Fourthly, even developers don't like it, because they still have to do quite a bit of work; it is not that "it just works" thing. There was already one game that canceled its ray tracing implementation, but I just can't find which one it was... Google is flooding my searches with Minecraft results. Anyway, what developers are waiting for is a more open implementation they can use, not this heavily proprietary stuff from a company that likes to push people around. And when we get that, I doubt current RTX cards will run it well.

Rep 8 · 23:02 Oct-20-2019

I played Metro Exodus all maxed out with ray tracing at 1080p with around 80-90 FPS constantly.

Rep 76 · 21:55 Oct-21-2019

This changes nothing, really: you paid $700+ for a card normally used for high-FPS 1440p or even 60 FPS 4K, to play at 1080p, provided you didn't turn on DLSS. Is it just me, or is there a bit of an issue here, since you are doing something that used to be the domain of $300, maybe up to $400, cards? Just saying. Plus the visual difference varies a lot, from none to small. Just saying.

Rep 272 · 12:50 Oct-22-2019
12:50 Oct-22-2019

The value is in the eye of the beholder. If you don't like it, you won't find value in it, and you'll look at it through the eyes of that person (which is what you're doing). If you like it, you'll see the value.

Rep 1 · 21:45 Oct-20-2019
21:45 Oct-20-2019

Imagine if this article was about GPU texture mapping or programmable shaders:
"Are textures bad for PC gaming?"
or
"Are shaders bad for PC gaming?"

Rep 31 · 21:33 Oct-20-2019
21:33 Oct-20-2019

The technology is good, but we are still lacking in the hardware dept.

Rep 95 · 21:20 Oct-20-2019

I think it's only bad if you think graphical development is better off focusing on something else. Maybe there is something better than ray tracing. But at the very least it takes the focus off ever-increasing resolution.

Rep 386 · 21:27 Oct-20-2019

It seems most people think so. Graphics, animations and visuals as a whole are the reason why games cost hundreds of millions, and why we get so much meaningless, bad, filler, time-waste content and so little meaningful, good content.

It is also why companies don't innovate, as it's a huge risk with those huge budgets. Gameplay and game mechanics have been stagnant for over a decade now, same with level design; if anything they are watered down, simplified and mainstreamed so more people will play...

On top of that, these high budgets are why we get microtransactions, DLC, loot boxes and other bullcrap monetization.

Games are there for the gameplay and mechanics, not for visuals.

Rep 386 · 21:35 Oct-20-2019

I will give you an example:
CoD4 had 13 programmers, 17 game designers, 18 artists, 9 animators, 4 audio engineers, 3 producers and 7-8 voice actors; that's 71-72 people.

CoD Black Ops 4 had about 300 artists from Treyarch alone, let alone Raven Software and the 3 other studios involved (not counting Beenox, as they just ported the game to PC)... Programmer count? About 40-50...
And this is a game without a campaign/singleplayer...
About a hundred producers and directors as well for BO4...

Rep 95 · 02:14 Oct-21-2019

To each his own. There was certainly a time I would have said visuals are unimportant.
Now though, gaming for me is more than just playing a game. It also scratches my itch on tech, which goes hand in hand with advancements in hardware and graphics. Plus I play more single player and to relax, making graphics and immersion as important as gameplay.

Rep 95 · 02:21 Oct-21-2019

Not sure I agree on the correlation with microtransactions. Fortnite ain't exactly a graphical achievement.
The correlation with making a game "mainstream" is also hard for me to see. Even a game with simple graphics faces the temptation, or question, of whether to be more mainstream or be revolutionary.

Rep 386 · 10:33 Oct-21-2019

Now, greed is a big part of it. But if games had cost $90-120 since 2012-2013, there would be serious backlash over microtransactions.

And Fortnite didn't have microtransactions until it went free-to-play with the battle royale mode. Those microtransactions are sadly normal for F2P games, but the battle pass system in Fortnite is the least terrible microtransaction system to date. You can literally earn it back by playing the game; just pay once for it and get all the future ones for free with enough game time. Even CoD is implementing it now.

Rep 386 · 10:37 Oct-21-2019

Otherwise, as game dev costs increase and the price stays static at $60, they will have to cut up the game for DLC, introduce microtransactions and loot boxes.

For example, Square Enix lost money on Rise of the Tomb Raider and Shadow of the Tomb Raider, even though both sold a few million copies in their first year. They barely broke even with Tomb Raider 2013. Keep in mind that the average price for a AAA game in its first year is $45-50, not $60, as not everybody gets it at launch or in the first couple of months, and unlike Steam, game prices on consoles start going down after the second month.

Rep 386 · 10:39 Oct-21-2019

Sales beyond the first 6 months don't matter much either. Rise of the Tomb Raider, as of late 2018, had sold only 3.5 million copies over a 3-year span... That's just not enough to cover a game made by 300+ people with a minimal (bottom 20%) average developer/employee cost of $10,000 a month ($120,000 a year) per person over a 2.5-year development cycle, and then have some money left over for the next, more expensive game.
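
The rough math behind that claim, using the comment's own figures (the ~30% platform/retail cut is an added assumption):

    staff, salary, years = 300, 120_000, 2.5
    dev_cost = staff * salary * years              # $90M in payroll alone
    copies, avg_price, cut = 3_500_000, 47.5, 0.30
    net_revenue = copies * avg_price * (1 - cut)   # ~$116M back to the publisher
    print(f"payroll ${dev_cost/1e6:.0f}M vs net revenue ~${net_revenue/1e6:.0f}M")
    # Payroll alone eats most of the take, before marketing, licensing,
    # offices, hardware and publisher overhead: the comment's point.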

Rep 386 · 10:42 Oct-21-2019

Otherwise, I used to care about graphics too, at the bottom of my priority list, but still I did, until I found out that thanks to them games cost hundreds of millions of dollars...
I used to think that, just like GPUs and CPUs, graphics got better while staying at the same or even lower price as time went on, because of all the improvements in graphics tools. I was wrong.

And games are meant to be played, not watched; that's what movies are for. We as gamers NEED them to scale game budgets back down to mid-2000s levels, or we will keep getting simpler, blander, more watered-down games with each sequel or new IP.

Rep 386 · 10:44 Oct-21-2019

The majority of games have barely, if at all, improved since the mid-to-late 2000s, be it sequels or just games similar to those back in the day. Only the graphics have, while the quality and quantity of content has suffered. And as I said above, gameplay and mechanics have barely improved. There are exceptions of course, but in general they haven't.

All for what? Graphics? Nah... but again, people prioritize graphics for some reason, as if graphics make a game better. :/

Rep 95 · 14:16 Oct-21-2019

Well like I said, to each his own! I certainly have things I would rather devs not spend time/money/effort on.
Couple things tho:
What graphical advancements contribute the most to rising cost? I'm thinking ray tracing might reduce man hours once it's mainstream.
On a related note, there might need to be a distinction between increasing budgets for graphics and graphical advancements.

Rep 95 · 14:19 Oct-21-2019

Games can certainly be jaw-droppingly gorgeous without being cutting edge just because of the sheer amount of time, effort, and money spent on its artistic aspects. And the reverse can be true as well.

Rep 386 · 14:34 Oct-21-2019

Higher-quality graphics take more time to be drawn/sketched or whatever in the 3D modeling software, animated and integrated, and they take more advanced and complex engines to run. It's like drawing an anime character vs drawing a realistic, fully detailed human, except even harder and slower.

And many people don't seem to pay attention to how many visual objects there are in modern games compared to older games. Nowadays there will be hundreds of objects on screen at a time; back in the day it was basically a tunnel, or if it was open world it was flat or almost-flat ground with textures and textured rectangles, unless it was a more important building, in which case there was some depth to it.

Nowadays, modeling a can of gas would take longer than modeling a whole building did 15 years ago.

Rep 386 · 14:37 Oct-21-2019

And sadly the tools for making graphics haven't gotten much better. Or should I say, they have gotten much better, but graphics have outpaced them tenfold.

As I said, nowadays there are hundreds of artists, animators, motion and movement engineers and cutscene artists, whereas back in the day 10-30 were enough for an entire AAA game.

And I agree: I personally prefer good art style and graphical design with unrealistic, lower-quality, technologically worse graphics over realistic graphics, even if they get realistic graphics to be indistinguishable from real life. If anything, realism shouldn't be the golden standard in graphics; creative and unique design and art style should be.

But that said, the general public cares only for technologically advanced graphics.

Rep 386 · 14:38 Oct-21-2019

And ray tracing won't make it faster to light up a scene at all; if anything it will be slower, until some automated system based on AI or algorithms is made. But the same can be said about the current fake methods of lighting, reflections, shading, etc., etc.

Rep 386 · 14:41 Oct-21-2019

Nvidia's demo of "just plop the light source in and it's done" is just not realistic for anyone who wants to make good-looking environments. You can do that in the Source engine too, and yeah, it won't be in real time (it's the Source engine after all), but it will render it and boom: light source with reflections and shading. But it won't look good. The same will be true with ray tracing, if not harder, as it's more complex: it makes the surface properties of textures matter much more, and objects become light sources of their own instead of light just bouncing off them, and what-not. I haven't used ray tracing and I don't know the details in depth, but it's far more complex than what we currently use.

Rep 116 · 15:10 Oct-21-2019

You don't know the details in depth, by the looks of it, not at all. It is very interesting technology and I'd advise you to actually look into it more. Real-time RT can offer faster, better, likely even easier-to-implement lighting than what we use now, even though the technology behind it is a lot more complex. Not only that, it can also cut down on the work of, so to say, forcefully simulating the intricate and small details that would generally take a lot of time.

Rep 386 · 21:27 Oct-21-2019
21:27 Oct-21-2019

I don't know the details indeed, but Nvidia's demonstration of just putting a light source in and being done with it is unrealistic.

Rep 116 · 22:07 Oct-21-2019

It really isn't, once you know how it works. The presentation is enveloped in marketing speak, but it's not that far removed from the actual use real-time RT can provide.

Rep 83 · 20:46 Oct-20-2019
20:46 Oct-20-2019

I love it, but the GPUs aren't quite there yet. My MSI RTX 2080 Ventus can't run it well in Shadow of the Tomb Raider or Control; I leave it off for those games, as I'd rather have a constant 60+ FPS and all the other eye candy on.

Rep 8 · 19:54 Oct-20-2019
19:54 Oct-20-2019

I just hope AMD makes a separate card with dedicated ray tracing cores, so we don't have to have them built directly onto the GPU die. Then an Nvidia/AMD/Intel card would work with an AMD ray tracing card next to it, making it a little less proprietary. If they do this I would possibly jump onto the ray tracing ship sooner, but for now it's just too expensive for what you get, when the only card that can run it well enough is the RTX 2080 Ti. Let's hope :)

Rep 80 · 20:59 Oct-20-2019
20:59 Oct-20-2019

I actually really like that idea.

Rep 386 · 21:16 Oct-20-2019

Yup, that's what I suggested too when I first heard about ray tracing, but back then consoles weren't supposed to have ray tracing and nobody was talking about ray-tracing-only games starting in 2023... But at least until then, sure.

Rep 93 · 22:01 Oct-20-2019
22:01 Oct-20-2019

This does seem like a much better method.

Rep 11 · 19:54 Oct-20-2019
19:54 Oct-20-2019

Anything that makes games more realistic is good. Funny that many people were complaining there hadn't been any advances lately; then when they got one, they complained about it because of performance, which will get better with time.
