Intel Core i9-9900KS offers 5.0 GHz all-core boost across 8 cores, billed as the world's fastest gaming CPU

Written by Jon Sutton on Mon, Oct 28, 2019 5:33 PM

Intel has officially announced the Intel Core i9-9900KS 8-Core 4.0GHz, a special edition processor which will be heading to store shelves from October 30th. The high-end chip will be priced at $513, and it'll be a limited edition part, so if you want in you may want to make up your mind early.

The Intel Core i9-9900KS is based on the exact same 14nm+++ Coffee Lake process as the current Intel Core i9-9900K 8-Core 3.6GHz, sporting 8 cores / 16 threads. Intel has simply picked out the best-binned silicon and reserved it for the Core i9-9900KS. That is to say, the Intel Core i9-9900KS has guaranteed faster stock performance and should be capable of faster overclocks than a typical Core i9-9900K.

Users will benefit from a sizeable 400 MHz bump to the base clock speed, albeit at the cost of guzzling down some extra juice with its 127W TDP. The 9900KS has the same 5.0 GHz boost frequency as the 9900K yet it achieves this across all cores, while the standard 9900K can only achieve 5.0 GHz across two cores. 
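Back-of-the-envelope, the headline bumps are modest in percentage terms. A quick sketch using spec-sheet clocks (the 4.7 GHz figure is the commonly quoted stock all-core boost of the standard 9900K; real-world gains vary by workload and cooling):

```python
# Percentage gains of the 9900KS over the 9900K, from spec-sheet clocks.
k_base, ks_base = 3.6, 4.0          # GHz base clocks (9900K vs 9900KS)
k_allcore, ks_allcore = 4.7, 5.0    # GHz all-core boost clocks

base_gain = (ks_base - k_base) / k_base * 100
allcore_gain = (ks_allcore - k_allcore) / k_allcore * 100
print(f"base clock: +{base_gain:.1f}%")        # +11.1%
print(f"all-core boost: +{allcore_gain:.1f}%") # +6.4%
```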

The Intel Core i9-9900KS should also have great overclocking capabilities, although again this is going to come down to the quality of the specific chip a user ends up with. Whatever the case, 5GHz all-core boost frequency on 8 cores is formidable, and at $513 this processor is only about $20 more than a typical Core i9-9900K retails for.

Intel’s direct competitor at this price is AMD’s Ryzen 9 3900X. Priced at $499, the Ryzen 9 3900X benefits from an additional four cores, rocking 12C/24T. However, it also has lower clock speeds, with a 3.8GHz base and 4.6GHz boost frequency. For heavily threaded applications this should mean the Ryzen 9 3900X is still the performance champion, although the lightning-fast clock speeds should make the Intel Core i9-9900KS an absolute beast in terms of gaming performance. If PC games are your priority, this CPU right here should be the best of the best.

The Intel Core i9-9900KS will be available to order from this Wednesday, October 30th, priced at around $513.

What are your thoughts on this imperious new processor then, are you tempted to drop five big ones on ensuring you have the fastest gaming CPU around? Or is the Ryzen 9 3900X still the value-for-money champion with its eight extra threads? Let us know below!

Our favourite comments:

Basically you just pay more for better binned 9900K, which you technically can overclock. Though it makes me wonder how hot KS gets. Still this is CPU for fairly niche market of people who walk into store and say "give me the best, money is not an issue". Otherwise there are better alternatives in terms of bang for buck and you really need 2080Ti at 1080p to see the difference.

Seth22087


Rep
5
Offline
07:58 Dec-13-2019

I got my 9600K to 5GHz. It's amazing. I can't see needing much more than that, though...

0
Rep
-1
Offline
02:01 Oct-30-2019

so this is where their binned cpu goes

0
Rep
1,041
Offline
senior admin badge
11:52 Oct-29-2019

A 5GHz 8-core/16-thread CPU is the minimum I'd expect as a step forward from my 4.3GHz 6-core/12-thread i7;
time will tell when the upgrade happens :)

0
Rep
386
Offline
admin approved badge
12:35 Oct-29-2019

Nah, 5GHz is pointless, it's way too hard and expensive to make such fast process nodes.
3-4GHz is good, and increase the IPC to the heavens.


At this point transistors can't get much faster without them completely dumping their process nodes and starting from scratch (which would take years and a dozen billion dollars), and even then we don't know how much faster they'd get.

4
Rep
1,041
Offline
senior admin badge
12:39 Oct-29-2019

Maybe, but come on, 5GHz sounds cool at least.

0
Rep
386
Offline
admin approved badge
12:57 Oct-29-2019

Yeah, so many people are fascinated with that stupid 5GHz mark... it's annoying, pointless and meaningless... You saw what happened when Intel was fascinated with the 10GHz mark... the Pentium 4 happened, and it didn't even hit 5GHz. XD

4
Rep
1,041
Offline
senior admin badge
13:27 Oct-29-2019

on the other hand Pentium 4 HT was LIT AF - the only cpu I saw burn IRL

1
Rep
386
Offline
admin approved badge
14:07 Oct-29-2019


I've seen it happen in real time. When I was in the 6th grade, one of the school PCs was so dusty, with a stock cooler and a Pentium 4 HT, that it literally started smoking. :D

3
Rep
160
Offline
09:15 Oct-29-2019

Unless you're playing at 720p low with a 2080 Ti, what's the point of this CPU?

0
Rep
164
Offline
09:57 Oct-29-2019

Why 720p low?
A lot of people play at 4K ultra.

0
Rep
386
Offline
admin approved badge
10:40 Oct-29-2019

The idea is that at 720p low settings this CPU might get higher FPS than other CPUs.

0
Rep
17
Offline
admin approved badge
17:50 Oct-29-2019

I play at 4K ultra, mostly CPU-heavy games like WoW. My 1080 Ti FTW3 maxes out its usage more than my Threadripper 2950X does lol.

0
Rep
386
Offline
admin approved badge
17:54 Oct-29-2019

WoW has been super well optimized since Draenor, while graphics have been improving with each expansion. :D


But what @Cistamlaka meant to say is that the i9-9900KS won't give you extra performance unless you're playing at 720p low settings, because at lower resolutions and settings the GPU finishes each frame faster, so the bottleneck shifts from the GPU to the CPU.

0
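The bottleneck point being argued here can be sketched with a toy frame-time model: each frame takes as long as the slower of the CPU and GPU, and only the GPU's cost scales with resolution. All the per-frame costs below are invented illustrative numbers, not benchmarks:

```python
# Toy frame-time model: the slower of CPU and GPU gates each frame.
# CPU cost is roughly resolution-independent; GPU cost scales with pixels.
def fps(cpu_ms, gpu_ms_1080p, pixel_ratio):
    gpu_ms = gpu_ms_1080p * pixel_ratio
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 6.0       # hypothetical per-frame CPU cost for some game
gpu_1080p = 8.0    # hypothetical per-frame GPU cost at 1080p

print(round(fps(cpu_ms, gpu_1080p, 4.0)))    # 4K: GPU-bound, ~31 fps
print(round(fps(cpu_ms, gpu_1080p, 1.0)))    # 1080p: GPU-bound, 125 fps
print(round(fps(cpu_ms, gpu_1080p, 4 / 9)))  # 720p: CPU-bound, ~167 fps
```

Only in the CPU-bound 720p case would a faster CPU raise the frame rate; at 4K the GPU term dominates and the CPU barely matters, which is the commenters' point.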
Rep
17
Offline
admin approved badge
17:58 Oct-29-2019

And with multithreading, it uses 8 cores as well. They've done an amazing job.
Oh I'm aware. It's like a balancing act between GPU and CPU load. Intel still hasn't figured out that the future is multithreading and not faster clock speeds.
Let's use WoW as the perfect example: when 8.1.5 hit with multithreading optimizations, people's fps with 6+ cores increased by 15-30%. That says something right there. There is a bigger increase to be gained by multithreading games.

0
Rep
386
Offline
admin approved badge
18:54 Oct-29-2019

Sadly for games, above 16 cores we will get huge diminishing returns; Amdahl's law demands it.
Games have way too much linear, sequential logic in them to be parallelised well.
Granted, a game is more than one program at a time.
A game is usually the game logic, the API, the physics engine and the effects engine, so there is near-100% scalability up to 4 cores/threads. But the API is linear, game logic is at most 50-75% parallel, and physics should go on the GPU but stays linear because the game logic is linear; effects too.

0
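The Amdahl's-law argument above can be made concrete: with parallel fraction p, the speedup on n cores is 1 / ((1 - p) + p / n), and the serial remainder caps the total no matter how many cores you add. The fraction used below is illustrative, not measured:

```python
# Amdahl's law: speedup of a workload with parallel fraction p on n cores.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.75  # e.g. game logic that is "at most 50-75% parallel"
for cores in (4, 8, 16, 32):
    print(cores, round(speedup(p, cores), 2))
# Speedup approaches but never exceeds 1 / (1 - p) = 4x, however many cores.
```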
Rep
386
Offline
admin approved badge
18:58 Oct-29-2019

Even graphics are not 100% parallel; there is always overhead. In practice they are about 99-99.9% parallel, so GPUs with more than 5000-7000 cores (stream processors/CUDA cores) start to see very severe diminishing returns.


Which is exactly why Nvidia and AMD went for higher IPC with Maxwell and RDNA respectively instead of increasing core counts further, and also why the Vega and Fury GPUs all cap at 4096 cores (still below the threshold, but still).

0
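Taking the "99-99.9% parallel" estimate above at face value, the same Amdahl's-law formula shows how per-core efficiency falls away at GPU-scale core counts (the parallel fraction here is the comment's figure, not a measurement):

```python
# Amdahl's law again, now tracking per-core efficiency = speedup / cores.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.999  # the comment's upper estimate for graphics workloads
for cores in (1024, 2048, 4096, 8192):
    s = speedup(p, cores)
    print(f"{cores} cores: {s:.0f}x speedup, {s / cores:.0%} efficiency")
```

Under this assumption, doubling from 4096 to 8192 cores buys only about 11% more throughput, which is the "severe diminishing returns" being described.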
Rep
386
Offline
admin approved badge
18:58 Oct-29-2019

Bottom line we need Stronger Cores.

0
Rep
17
Offline
admin approved badge
15:18 Oct-31-2019

Silicon is reaching its limit. You can only do so much. Moore's Law has slowed to a crawl.

0
Rep
386
Offline
admin approved badge
15:26 Oct-31-2019

Bullcrap, cores have been getting smaller and smaller each year... I'd understand if a quad-core Coffee Lake or Zen 2 was 250-300mm^2 like they used to be back in 2011, but Coffee Lake quad cores are 115mm^2...

0
Rep
386
Offline
admin approved badge
15:27 Oct-31-2019

Moore's law isn't dead, they just stopped giving a crap about general consumers and started making ONLY server CPUs with weak cores, but many of them... and they are selling them to us...

0
Rep
17
Offline
admin approved badge
15:40 Oct-31-2019

I respectfully disagree. Smaller doesn't mean more powerful, or that the gains aside from efficiency are meaningful. It's all about the process node: how much IPC and clock speed you can squeeze out of it, etc. There we are reaching a limit. An element called germanium would be superior for shrinking and increasing transistor count/speeds/IPC all at once.

0
Rep
386
Offline
admin approved badge
15:45 Oct-31-2019

IPC = more transistors = bigger cores. We haven't seen any real IPC improvements from AMD and Intel because their transistor count has stayed the same.


If 1 billion transistors at 14nm take 100mm^2, then at 7nm they take roughly 66mm^2. If they actually improved IPC, then their transistor count would go up and their die size PER CORE would remain the same.
Intel's Sandy Bridge quad core had a die size of 255mm^2, while a Coffee Lake quad core has a die size of 115mm^2... thus transistor count has remained almost the same, or they have barely increased it...

0
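The die-area arithmetic in this exchange can be sanity-checked against ideal scaling, where area shrinks with the square of the feature-size ratio. Real nodes deliver less than this ideal, and the mm^2 figures below are the ones quoted in the comments, not independently verified:

```python
# Ideal area scaling: the same transistors shrink with the square of the
# feature-size ratio when moving between nodes (real nodes achieve less).
def ideal_area(area_mm2, node_from_nm, node_to_nm):
    return area_mm2 * (node_to_nm / node_from_nm) ** 2

# 100 mm^2 of logic at 14nm, moved to 7nm:
print(ideal_area(100, 14, 7))   # 25.0 mm^2 ideally (vs the quoted ~66 mm^2)

# Sandy Bridge quad core (32nm, ~255 mm^2) shrunk ideally to 14nm:
print(round(ideal_area(255, 32, 14), 1))  # 48.8 mm^2 if transistor count were unchanged
```

Against that ideal, an actual ~115 mm^2 Coffee Lake quad core leaves room for either imperfect scaling or extra transistors; the commenters disagree over which dominates.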
Rep
17
Offline
admin approved badge
15:51 Oct-31-2019

Right, and from my understanding, one of the reasons transistor count hasn't increased very much, and we've gone instead to decreasing cache latencies, floating point unit improvements etc, is because Moore's law has indeed slowed down. The goal should be to decrease die size without affecting performance while simultaneously increasing transistor count substantially. That is part of Moore's law. But that requires a good node...which can't be had on silicon any longer.

0
Rep
17
Offline
admin approved badge
15:53 Oct-31-2019

IBM is experimenting with new materials such as Germanium for that very reason my friend.

0
Rep
386
Offline
admin approved badge
15:54 Oct-31-2019

Dude, neither IBM nor ARM are having problems with Moore's law... it's just AMD and Intel. And they are not really having problems with it either, they are just focusing on more, smaller/weaker cores.
Clock speeds on the other hand have hit a limit; we need something other than silicon to achieve much higher clock speeds.

0
Rep
386
Offline
admin approved badge
15:56 Oct-31-2019

They are experimenting with germanium, but that's for the future, not for now, and CPUs haven't really been pure silicon for over a decade, so it's not like it's just silicon.

0
Rep
17
Offline
admin approved badge
18:35 Oct-31-2019

That's because IBM ditched the consumer market and is no longer bound to it. Progress slows when you have bean counters telling you you can't do things. AMD and Intel should be making CPUs with 16 cores, though I think it should stop at 16 for the consumer/prosumer segment. What I wish they would do is stop this core-count madness and just make the cores stronger now (focus on that).

0
Rep
17
Offline
admin approved badge
18:38 Oct-31-2019

The future is ultimately quantum computing. We need to make atomic particles work in the meantime, but subatomic particles are vastly superior. Instead of 0s and 1s, we can hit things in between such as .1, .2 etc.

0
Rep
97
Offline
admin approved badge
20:00 Oct-31-2019

Nahh. The distant future is optical computing.


Anyway, graphite is looking better and better.

0
Rep
386
Offline
admin approved badge
07:16 Nov-01-2019

Bingo, bango, that's exactly why AMD, Intel and Nvidia have slowed their progress to almost a halt. It's not engineers making the decisions, it's the dumbarse executives and accountants. Zen 2 was literally the cheapest possible upgrade from Zen 1, while Zen+ was what Zen 1 should have been from the get-go but wasn't, so that they could have a new product. Funnily enough, just like with Intel until a few years ago, people literally upgraded from Ryzen 1000 to Ryzen 2000 series CPUs, like they did from i5/i7 2000 to 3000 series and then 4000 series.


The consumer IBM market they ditched was full of Intel and AMD CPUs. They were just doing OEM PCs; they never had their own CPUs in there. And they predicted the downfall of PCs, so yeah, good on them.

3
Rep
386
Offline
admin approved badge
07:21 Nov-01-2019

And yes, they should stop at 16 cores, but for the near future 8 is what we should want and 6 is all we need, and they should be getting stronger; there is no good reason for them not to be getting stronger, except corporate greed.


When it comes to quantum computers, it will take a couple of decades before they are in our hands.

1
Rep
13
Offline
08:47 Oct-29-2019

I need to swap out my trusty old 4790K some day, to be released from this old socket and slower RAM. Some day, not today though ;) It still does the job and I haven't been limited by it yet, so...

2
Rep
4
Offline
10:54 Oct-29-2019

Have you played Assassin's creed odyssey?

0
Rep
13
Offline
13:03 Oct-30-2019

No I haven't. Rough on the CPU?

0
Rep
4
Offline
17:17 Oct-31-2019

Oh yeah! I'm at 100% most of the time. Maybe you'll do better, but with the RTX 2070 you'll also be at 100% on the CPU for sure. That's why, since I'm only playing this game for the time being, I feel like I need to upgrade.

0
Rep
13
Offline
20:08 Nov-02-2019

Yeah, no, I don't think so. 2070 Super actually, but yes.

0
Rep
4
Offline
21:55 Nov-02-2019

Trust me. At least my GPU is also at 100%; yours won't even be close to 100%.

0
Rep
97
Offline
admin approved badge
07:48 Oct-29-2019

It comes with all the benefits of being vulnerable. What a steal!

1
Rep
386
Offline
admin approved badge
08:46 Oct-29-2019

The vulnerabilities are meaningless to regular people; companies are the ones who should be worried.

2
Rep
97
Offline
admin approved badge
19:45 Oct-29-2019

But the patches are forced onto us anyway.

0
Rep
386
Offline
admin approved badge
19:49 Oct-29-2019

True.
One of the ISPs in my country got hacked due to Intel's vulnerabilities. :D

0
Rep
97
Offline
admin approved badge
04:46 Oct-30-2019

OOF

0
Rep
386
Offline
admin approved badge
06:24 Oct-30-2019

My friend called them:
My friend: "I'm constantly dropping packets."
The ISP: "I wish I had your problems, we are getting hacked! Goodbye."

2
Rep
1,041
Offline
senior admin badge
11:53 Oct-29-2019

offline = no vulnerability

1
Rep
386
Offline
admin approved badge
13:04 Oct-29-2019

0
Rep
0
Offline
07:44 Oct-29-2019

AMD is taking market share from them and getting close in gaming benchmarks, and their moronic response is to release an even more overpriced CPU. Imagine buying this CPU. How moronic can people be, to be screwed over and over and to come back time and time again for more? How can one buy this CPU when you can buy the competition's CPU plus a cooler, a mobo and 16GB of RAM for the same cost?

2
Rep
386
Offline
admin approved badge
13:05 Oct-29-2019

Well the Ryzen 3000 and RX 5700 series CPUs and GPUs are massively overpriced as well, but people are buying them, no?

2
Rep
0
Offline
08:01 Oct-31-2019

But the 3700X costs as much as the 1700 did when I bought it. So why is it overpriced? The third Ryzen series is cheaper than the first Ryzen series was at release. If Ryzen is massively overpriced, then what is Intel?
Even if it were true, my point still stands that buying an Intel CPU over an AMD CPU with cooler, mobo and 16GB of RAM is massively moronic.

0
Rep
386
Offline
admin approved badge
08:28 Oct-31-2019

Because first of all, the R7 3700X is the equivalent of the R7 1700; don't get fooled by marketing naming schemes.
Also, at 7nm the R7 3700X should cost as much as the R5 3600 currently costs - $200 MSRP (even if it costs more in Europe, MSRP is still $200). 7nm turned out to be a little more expensive per mm^2 than 14nm was in 2017, but the R7 3700X is much smaller than the R7 1700, literally half the die size if we equate the 12nm I/O die to 7nm, and 12nm in 2019 is dirt cheap, so it might even be cheaper to make.


And yes currently there is no reason to buy an intel CPU over an AMD CPU.

0
Rep
16
Offline
22:07 Oct-28-2019

I literally just bought the 9900k lmao, although this KS version would've been 100€ more expensive.

2
Rep
76
Online
admin approved badge
22:06 Oct-28-2019

Personally I would take the Ryzen 7 3700X over this any day. Yes, it will be like 10% slower in games, but it really won't matter that badly, since I would also never buy a 2080 Ti; it's just another highly overpriced product, priced at Titan level.

1
Rep
83
Offline
19:12 Oct-28-2019

I'm happy for the next few years. Tbh I think the games are the bigger issue: so many unoptimized games over the past few years, buggy, glitchy, call it what you want. I'm getting fed up needing patches to make games better.

6
Rep
386
Offline
admin approved badge
19:16 Oct-28-2019

Dude, developers want faster CPUs so they don't have to optimize as much; obviously optimization will get worse. :D
Otherwise, if there were still the same level of optimization as there was in the PS1/N64 days, we wouldn't need anything beyond an Athlon 64 or Core 2 Duo, but game dev time would be a decade or more. With those old engines where you always had to code everything yourself, you couldn't just use a level/world editor, and writing in C and assembler compared to C++ and C#... ages, I tell you, it would take ages longer. :D


Glitches on the other hand are just bad.
Also, if you like RPGS... The outer worlds, go play it, just go.

0
Rep
97
Offline
admin approved badge
21:19 Oct-28-2019

The only engines widely used now are Unity, which is built with C#, and Unreal Engine 4, which is built with C++ but also uses Blueprints to make things easier.

0
Rep
386
Offline
admin approved badge
21:37 Oct-28-2019

Exactly. Well, Ubisoft's AnvilNext is C++ with (less so) C# for the gameplay, and C# for the engine tools (that's what they told me, but Wikipedia says it too, it just doesn't clarify which language is for what).
UE might get C# scripting, but probably no sooner than UE5.
Source is C++ entirely, but that engine is old as hell.
Factorio's engine is C# entirely.
CryEngine is written in C++, but its scripting is C# and Lua. How I wish Crytek hadn't closed the studio in my country. :/
Here:
https://en.wikipedia.org/wiki/List_of_game_engines

4
Rep
97
Offline
admin approved badge
07:48 Oct-29-2019

Damn, that's crazy. Though I don't see Epic Games going to C#, and I especially don't see Unreal Engine 5 coming out anytime soon. Remember, Unreal Engine 4 has been around since 2012 and they continuously update it with new features. It's surprising just how well it still runs.


Cryengine V is still very difficult to use.

0
Rep
386
Offline
admin approved badge
07:57 Oct-29-2019

That's why it doesn't have C# imo, and why it should when 5.0 comes out, but yes, I don't expect it any time soon.

0
Rep
386
Offline
admin approved badge
18:27 Oct-28-2019

Being the fastest doesn't matter when the alternatives are more than fast enough and just behind the fastest... If you take into account the price, how hot it runs, and the platform costing more, it's just not worth it compared to Ryzen CPUs; hell, it's not even worth it compared to i5 8000/9000 and i7 8000 series CPUs...

9
Rep
35
Offline
18:19 Oct-28-2019

About $100 more than the i7-4790K was when it came out. Honestly, if I wasn't broke I would upgrade.

1
Rep
386
Offline
admin approved badge
18:26 Oct-28-2019

Except that an 8-core Coffee Lake on 14nm+++ in 2019 should cost as much as an i7-4790K did in 2014.

4
Rep
17
Offline
admin approved badge
18:04 Oct-29-2019

Especially when Intel is going to throw 8 cores out the window with Comet Lake; they are going to 10 cores. It's a horrible time to upgrade, with more cores and DDR5 on the horizon.

0
Rep
386
Offline
admin approved badge
19:23 Oct-29-2019

I personally will probably get a Ryzen 4000 series CPU next year or in 2021 with 8 cores/16 threads and be set for a while. I aim for 30-60fps so it's not a big deal to me anyway.
What I really need is a new GPU, but currently the GPU market is just as bad if not worse than the CPU market... :/

0
Rep
17
Offline
admin approved badge
15:14 Oct-31-2019

I would wait until 2021, man. You will be kicking yourself, because DDR5 will be the new thing in 2021. I aim for 4K ultra 60 fps; more frames make no difference to me. I play WoW, fighters and RPGs. None of those need crazy frame rates.

0
Rep
35
Offline
06:33 Nov-05-2019

I will for sure wait for DDR5 to come out before upgrading my PC.

0
Rep
16
Offline
18:05 Oct-28-2019

And this CPU is for... whom, exactly? At which point did 9900K users think "Yeah, but I am still missing these extra 2 or 3 FPS"? It sure sounds like a great gimmick, but that is all it is. Only people upgrading from an i7-3770K or older would even consider it.

4
Rep
-6
Offline
17:39 Oct-28-2019

And the CPU costs as much as a whole low-budget PC you could build for that money.

4
