Intel Explains Why CPU Clock Speeds Haven't Grown and High Core Counts are the Priority

Written by Jon Sutton on Sat, Feb 24, 2018 2:00 PM

Ever wondered why CPU clock speeds have stayed much the same over the last decade? Back in the '90s and early '00s, the big push from AMD and Intel was upping their clock speeds, reaching the hallowed 1.0GHz mark before pushing on to 2.0GHz and eventually 3.0GHz by 2003. And then, well, it just stopped being a priority. Both AMD and Intel opted to focus on multiple cores instead, and the promise of 10GHz CPU cores seems as distant now as it did 10 years ago.

Intel’s Victoria Zhislina has explained exactly why this is the case in a new blog post on Intel’s Developer Zone.

First and foremost - temperatures. The higher the clock frequency, the greater the heat output. Clocking processors too high can result in a physical meltdown of a CPU core, and the faster a CPU runs, the more unreliable it can become. We see this with overclockers who break the 7GHz barrier; liquid nitrogen must be used in order to keep CPU temperatures as low as humanly possible.

Heat is the most basic reason, but behind the scenes there are numerous other issues to contend with.

The main limitation is found at the pipeline (or 'conveyor', as Zhislina calls it) level of a CPU's architecture. The pipeline is an integral part of the superscalar structure used by x86 CPUs, and refers to how instructions are broken into stages that must be executed in sequence.

Each part of an instruction is executed by a different computing device, and any given device is only free to tackle a new instruction once the current one has been passed on to the next device. The more cores and threads you have, the more instructions can be handled simultaneously. Meanwhile, the clock speed dictates how often the clock 'ticks' and instructions are passed along.

And why can't this just be sped up by increasing the clock frequency? Zhislina explains that different stages of an instruction's execution can take different amounts of time. If the third stage is the longest, there's no benefit to making the clock tick any shorter than that stage, as the instruction would not be ready to be passed on.

You could speed up the clock tick regardless, but you'd simply end up with a backlog, as the earlier devices would have to wait for the third (longer) stage to finish before anything could be passed on.
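
As a rough illustration of that constraint (a minimal Python sketch with made-up stage latencies, not figures from Zhislina's post), the clock period can be no shorter than the slowest stage, so that stage caps the whole pipeline's throughput:

```python
# Hypothetical stage latencies in nanoseconds (illustrative only).
stage_latencies = [1.0, 1.0, 3.0, 1.0]

# Every stage must complete within one tick, so the tick can be no
# shorter than the slowest stage.
clock_period_ns = max(stage_latencies)      # 3.0 ns
frequency_ghz = 1.0 / clock_period_ns       # ~0.33 GHz

# A full pipeline retires one instruction per tick, so the slow third
# stage dictates throughput no matter how fast the other stages are.
print(f"{frequency_ghz:.2f} GHz, one instruction every {clock_period_ns} ns")
```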

Which leads us to the only logical solution - shortening the longest step, in this case step three. If step three could be shortened, the frequency could be increased and instructions would be executed at a faster rate. Unfortunately, there aren't many ways to do this.

One is to reduce the physical size of components on a CPU, meaning shorter travel distances and faster transistor switching times. This is achieved with smaller fabrication processes, often known as die shrinks. The instruction speed is therefore limited by the smallest fabrication process available to the CPU manufacturer.

The other method is to divide the longest step into smaller steps. If step three in Zhislina's example could be split in two, the clock frequency could be doubled and instructions would be processed at twice the rate. This is an area Intel's architects consistently work on, complicated by the fact that some steps depend on the results of others. It is possible, but achieving the required performance gains is extremely difficult.
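
Continuing the made-up numbers from the sketch above: splitting the hypothetical 3.0 ns stage into two 1.5 ns stages lets the clock tick twice as fast.

```python
# The slow 3.0 ns stage split into two 1.5 ns stages (still hypothetical).
stage_latencies = [1.0, 1.0, 1.5, 1.5, 1.0]

clock_period_ns = max(stage_latencies)      # now 1.5 ns
frequency_ghz = 1.0 / clock_period_ns       # ~0.67 GHz, double the earlier rate

# Throughput doubles once the deeper pipeline is full, at the cost of
# every instruction passing through one extra stage.
print(f"{frequency_ghz:.2f} GHz")
```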

“In conclusion, the struggle for increased frequency is extremely challenging,” sums up Zhislina. “However, it is in progress, even though the frequency is increasing very slowly. But, take heart! Now that there are multicore processors, there is no reason why computers shouldn’t begin to work faster, whether due to higher frequency or because of parallel task execution. And with parallel task execution it provides even greater functionality and flexibility!”

While no doubt simplified somewhat to make it palatable to our simple brains, Zhislina's explanation does go a long way to explaining to us just why high clock speeds aren't the be-all and end-all of processor performance. Provided multithreading is used efficiently by software, increased core and logical core counts are the quickest and easiest route to faster processing, allowing more instructions to be handled simultaneously without worrying about step times.
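
As a toy demonstration of that point (a Python sketch with an invented workload, nothing Intel-specific; it uses processes rather than threads to sidestep Python's GIL), work that splits cleanly finishes sooner with more workers:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    # Stand-in for an independent slice of work.
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    # Split one big job into four independent chunks.
    chunks = [range(i, 10_000_000, 4) for i in range(4)]

    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(crunch, chunks))
    print(f"4 workers: {time.perf_counter() - start:.2f}s")
```

The caveat in the article still applies: if the work doesn't divide like this, the extra cores simply sit idle.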

Source: PC Gamer


Rep 50 · Offline · 23:37 Feb-27-2018

also, increasing the instruction set of any processor will prevent rapid clock speed growth. when you add instruction sets and lengthen a pipeline, you get branch misprediction hits more often. Not that adding instruction sets to x86 hasn't been a good thing. look at GPUs for instance, both AMD's and Nvidia's GPU tech are SIMD (GCN for RTG, CUDA for Nvidia) and use a single instruction set, thus more pipeline stages can be added to increase the ability to clock without penalty
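
To put rough numbers on the misprediction cost the commenter describes (a back-of-the-envelope Python sketch; the branch rates and pipeline depths are assumptions, not figures for any real CPU):

```python
# Assumptions: ~20% of instructions are branches, 95% predicted correctly.
BRANCH_FRACTION = 0.20
MISPREDICT_RATE = 0.05

def effective_cpi(base_cpi, pipeline_depth):
    # A mispredict flushes the pipeline, costing roughly one cycle
    # per stage to refill.
    penalty = pipeline_depth
    return base_cpi + BRANCH_FRACTION * MISPREDICT_RATE * penalty

for depth in (10, 20, 31):   # 31 ~ Prescott-era Pentium 4 depth
    print(f"{depth} stages: ~{effective_cpi(1.0, depth):.2f} cycles/instruction")
```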

0
Rep 50 · Offline · 23:42 Feb-27-2018

adding more cores to x86 gets around the slow clock speed growth forced by the growing instruction set ecosystem while increasing throughput. still, for GPUs it is only a matter of time before 3GHz+ cards exist, and it will be soon. same thing with ARM and RISC-V processors, they will be able to clock a lot higher by virtue of not getting hit with misprediction penalties on a longer pipeline. Presently Nvidia has been doing this with Pascal

0
Rep 386 · Offline · admin approved badge · 08:47 Feb-28-2018

Actually, adding instructions makes each stage longer with more latency (wider), and the solution to this problem is to make the pipeline longer. But as you said, making the pipeline really long results in branch mispredictions more often, with a higher penalty each time, which so far has always led to slower performance.


They should focus on reducing the CPI and transistor count per execution unit instead, which would allow for both higher clock speeds and higher IPC, with further IPC gained by having more pipelines per core.
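
For reference, the trade-off being weighed here falls out of the classic performance equation (a generic textbook relation, not something from this thread): time = instructions × CPI / frequency. Sketched with invented numbers:

```python
def exec_time_s(instructions, cpi, frequency_hz):
    # The "iron law" of processor performance.
    return instructions * cpi / frequency_hz

work = 1e9  # one billion instructions
# A lower-CPI design at 4 GHz beats a higher-CPI design at 5 GHz:
print(exec_time_s(work, cpi=1.0, frequency_hz=4e9))   # 0.25 s
print(exec_time_s(work, cpi=1.5, frequency_hz=5e9))   # 0.30 s
```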

1
Rep 45 · Offline · 08:56 Feb-28-2018

Did you consider "stability" in your synopsis?

0
Rep 386 · Offline · admin approved badge · 09:18 Feb-28-2018

of course, that's why I said they should focus on achieving that, NOT just that they should do it. If they could just do it (more accurately, if they could reduce them further), they would have. And to add more pipelines to a core, they just need a good process node - one small enough, and thus power-efficient enough, to fit even more transistors without overheating when cooled and without crashing, which everything after 32nm has NOT been.


But they have been working on it, I think from Ivy Bridge to Haswell, or was it from Haswell to Broadwell, the CPI for 256 bit AVX was reduced by 3 cycles.

0
Rep 16 · Offline · 16:06 Feb-26-2018

But does adding more cores increase heat output on the CPU chip as well?

0
Rep 386 · Offline · admin approved badge · 17:33 Feb-26-2018

yes, but at a certain point frequency starts consuming exponentially more power, and thus generating more heat, than adding more transistors does.
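
A hedged sketch of why that happens: CMOS dynamic power is roughly P = C·V²·f, and higher clocks usually demand higher voltage too. Assuming V rises roughly in proportion to f (an idealization, not a measured curve), power grows with the cube of frequency:

```python
def dynamic_power(c, v, f):
    # Standard CMOS dynamic power approximation: P = C * V^2 * f
    return c * v**2 * f

base = dynamic_power(1.0, 1.0, 1.0)
# Under the V-proportional-to-f idealization, a 30% overclock
# costs about 1.3**3 ~ 2.2x the power.
oc = dynamic_power(1.0, 1.3, 1.3)
print(f"+30% clock -> {oc / base:.2f}x power")
```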

0
Rep 36 · Offline · 14:46 Feb-25-2018

I don't mind more cores, the problem is we are struggling to get game developers to optimize for multiple cores instead of one.

3
Rep 45 · Offline · 18:08 Feb-25-2018

true

1
Rep 386 · Offline · admin approved badge · 18:14 Feb-25-2018

well, if it was easy, everybody would do it, and it's NOT. even now that asynchronous functions/methods have been added to most programming languages, you often end up with more overhead, and thus less performance, by using more cores/threads, due to the fact that most algorithms are linear.
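
Amdahl's law quantifies that "most algorithms are linear" problem (a standard formula, not something from this thread): if a fraction s of a task is inherently serial, n cores can speed it up by at most 1 / (s + (1 − s)/n).

```python
def amdahl_speedup(serial_fraction, cores):
    # Amdahl's law: the serial part caps the achievable speedup.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# A task that is 40% serial barely benefits beyond a few cores:
for n in (2, 4, 8, 64):
    print(f"{n:>2} cores: {amdahl_speedup(0.4, n):.2f}x")
# 2 -> 1.43x, 4 -> 1.82x, 8 -> 2.11x, 64 -> 2.44x
```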

0
Rep 29 · Offline · 09:48 Feb-25-2018

I feel like Intel is lying, but I don't know enough about CPUs to confirm

5
Rep 2 · Offline · 23:06 Feb-25-2018

aaaahhahaahha (tearz) : ) : ) : )

0
Rep 386 · Offline · admin approved badge · 04:27 Feb-26-2018

Well, they are NOT lying as to why clock speeds haven't grown, they are just NOT mentioning that there are ways to increase them - it's just NOT worth the investment. And everybody saw what happens when they try to brute force clock speeds -> the Pentium 4. it had roughly double the clock speed of the competition and still lost.


Focusing on lower CPI (cycles per instruction), higher IPC (instructions per cycle), shorter and more optimized pipelines, and improving the x86-64 ISA (again, lower CPI) would be ideal for single-core performance, instead of focusing on clock speeds and then more cores.

0
Rep 386 · Offline · admin approved badge · 04:29 Feb-26-2018

Also reducing the transistor count per execution unit, that can also lead to higher clock speeds.

0
Rep 76 · Offline · admin approved badge · 13:02 Feb-26-2018

Can't remember where I saw it, but I think pre-Ryzen release AMD also talked about this. They mentioned that you can have higher clocks, but it comes at the cost of IPC, meaning you either do less, faster, or more, but slower. And if you try both, thermals and power requirements shoot through the roof.

0
Rep 76 · Offline · admin approved badge · 13:05 Feb-26-2018

If it is even possible for the architecture, that is. In simpler terms, imagine it like carrying boxes. You can carry them fairly fast one by one - low IPC, high clock. Or you can take 5 and carry them slowly - high IPC, low clock. So they have to choose where the best middle ground is.

0
Rep 386 · Offline · admin approved badge · 15:49 Feb-26-2018

Or the third option: super high binning, super complicated architectures, a super improved ISA and CPI, plus minimal transistors per execution unit, and you get both higher clock speeds and higher IPC, with lower CPI and TPEU (transistors per execution unit) - but the damn things end up costing tens of thousands. basically you get IBM chips XD


And yes, for consumer products it's best to have a balance of IPC and clock speed, but they should lower CPI and transistor count per EU as much as possible to improve both.

0
Rep 10 · Offline · 01:03 Feb-25-2018

[embedded video]
-1
Rep 16 · Offline · 10:32 Feb-25-2018

Benchmarking games on xeon processors... typical ignorance...

-3
Rep 94 · Offline · 11:08 Feb-25-2018

Well, for the actual benchmarks (not games), it's logical: splitting a workload over cores takes time, thus the dual core wins by a small margin. When he got to OpenGL - afaik OpenGL really benefits from higher CPU clocks over multiple cores. It all comes down to the type of workload, and I think a 6-core at ~3.5GHz would beat a quad core at ~5GHz if the programs were optimized for multicore.
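
Back-of-the-envelope for that last claim (idealized: it assumes perfect scaling across cores, which real programs rarely achieve):

```python
# Aggregate cycles per second under perfect multicore scaling (idealized).
six_core = 6 * 3.5e9    # 21 billion cycles/s
quad_core = 4 * 5.0e9   # 20 billion cycles/s
print(six_core > quad_core)  # True, but only if the work splits six ways
```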

5
Rep 16 · Offline · 11:19 Feb-25-2018

Upvote, you are right, but that was not my argument. Xeon processors do not perform as they theoretically SHOULD in gaming - this is a thing to keep in mind. That is because they are not built to handle those sorts of workloads to begin with. Of course many may perform decently, but a test of core count vs frequency should use desktop processors to get decent results - NOT the Xeon ones.

0
Rep 0 · Offline · 13:24 Feb-25-2018

That's a bold claim that xeons don't perform as well in games as their desktop counterparts. Every benchmark I've ever seen or done clearly confirms that they perform exactly the same as long as architecture, Hz and core count are equal.

1
Rep 0 · Offline · 13:31 Feb-25-2018

Also usually Xeons are just higher binned chips with some additional features. There's nothing that could make them perform worse. I thought that myth was debunked ages ago.

0
Rep 16 · Offline · 14:03 Feb-25-2018
-1
Rep 94 · Offline · 10:29 Feb-26-2018

Well, if you clocked an i5 or i7 at that same core speed, they would probably get equal results.

2
Rep 386 · Offline · admin approved badge · 12:05 Feb-26-2018

There are two types of Xeons: the ones that are basically an i5/i7 with the iGPU disabled, a much higher bin and a few more instructions/features, and the server ones that are based on the same architecture but have many more cores at lower clocks, with much more cache and quite a few more features as well.


The xeon x3450 is a higher binned i7 860 with a few more instructions for workloads, but I forgot which exactly.

0
Rep 45 · Offline · 10:47 Feb-25-2018

@doggamer69 did you understand the difference between the video and the article? If so, please explain.

0
Rep 10 · Offline · 15:26 Feb-25-2018

Summed up: "In conclusion, the struggle for increased frequency is extremely challenging"... I care about gaming, so multi-tasking has no place for me - I close my other tasks to gain more performance in game. My PC has 1 job to do! And it requires clock speed, regardless of heat output

0
Rep 45 · Offline · 23:57 Feb-25-2018

Let me give you a riddle - why would I pair my processor with a 1080 Ti? Heat does matter, because it affects the processor's life span. Scheduling is not easy to implement!!

1
Rep 10 · Offline · 16:00 Feb-25-2018

I'd sum it up as: the struggle of getting games optimized for multi-core is extremely challenging. That video shows that the caption is misleading

0
Rep 4 · Offline · 00:53 Feb-25-2018

Nice informative post

1
Rep 35 · Offline · 23:34 Feb-24-2018

Great post. I may not be able to afford a better Intel CPU anytime soon, but it's not bad to keep up on the info, at least.

2
Rep 40 · Offline · 21:27 Feb-24-2018

I think you made a mistake in the article. Is it Zhislina or Khaslina? Are these two different people?

2
Rep 383 · Offline · senior admin badge · 09:17 Feb-26-2018

Ah yes, thank you, fixed it now. It's Zhislina :)

0
Rep 45 · Offline · 17:47 Feb-24-2018

Please let me know when Intel releases an (8-core, 16-thread) processor that is not an oven, so I can upgrade.

4
Rep 164 · Offline · 19:10 Feb-24-2018

buddy, I heard rumors that the Intel Core i7 9700K will be 8 cores, 16 threads

3
Rep 83 · Offline · 19:37 Feb-24-2018

yip, end of this year is the rumor for that, it's what I'm waiting/saving for

1
Rep 76 · Offline · admin approved badge · 22:22 Feb-24-2018

From what I heard, it is only a matter of time. Intel is already working on the next chipset, which will have support for 8 physical cores - Z370 doesn't have that yet. And I am pretty sure they will be working on it. But I have a feeling Intel thinks Coffee Lake is enough of an answer to Ryzen, so maybe when Ryzen 2 comes.

0
Rep 76 · Offline · admin approved badge · 22:23 Feb-24-2018

I mean, Ryzen did force them to rush things; Kaby Lake and Coffee Lake came out earlier than expected. KL probably to clear development time for CL. And the CL launch was a really fast and unprepared one. I feel they will take their time to do the next one right, since they aren't in a hurry anymore.

1
Rep 4 · Offline · admin approved badge · 03:16 Feb-25-2018

let me know when it's in a 35W laptop package

0
Rep 0 · Offline · 10:11 Feb-25-2018

Intel's 8-core, 16-thread CPUs are ovens? Pretty much all 8-core Intel CPUs run quite cool. You can easily use and even slightly overclock them with something like a Hyper 212 EVO.

0
Rep 23 · Offline · 17:17 Feb-24-2018

What about multiple processor chips at once? Like we can put in multiple graphics cards at once... could this be done at all? Just a question, I don't know myself :D

0
Rep 164 · Offline · 17:34 Feb-24-2018

Dell uses multiple CPUs in some of their systems, like the Dell Precision T7920.
You can see it on their site

1
Rep 94 · Offline · 10:08 Feb-25-2018

For now, only server boards support up to quad CPUs, and it doesn't increase your overall performance; in fact, it even slows down your boot because of the CPU initialization. Multi-CPU is probably only good for server-related or heavy CPU workloads (and I believe the programs must support it as well to benefit from it).

1
Rep 6 · Offline · 11:34 Feb-25-2018

It could be done, but it would be an enormous processor.
If the cores get very small it will be possible, like the Q6600 (a multi-chip module).

0
Rep 1,041 · Offline · senior admin badge · 16:49 Feb-24-2018

the struggle is the operating system, which slows down passing data from software to hardware

2
Rep 386 · Offline · admin approved badge · 17:04 Feb-24-2018

The biggest struggle is the backward-compatible ISA, if you ask me.
x86-64 needs to be remade into an efficient RISC ISA for modern tasks. Sadly that would kill them, as no software would support it right out of the gate.

2
Rep 83 · Offline · 16:21 Feb-24-2018

I do prefer seeing a base clock of 4GHz or higher (not the turbo). I know the i7 8700K is better for a lot of games than my CPU, but its 3.7GHz base clock puts me off a bit for some reason. Still saving for an Intel CPU at the end of the year, which might be the 8-core

0
Rep 386 · Offline · admin approved badge · 17:02 Feb-24-2018

That means that their marketing works on you XD


Keep in mind that the i7 8700K runs at 4.3-4.5GHz on ALL cores at stock clock speeds, as long as you do NOT use 512-bit FP AVX instructions (barely any consumer or vendor software uses them, and games do NOT even use 256-bit AVX), while yours runs at 4.0-4.1GHz on all cores when all cores are being used at stock clock speeds.

0
Rep 164 · Offline · 17:13 Feb-24-2018

what is 512 bit FP AVX ?

0
Rep 386 · Offline · admin approved badge · 14:08 Feb-25-2018

it's an instruction set extension, with AVX standing for Advanced Vector Extensions (should be called AVE really). it's a Single Instruction Multiple Data (SIMD) extension used to improve data parallelism and, at least according to Intel, thread parallelism (no idea how it improves thread parallelism, nobody really seems to, it might just be marketing). it does improve performance significantly, but the problem is that compilers don't generate AVX code by default, be it 256 or 512 bit, so programmers have to code for it manually. so very little software uses it, and games just do NOT use AVX.
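
A loose illustration of the data-parallelism idea (Python/NumPy, which dispatches to SIMD instructions such as AVX internally where the CPU supports them; this is not AVX assembly itself):

```python
import numpy as np

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

# One vectorized operation processes several floats per instruction
# under the hood, rather than one pair at a time.
c = a + b

# The scalar equivalent handles each element individually and is
# orders of magnitude slower in Python:
# c = [x + y for x, y in zip(a, b)]
```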

1
Rep 386 · Offline · admin approved badge · 14:12 Feb-25-2018

Software that uses it would be Adobe software (they can make use of AVX, but can't write their software to scale even somewhat with more cores/threads, go figure), Microsoft Office, archive programs like WinRAR and 7-Zip, various movie and picture codecs, Cinebench, and actually Mozilla - that's why it runs so much faster on newer CPUs (Sandy Bridge and newer) - plus a few numeric libraries that require vectors for their calculations. any software that does vector calculations can and will benefit from AVX, but until we get compilers that generate AVX code natively it's almost useless for the majority of people.

1
Rep 386 · Offline · admin approved badge · 14:16 Feb-25-2018

Now, since Intel invented and patented/licensed (or whatever) AVX, AMD can NOT design straight AVX hardware, so they always have to do a workaround until they can. with Bulldozer they made 2x 128-bit FPUs fuse into 1x 256-bit AVX, with no support for 512 bit, which made it extremely slow. that's why in rendering software that did NOT use AVX, like 3ds Max, the FX CPUs were close to their Intel counterparts at the time, while in software that did use AVX they were miles behind, especially after Haswell, when Intel reduced the CPI (cycles per instruction) of AVX.

1
Rep 386 · Offline · admin approved badge · 14:18 Feb-25-2018

Ryzen achieves 256-bit AVX the same way as Bulldozer, by having 4x 128-bit FP units per core and fusing their operations, but this time the penalty is minimal. Intel still holds everything 512-bit AVX, so AMD can NOT even put 512-bit AVX into their CPUs, even though they should technically be able to. so Ryzen does NOT support 512-bit AVX and has to do multiple 256-bit AVX operations for the same work as a single 512-bit one, which is quite a bit slower on its own, let alone multiple.

1
Rep 0 · Offline · 10:15 Feb-25-2018

Why is the base clock relevant to you? With adequate cooling, pretty much all modern CPUs will run close to their max turbo speeds on all cores with most workloads. And you can easily overclock most 8700Ks to a stable 5GHz or more

0
Rep 16 · Offline · 16:03 Feb-24-2018

This is a really good article! I've really enjoyed reading this!

5
Rep 164 · Offline · 14:59 Feb-24-2018

Intel can even make 100GHz processors if people want
but they make what they want

-27
Rep 11 · Offline · 15:22 Feb-24-2018

Uh did you read the article?

1
Rep 164 · Offline · 15:46 Feb-24-2018

yes, i read it, nothing was surprising to me.


you think people believe everything Intel says


the biggest factor is that most pc games run better on higher clocks than on core count


i mean, even the core i7 6700k at 4.2ghz beats the core i7 6950x at 3.0ghz in most games


i don't know much about how processors work, but i know a higher core clock also gives better performance

-4
Rep 16 · Offline · 16:06 Feb-24-2018

Bro, you've said it yourself - you do not know much about processors... how can you claim higher clock speeds always give better performance?

2
Rep 6 · Offline · 16:08 Feb-24-2018

Not true, go look up the Pentium 4 HT or D at 3.8GHz vs the Core 2 Duo and Athlon 64 at just 2GHz, or more recently the FX-9590 at 5GHz vs the i7-4770K at 3.9GHz. Despite their age compared to our more "modern" CPUs, the fact that BETTER optimized cores will ALWAYS trump higher speeds has held for years. Also, games aren't usually core-count sensitive, as most never use more than 2-4 "optimally" anyway.

3
Rep 16 · Offline · 16:09 Feb-24-2018

Your claim falls apart even as you transition from the Pentium series to the Core 2 Duo, let alone the latest processors. Even the hyper-threaded Pentium 4 clocked at 4GHz was trampled by the measly 1.86GHz clock of the Core 2 Duo E6300.
Don't you think that would have been revelation enough from that point onwards?

4
Rep 5 · Offline · 16:11 Feb-24-2018

Maybe read the science-y part of the article before embarrassing yourself?

2
Rep 164 · Offline · 16:12 Feb-24-2018

i was also going to give the example of the core i7 3770k and the fx 9590


but we should keep in mind that most games are made on Intel processors, so they work better on Intel

-4
Rep 164 · Offline · 16:15 Feb-24-2018

i am not embarrassing myself, i am trying to learn from people like you...

-2
Rep 34 · Offline · 01:34 Feb-25-2018

Going in acting like you know way more than you actually do is an interesting way of showing it.

1
Rep 16 · Offline · 16:16 Feb-24-2018

The games' optimization does not by ANY POSSIBLE MEANS have any implication on which chip they were made. It's the engine the developers use to make those games, the way the games are coded themselves, and often the tweaks made after testing across different families of processors (aka patches) that count towards their optimization

1
Rep 164 · Offline · 16:21 Feb-24-2018

no, a lot of games run better on Intel and a lot on AMD
it is exactly like some games running better on Nvidia and some on AMD

-1
Rep 16 · Offline · 16:27 Feb-24-2018

To be blunt, from an absolute enthusiast's point of view, intel chips are often more powerful than amd when it comes to gaming. But you cannot just simply base your assumptions on that. AMD processors often similarly beat intel when it comes to many multi-core demanding productivity applications. It's the type of workload that often affects the cpu output.

1
Rep 16 · Offline · 16:28 Feb-24-2018

According to your logic then, most of the productivity applications should have been made on amd processors and that is not just wrong - it even sounds wrong.

1
Rep 164 · Offline · 16:34 Feb-24-2018

i am not saying that applications should be made on Intel or AMD
but i think there should not be any bias where some software is made for Intel and some for AMD

0
Rep 16 · Offline · 16:57 Feb-24-2018

And I am saying that the performance 'bias' is not because of which platform the games themselves are made on. It's the processor's performance - how well it handles the particular application. The 'bias' is only created when the application itself is tested on only one platform (i.e. either only Intel or only AMD), and that the developers would never overlook, because they do not want their market narrowed - they want users on both platforms to buy their product.

0
Rep 16 · Offline · 16:02 Feb-24-2018

How are you this much upvoted if you say things like these...?

2
Rep 164 · Offline · 16:09 Feb-24-2018

i give a thumbs up to every person whether i like them or not.
your reputation does not mean you know everything.


a lot of people have negative reputation and they know best

0
Rep 164 · Offline · 16:18 Feb-24-2018

i did not say higher clocks always give better performance, buddy
some games run better on more cores, but some on higher clocks

0
Rep 16 · Offline · 16:23 Feb-24-2018

Yes, but there's a limit to the demanded clock speed. The processors being made can only be improved if their working efficiency is increased instead of their clock speeds. Take an old choppy game, for example, right from the 2000s. If you run it on - let's say - a middle-class chip, then on some beastly 8700K, will there be a difference? No. This is just a simple example of why clock speeds are simply not being increased as of late.

0
Rep 16 · Offline · 16:24 Feb-24-2018

Modern AAA titles need more CPU cores rather than higher clock speeds

0
Rep 164 · Offline · 16:29 Feb-24-2018

yes, but the whole world doesn't play games
there is a lot of software that runs better on higher clock speeds


we cannot just use games for comparing processors

0
Rep 16 · Offline · 16:59 Feb-24-2018

That I dealt with in the upper comment section.

0
Rep 164 · Offline · 17:08 Feb-24-2018

thanks buddy, learned something from you
i am not afraid of my reputation.
only fools or weak people cry over thumbs-downs

4
Rep 16 · Offline · 17:11 Feb-24-2018

And I admire that you tend to accept and learn rather than being troublesome like many people out there. Rep never matters as long as you're learning something one way or another :)

1
Rep 386 · Offline · admin approved badge · 16:58 Feb-24-2018

Of course they can make a 100GHz chip if they cut most of the execution units and lower the transistor count by probably around 90% or more. at that point the chip will have little to no functionality and will still heat like an oven, and it won't be x86 anymore... XD


Instead of higher clock speeds, they need to focus on optimizing x86 so that it takes fewer cycles per instruction (lower CPI) and making their chips execute more instructions per cycle (higher IPC).


And I love how they call it superscalar, but what they have right now is NOT true superscalar, it's stacked scalar.

5
Rep 164 · Offline · 17:05 Feb-24-2018

i agree with you, but everything depends on technology and time.

1
Rep 34 · Offline · 01:55 Feb-25-2018

Google the Northwood and Prescott Pentium 4s for what happens when you go for clock speed above all else. The thermals of those things were so bad that they killed the Pentium brand, and clock speeds won't change much short of finding something else to build CPUs from.


Clock speed isn't the only thing determining the power of a CPU, and the laws of thermodynamics are not an Intel conspiracy.

0
Rep 94 · Offline · 10:12 Feb-25-2018

"i mean even core i7 6700k 4.2ghz beating core i7 6950x 3.0ghz in most of games"


Fyi, games mostly run on a little amount of cores, because most systems over the world only have 2 or 4 cores. Besides that, OpenGL benefits from a higher cpu speed over multiple cores. It's the type of workload you put on them. Besides that, those extreme lineup isn't really meant for gaming, but it's very good for video editing and multi core workload.

1
Rep 386 · Offline · admin approved badge · 14:20 Feb-25-2018

And because programming for multiple cores and threads is still hard as hell, even with the newest additions of asynchronous functions/methods to programming languages.

0
Rep 94 · Offline · 14:56 Feb-24-2018

As an electrical engineering student, it sounds true and logical. Besides that, increasing the frequency of a digital signal will eventually make it look like a sine wave (if the rise times can't get any shorter), and thus there's a higher chance of failures/corrupt data. Multi-core is the solution; devs just have to make use of it
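
A rough sketch of the rise-time limit being described (a common rule of thumb with an assumed rise time, not a measurement): usable bandwidth is about 0.35 / t_rise, and a clock needs several harmonics above its fundamental to keep square edges.

```python
# Rule of thumb: bandwidth ~ 0.35 / rise_time (single-pole approximation).
rise_time_s = 20e-12                 # assume a 20 ps edge (illustrative)
bandwidth_hz = 0.35 / rise_time_s    # ~17.5 GHz

# Keeping roughly the 5th harmonic of the clock implies a ceiling of
# about bandwidth / 5 before edges visibly round off toward a sine.
max_clock_hz = bandwidth_hz / 5
print(f"~{max_clock_hz / 1e9:.1f} GHz before the square wave degrades")
```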

6
Rep 58 · Offline · admin approved badge · 14:53 Feb-24-2018

They should be able to hit 5Ghz...

2
Rep 386 · Offline · admin approved badge · 16:52 Feb-24-2018

Absolutely, if they make the pipeline longer and remove a quarter of the execution units... but this would result in worse performance than a current CPU clocked at 4.0GHz, because longer pipelines stall and flush more often, and when a stall happens, the longer the pipeline, the more cycles are needed to refill it. And then there is power consumption: heat scales with transistor count and clock speed, and past a certain point raising clock speeds requires a lot more power than adding extra transistors.

2
Rep 386 · Offline · admin approved badge · 16:52 Feb-24-2018

Now, of course, with a PERFECT and IDEAL die shrink it should be possible no problem, but we do NOT live in a perfect world, and 14nm, the failed 20nm, 22nm and 28nm proved it.

2
Rep 28 · Offline · 23:35 Feb-24-2018

The AMD FX 9590 can reach 5ghz and it is an absolute beast.

0
Rep 386 · Offline · admin approved badge · 14:21 Feb-25-2018

Maybe as hungry as a beast, but NOT a beast XD
And most FX 8320s can reach 4.9-5.1GHz no problem as well. XD

0
Rep 213 · Offline · admin badge · 16:36 Feb-27-2018

ughmm wizzgamer, I thought the same thing when I was shopping for used components, until I did some comparing. I don't know how they bench like this, but check this out, Ryzen 3 1200 vs FX 9590: http://cpu.userbenchmark.com/Compare/AMD-Ryzen-3-1200-vs-AMD-FX-9590/3931vs1812

0
Rep 24 · Offline · 14:08 Feb-24-2018

Really, Intel? I have 4GHz and 45C at full load (stock Wraith cooler)

-2
Rep 14 · Offline · 16:30 Feb-24-2018

that can't be right

2
Rep 386 · Offline · admin approved badge · 14:23 Feb-25-2018

Yes, it can. FX CPUs run really cool, due to the low transistor density and big surface area, plus they are soldered to the heat spreader. Cooling them is easy.


Intel CPUs run hot NOT because they consume a lot of power (they do NOT), but because you can NOT cool them effectively due to poor/low heat dissipation.

0
