Intel demos 'Nehalem' chips clocked to 3.2GHz
IDF Intel's next-generation processor, 'Nehalem', will be made available running at 3.2GHz - if demos of the chip at Intel Developer Forum this week are anything to go by.
Officially, the chip giant won't comment on the clock frequencies at which it will release the initial desktop and server Nehalems - codenamed 'Bloomfield' and 'Gainestown', respectively - when they ship, an event scheduled for Q4 this year.
However, demo machines equipped with the new CPUs at IDF in Shanghai reveal the chip running at 3.2GHz.
Today, that's the highest speed at which Intel offers a desktop processor, in the form of the 45nm Core 2 Extremes QX9770 and QX9775. The top-of-the-range Core 2 Quad, the Q9550, runs at 2.83GHz, while the dual-core Core 2 Duo E8500 is clocked to 3.16GHz.
Nehalem builds on the Core architecture with a native quad-core design; extra, shared L3 cache; and HyperThreading technology to allow each of the four cores to appear as two virtual cores to the host operating system.
While HT doesn't double the performance of a processor, it nonetheless should ensure that Nehalem outperforms a four-core Core 2 at the same clock speed.
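The 'two virtual cores per physical core' behaviour described above is visible straight from userland: the operating system simply reports the logical processor count. A minimal Python sketch (the number printed is machine-dependent):

```python
import os

# os.cpu_count() reports LOGICAL processors. On a four-core
# Nehalem with HyperThreading enabled, the OS would see eight;
# on an SMT-less quad-core it would see four.
logical = os.cpu_count()
print(f"logical processors visible to the OS: {logical}")
```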
What we can't say is how much heat the CPU generates at 3.2GHz. Again, Intel hasn't made public its thermal design parameters for Nehalem, but on past form, it's likely to want to match the TDP of its current quad-core chips and quite possibly undercut them in a bid to demonstrate a higher performance-per-Watt rating for the new part.
In about 15 years of programming and sysadmin-ing, I have never heard of 'superthreading'.
Traditional threaded software allows each thread to do whatever the programmer codes the thread to do: it assumes that any given thread runs on a general-purpose processor and that any given pair of threads has no relationship qua the instructions they send to their CPU, modulo synchronisation of data access. Which means if several threads all want to do floating-point ops, that is fine; if they all want to do integer ops, that is fine too; and if they want to do some mixture, that is fine also.
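The point about ordinary threads being free to run any mix of instructions can be sketched in Python (thread names and workloads here are purely illustrative):

```python
import threading

results = {}

def float_work():
    # one thread happily doing floating-point ops...
    results["fp"] = sum(x * 0.5 for x in range(10))

def int_work():
    # ...while a sibling thread does integer ops; neither thread
    # needs to know or care what instructions the other is issuing
    results["int"] = sum(range(10))

t1 = threading.Thread(target=float_work)
t2 = threading.Thread(target=int_work)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)
```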
HT was (and may be again?) a way to exploit the fact that a P4+ CPU has more execution units than can, strictly speaking, be used simultaneously by single-threaded code. Thus, if a programmer or compiler could conspire to issue instructions that could execute (1) on the FP unit and (2) on the integer or load/store unit, then there was no reason not to send both to the processor.
But HT failed if the two above instructions had a dependency between them. HT actually had a long list of what the two instructions could not admit in terms of their relation to each other. So HT was not threading. HT was crafted code, in much the same manner as programming CUDA or Cell is: if your algorithm fit the model, you were in business... but the model was quite restrictive. One did not just pass the compiler a switch telling it to produce HT code and magically get two threads of execution; one really had to structure one's code to meet HT's requirements regardless of the compiler option.
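The dependency restriction described above can be illustrated in miniature (a Python sketch of the idea, not of x86 semantics): a chain where each step consumes the previous result must run strictly in order, while independent operations could, in principle, be issued to separate execution units:

```python
def dependent_chain(x):
    # each line needs the previous line's result, so these
    # steps cannot be overlapped with one another
    a = x * 2
    b = a + 3
    return b * b

def independent_ops(x, y):
    # no dependency between the two lines: they could go to,
    # say, the integer unit and another unit in the same cycle
    a = x * 2
    b = y + 3
    return a + b

print(dependent_chain(1), independent_ops(1, 2))
```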
Few people did it: the people who just passed the compiler the option got nothing in particular for their effort; the people who spent the time to understand the HT model and their code got results directly related to whether their algorithms could be HT-ized. It was not a general fix; it was not even anything like an easy fix.
As for statements like "Superthreading requires all execution instructions be from the same thread" - well, my friend, that does not even make sense. Who is signing that cheque you get each quarter?
I have been a fan of the 486DX2, the cx686-p166+, the AMD 586 & 686, the Pentium II, the Pentium III, the Athlon, the XEON, the Athlon 64, the Opteron, the XEON in that order.
Lately I am enamoured of those 45nm 1333MHz FSB XEONs.
I guess that makes me a Cyrix fanboy.
Marketing - HT in this case - is marketing, and should be identified as such, whether it is Intel or AMD foisting it on the public.
Phenom B3 can't even beat the old Q6600; expect more from Nehalem.
We all know what current Intel processors can do and how they kick the butts of current AMD processors, including the new Phenoms. Despite the TLB bug fix in the B3 revision, the supposedly better HyperTransport 3.0, AMD's integrated memory controller advantage, etc., the Phenom 9850 still cannot beat the good old Q6600 Kentsfield in overall performance. And to remind you guys, the Q6600 is not even a Nehalem.
See "Phenom, part deux: Ars reviews AMD's B3 silicon revision".
Now that Nehalem is coming, expect greater performance, since Nehalems will come with a triple-channel integrated memory controller and a HyperTransport-like bus, the QuickPath Interconnect, etc.
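To put a rough number on that triple-channel memory controller, here is some hedged back-of-the-envelope arithmetic in Python (the DDR3-1066 speed grade is an assumption for illustration, not a confirmed Nehalem spec):

```python
# Peak theoretical bandwidth of an assumed triple-channel
# DDR3-1066 setup: transfers/s x bytes per 64-bit transfer
# x number of channels.
transfers_per_second = 1_066_000_000  # DDR3-1066 (assumed grade)
bytes_per_transfer = 8                # one 64-bit channel
channels = 3
peak_bytes = transfers_per_second * bytes_per_transfer * channels
print(f"~{peak_bytes / 1e9:.1f} GB/s peak")  # roughly 25.6 GB/s
```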
More info on Nehalem in this Hardware Secrets article:
I think you are a little confused; what you are talking about is superthreading, not hyperthreading. Superthreading requires all execution instructions be from the same thread, whereas hyperthreading extends superthreading and allows the processor to execute instructions from two different threads.
You can read up about it at http://arstechnica.com/articles/paedia/cpu/hyperthreading.ars/3 if you like.
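The distinction drawn above can be sketched with a toy issue-slot model in Python (purely illustrative - real hardware is far more complicated): 'superthreading' fills all of a cycle's issue slots from a single thread, while SMT/hyperthreading may mix threads within one cycle:

```python
from itertools import cycle

def superthread_issue(threads, slots=2):
    """Each cycle's slots come from ONE thread, round-robin."""
    cycles_out, order = [], cycle(list(threads))
    while any(threads.values()):
        t = next(order)
        issued = [threads[t].pop(0) for _ in range(slots) if threads[t]]
        if issued:
            cycles_out.append(issued)
    return cycles_out

def smt_issue(threads, slots=2):
    """Slots in one cycle may be filled from DIFFERENT threads."""
    cycles_out = []
    while any(threads.values()):
        issued = []
        for t in threads:
            if threads[t] and len(issued) < slots:
                issued.append(threads[t].pop(0))
        cycles_out.append(issued)
    return cycles_out

demo = {"A": ["a1", "a2", "a3", "a4"], "B": ["b1", "b2"]}
print(superthread_issue({k: list(v) for k, v in demo.items()}))
print(smt_issue({k: list(v) for k, v in demo.items()}))
```

Note how the second scheduler produces cycles containing instructions from both threads, which is the extra freedom hyperthreading buys.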
Oregon - OR uh gun
Willamette - Wil LAM met
Tualatin - Too ALL uh ton
Nehalem - Neh HAY lem
There you go, everyone.
And all fanboyisms aside, hyperthreading (like SMT, cache hits, MMX, etc.) will live and die based on the compiler. I don't know how you'd effectively write C++ (or Java, C, C#) to manage the instruction order of your packaged software. So, Intel: help out Microsoft, Apple and the FOSS community with those compiler options, why don't you? I bet if you smartened up those compilers so they exploited all the tricks you packed into your CPUs, and we re-compiled the base OSes, drivers and MS Office, you'd speed up both my dual-Xeon Alienware w/Vista (1st edition HT!) and my dual-core MacBook Pro considerably!