All well and good, but so what?
All this techy-techy yumminess is good geeky fun, but what does it all add up to? How much improvement are we to expect out of the Tri-Gate process upgrade?
Quite a bit, if Intel – and Bohr – are to be believed.
The 22nm Tri-Gate transistors will, according to Bohr, provide much-improved performance at low voltages. The examples he provided were based on gate delay as one variable, which in a transistor increases as voltage decreases. "When you operate [any transistor] at a lower voltage, it tends to slow down. Think of gate delay as the inverse of frequency: higher gate delay means slower frequency," he said.
Much faster at far less power – what's not to like?
"In this example," he explained while displaying the slide above, "at 0.7 volts they're about 37 per cent faster than today's 32nm planar transistors." And to cut naysayers off at the pass, he added: "And I want to emphasize that this 32-nanometer curve – that's not just some dummy straw man, those are the fastest planar transistors in the industry today," adding with a trace of pride: "That's Intel's technology."
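Bohr's delay-to-frequency framing is simple enough to sanity-check in a few lines. Here's a minimal sketch with normalized, illustrative numbers; only the 37 per cent figure comes from his slide.

```python
# Gate delay and frequency are inverses: f = 1 / t_delay, so a
# frequency gain shows up as a proportional cut in gate delay.
# Normalized, illustrative numbers; the 37% figure is from Bohr's slide.

def frequency(gate_delay):
    """Frequency as the inverse of gate delay."""
    return 1.0 / gate_delay

t_planar = 1.0                  # 32nm planar gate delay (normalized)
f_planar = frequency(t_planar)  # normalized planar frequency: 1.0
f_trigate = f_planar * 1.37     # 37 per cent faster at 0.7 V
t_trigate = 1.0 / f_trigate     # implied Tri-Gate gate delay

print(round(t_trigate / t_planar, 2))  # 0.73: delay cut by about 27%
```

In other words, the claimed 37 per cent frequency gain at 0.7 volts is the same thing as cutting gate delay by a bit more than a quarter.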
What happens, though, when you operate a 32nm transistor and a 22nm Tri-Gate transistor at the same gate delay? You'd assume lower power consumption for the Tri-Gate, right? But how much lower? A glance at Bohr's Gate Delay slide shows that when the 32nm planar and 22nm Tri-Gate transistors are both operating at a gate delay normalized to 1.0, the power savings are considerable.
"If you operate them at the same gate delay, the same frequency," he said, referring to the two transistor types, "to get the same performance from [Tri-Gate] as planar, you can do so at about two-tenths of a volt lower voltage – in other words, at 0.8 volts instead of 1 volt. That voltage reduction, combined with the capacitance reduction that comes from a smaller transistor, provides more than a 50 per cent active-power reduction."
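Bohr's arithmetic can be reproduced with the standard dynamic-power model, P = C × V² × f. In this sketch the 1.0-volt and 0.8-volt figures are his; the 25 per cent capacitance reduction is an assumed, illustrative value chosen to show how the combination clears the 50 per cent mark.

```python
# Dynamic (active) switching power: P = C * V^2 * f.
# Voltage figures are from Bohr's talk; the 25% capacitance
# reduction is an assumption chosen for illustration.

def active_power(c, v, f):
    """Dynamic switching power, P = C * V^2 * f."""
    return c * v**2 * f

f = 1.0                               # same clock frequency for both
p_planar = active_power(1.0, 1.0, f)  # 32nm planar at 1.0 V

# Voltage drop alone: (0.8)^2 = 0.64, a 36 per cent saving
p_lower_v = active_power(1.0, 0.8, f)

# Add an assumed 25% capacitance cut from the smaller transistor
p_trigate = active_power(0.75, 0.8, f)

print(round(1 - p_lower_v / p_planar, 2))  # 0.36: voltage alone
print(round(1 - p_trigate / p_planar, 2))  # 0.52: past the 50% mark
```

Because power scales with the square of voltage, the 1.0-to-0.8-volt drop alone buys a 36 per cent saving; any meaningful capacitance reduction on top of that pushes the total past 50 per cent, which is consistent with Bohr's claim.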
Then, in Wednesday's understatement, Bohr concluded: "And that's very important."
To sum up: at the same voltage, 37 per cent faster performance. At the same clock frequency – inferred from gate delay – a 50 per cent reduction in power.
Not too shabby, if true. Remember, though, that these aren't benchmark figures derived by independent testing, they're numbers taken from slides presented by an Intel senior fellow at the rollout of his baby.
Your mileage may vary.
Still, Tri-Gate may allow Intel to both maintain IA's lead in the server space while lowering cooling and power costs, and find it a new home in the low-power consumer space.
If, however, Tri-Gate doesn't perform as promised, Intel may have blown its last chance of entering the lucrative, growing, and ultimately consumer-consuming low-power mobile market.
Or, for that matter, Intel could license the ARM architecture and start building its own ARM variants in its own fabs, using its 22nm Tri-Gate process. That's unlikely, but stranger things have happened – such as Intel, once the seemingly unchallengeable PC overlord, being threatened in the consumer-PC market by Windows-running, 40-bit addressing, multicore ARM Cortex-A15 chips. ®
Intel's Tri-Gate gamble: It's now or never
Re: Interesting thought...
No it wouldn't. Merging with someone like AMD would kill ARM.
AMD licensing ARM designs would be more interesting, and any of ARM's current licensees moving to the same 22nm Tri-Gate process would effectively kill Intel's chance of ever gaining any momentum in the high-performance, low-power market.
It's about time ARM made significant inroads into the desktop market. Intel have been making our (programmers') lives hell for long enough.
"Or, for that matter, Intel could license the ARM architecture and start building its own ARM variants in its own fabs, using its 22nm Tri-Gate process"
I'd have thought it more like a dead cert than unlikely.
If Intel has the best process technology for low-power devices, ARM without question has a better CPU architecture for low-power devices like smartphones. Put them together and what do you get? The best possible smartphone CPU, one that can either double battery runtime or allow a large cut in the weight of the phone without any loss of runtime.
If Intel suffers from "not invented here" syndrome, smartphone manufacturers will have to choose between the x86 architecture running on the best silicon and ARM running on less good silicon. It won't be so long before TSMC or some other chip foundry catches up with Intel enough to put ARM back at the front of the pack. Best for Intel if it's Intel that makes the best mobile device chips.
Whatever Intel can do with FinFET, ARM can also do
@"The move to a 22nm Tri-Gate process architecture is an important step for Intel's entire microprocessor line"
It is important because once the ARM A15 design is here, Intel will start to lose the future server market on processing power per watt. (The ARM A15 will allow the design of servers with more processing power for less electrical power than an Intel CPU based design. That's a win-win for ARM and checkmate for Intel's bloated x86 design).
Intel's market lead and dominance up until now has largely depended on Intel's ability to define what each new generation of x86 design should be, so they were always first to market with each new x86 generation. That meant each time the x86 design changed, AMD had to spend time playing catch up to add each new addition to the ever more bloated and increasingly complex x86 design. This constantly changing x86 design gave Intel the marketing lead over AMD, because Intel were always going to be first to market with each new generation.
Unfortunately for Intel, ARM are not playing that same game. ARM processor designs are more power efficient than x86 designs. So whatever chip making process Intel uses, they still can't win, as an ARM design based on that same chip making process will win over the x86 design. Ironically for Intel, their bloated x86 design that was so useful for holding back AMD, is now holding them back from competing with ARM.
Which leaves software (not hardware) as Intel's remaining x86 strength, and even that is just in its legacy support. It's a pain to recompile programs, and some old programs won't be recompiled (the companies who created the code may not even be around any more). But for all new code, it's really not an issue. Plus ARM has a lot of software support. For example, Linux has been on ARM for years. Android and iPhone already support ARM. Even Microsoft are looking to support ARM. Also, ARM software development has had 28 years to evolve to a very high level of industry support with good free tools. So ARM is strong in software support as well as beating the x86 design in processing power per watt.
So Intel are in trouble. AMD isn't Intel's biggest competitor; it's the more than 200 companies that license the ARM designs that are Intel's biggest threat, because together those companies could seriously harm Intel's market dominance. So Intel really are in trouble.
The article made even someone as thick as me think that they understand the concepts; thanks.
"The answer- to blow sugar up your ass."
I'm not sure that's all it's cracked up to be.