Deep, deep dive inside Intel's next-generation processor
Join us on a whirlwind Haswell holiday – non-geeks heartily welcomed
Although Haswell's compute cores are based on those used in the now-familiar 32nm Sandy Bridge and its follow-on 22nm Ivy Bridge, Intel has added a number of changes that should improve performance.
"The first thing to look at on the performance side," said Intel engineer Singhal, "is 'What are we doing for the software that exists today?'" One important factor in this effort, he said, was to keep all the compute-core pipelines essentially the same as they have been through Sandy Bridge and Ivy Bridge.
Intel has, however, made what he characterized as "significant" changes within the cores, including deepening the cores' buffers, which gives them more flexibility in Haswell's out-of-order execution – the chip's ability to optimize the flow of instructions and data on the fly, and so extract more parallelism from execution.
Haswell has also improved branch prediction – the chip's attempt to accurately guess the correct path for data flow before it actually knows which way it will proceed in if-then-else processing. The more accurate a chip's branch prediction, the less frequently it will need to start a branch over after it guesses wrong, which wastes time.
And wasting time, in contemporary chip engineering, is worse than wasting money – it's wasting power.
According to Singhal, branch-prediction improvement is "something we tend to do every generation." Additionally, the fact that the execution pipelines haven't been lengthened from those in Sandy Bridge and Ivy Bridge ensures that the "do over" time for an incorrect branch prediction hasn't been lengthened, either.
Haswell may be based on previous architectures, but it has plenty of new tricks up its silicon sleeves (click to enlarge)
Instruction buffers at the top end of Haswell's pipeline have been enlarged, which Singhal says will help the performance of apps that have a large code footprint, "which we're seeing become more and more common." They'll also help to improve Haswell chips' chances of initiating efficient code-execution parallelism – more on that in a moment, as well.
Sandy Bridge and Ivy Bridge chips can execute six operations per clock cycle; Haswell increases that number to eight. One of the "ports" for those new execution units now supports an additional integer arithmetic logic unit (ALU), which – you guessed it – provides another place to accomplish integer arithmetic or logical operations.
A new store-address port has been added, as well, which will free up some of that duty from two existing ports.
Two new ports – no waiting – plus the efficient fusing of multiply and add functions (click to enlarge)
One key newcomer to Haswell's microarchitecture is a pair of fused multiply-add (FMA) floating-point units. One slick thing about FMA is that it computes a floating-point multiply and add as a single operation, which not only saves clock cycles but also allows the result to be rounded once, rather than twice as required if the operations were done separately. This single-rounding capability improves mathematical precision.
In addition to these improvements over the existing Sandy Bridge and Ivy Bridge architectures, Haswell also introduces some new technologies and capabilities. Some of these new goodies, however, will require some software changes. "Ideally," Singhal said, "just a recompile, but in other cases a re-optimization of the code."
Is it just us, or do ports 0, 1, and 5 seem rather more well-rounded in their abilities? (click to enlarge)
One of these new capabilities is a set of compute instructions called AVX2, a beefing up of the AVX (advanced vector extensions) instructions introduced in Sandy Bridge – and no, Haswell's AVX2 has nothing to do with that other AVX2 that you content-creation types may know of from Avid.
Simply put, AVX – and now AVX2 – instructions are extensions to Intel's long-established SSE instructions that can operate on multiple data words in parallel, using a single instruction.
Here's a geek-cred tidbit: SSE is a bit of a nested acronym – or initialism, for you purists among us – that stands for "streaming SIMD extensions"; SIMD stands for "single instruction, multiple data." (SIMD, by the way, is a true acronym, pronounced "sim-dee".) Drop that into conversation at your next cocktail party and watch members of the opposite sex swoon at your erudition. Or of your own sex, should that be your preference. Whatever.
What you need to know about AVX2 is that it will allow clever coders to speed up floating-point-intensive applications by using FMA to double the number of single-precision and double-precision floating-point operations per clock cycle per core. According to Intel, expect noticeable speed-ups in such applications as image and audio/video processing, scientific simulations, financial analytics, and 3D modeling and analysis.
If you're an old Intel hand, you'll appreciate the vast improvement of AVX2 over 1996's Pentium with MMX (click to enlarge)
AVX2 improvements don't stop with the FMA-enabled doubling, however. There are also new instructions for data permutation and shuffling, and a new gather instruction allows you to load data from multiple non-contiguous places in memory, reducing latency and freeing you from having to perform pesky hand-coded data bookkeeping.
Finally, AVX2 goes beyond AVX in that it extends vector processing to integer instructions, not just floating-point ones. Your humble Reg hack is not completely clear as to exactly why that's so nifty – although I could venture a semi-educated guess or three – but Singhal's IDF audience seemed duly impressed.