Deep, deep dive inside Intel's next-generation processor

Join us on a whirlwind Haswell holiday – non-geeks heartily welcomed

Caching in, big-time

It's all well and good to have increased computing capabilities, but as Singhal said, "In order to actually use those functional units, you have to be able to feed them, you have to be able to get data to them." And that's where Haswell's improved caches come into play.

It's not that Haswell's Level 1 and Level 2 caches are any larger than those of Sandy Bridge and Ivy Bridge – they're not. What has been improved is the chip's ability to get data into and out of those caches more quickly.

Slide from IDF 2012: You gotta like the improvement curve since 2008's Nehalem. Ah, how the time does fly

As with Sandy Bridge and Ivy Bridge, Haswell's L1 instruction and data caches remain at 32KB apiece, and its per-core L2 cache remains at 256KB. What's different is the bandwidth Haswell wrings from those caches: the L1 load and store ports have been doubled in width to 256 bits – 32 bytes – each, so a lot more data can be fed to the execution units per cycle.

This is a boon to AVX2: Haswell can sustain two full AVX2 reads and a full AVX2 write in a clock cycle, according to Singhal.
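
To picture what that two-loads-plus-a-store rhythm looks like in practice, here's a minimal sketch of our own (not Intel's code): an AVX2-intrinsics vector add whose inner loop issues exactly two 256-bit loads and one 256-bit store per iteration. The function and array names are made up, and you'd build it with something like gcc -O2 -mavx2.

    /* Hypothetical example: each iteration performs two 256-bit loads and
     * one 256-bit store - the per-cycle pattern Haswell's widened L1
     * ports are claimed to sustain. */
    #include <immintrin.h>
    #include <stddef.h>

    void vadd(const float *a, const float *b, float *c, size_t n)
    {
        /* n is assumed to be a multiple of 8 (eight floats per 256-bit register) */
        for (size_t i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);   /* 256-bit load #1 */
            __m256 vb = _mm256_loadu_ps(b + i);   /* 256-bit load #2 */
            __m256 vc = _mm256_add_ps(va, vb);    /* vector add */
            _mm256_storeu_ps(c + i, vc);          /* 256-bit store */
        }
    }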

In addition to these bandwidth improvements, Singhal pointed to Haswell's improved cache-line latency and the elimination of cache conflicts in the new microarchitecture. "Previously there were cases when two loads may not be able to read the cache simultaneously because of the banking architecture that we used," he said. "We've removed those restrictions."

The take-away? Haswell can get more data to its execution units without those workhorses having to waste time and power twiddling their digital thumbs. The result? Better performance and better power savings.

Slide from IDF 2012: A thick pile o' numbers, to be sure, but if you care about cache performance, we wager you'll like what you see

The L1 cache, as described above, was not the only beneficiary of Intel engineering love. The L2 cache was improved as well: in Sandy Bridge and Ivy Bridge, a cache line could be delivered from it every other clock cycle; Haswell can deliver one every cycle.

Before we step out of Haswell's compute cores and sidle on over to its graphics core, there's one more new bennie to talk about: what Intel calls transactional synchronization extensions, or TSX.

TSX works to improve parallel processing. "We have parts today that we're shipping on the client side that go up to eight [execution] threads," Singhal said. "On the server side we support up to 20 threads per socket today, so if somebody puts together a four-socket system, they're supporting already up to 80 threads – and of course core counts will continue to go up on the server side."

To take advantage of those parallel threads, you of course need to develop parallel software. Duh. But that task can be a complete pain in the yinger, especially if all of those threads are working on the same data set in conjunction with one another.

Enter TSX, which endeavors to move the work of low-level optimization from the code writer over to the hardware upon which their workloads are running.

I won't dig deep into the intricacies of TSX except to say that it involves two technologies: hardware lock elision (HLE) and restricted transactional memory (RTM). Both require coders to mark the start and end of their critical sections – essentially lock and unlock hints – telling Haswell where it can speculatively hunt for parallelism that isn't explicitly written into the code itself.
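
For the curious, here's a hedged sketch of what lock elision looks like from C, using GCC's __ATOMIC_HLE_ACQUIRE and __ATOMIC_HLE_RELEASE hints, which emit the XACQUIRE and XRELEASE prefixes mentioned below. The spinlock itself is a toy of our own devising; build with gcc -mhle.

    /* Toy HLE-elided spinlock (illustrative, not production code). */
    static volatile int lock_var = 0;

    static void hle_lock(void)
    {
        /* XACQUIRE-prefixed exchange: the hardware elides the lock and runs
         * the critical section transactionally; on a data conflict it
         * re-executes with the lock actually taken. */
        while (__atomic_exchange_n(&lock_var, 1,
                                   __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE))
            while (lock_var)
                ;   /* spin until the lock looks free, then try again */
    }

    static void hle_unlock(void)
    {
        /* XRELEASE-prefixed store marks the end of the elided section. */
        __atomic_store_n(&lock_var, 0,
                         __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
    }

The appealing bit for legacy code is that XACQUIRE and XRELEASE are encoded as prefixes older processors simply ignore, so the same binary falls back to behaving as an ordinary spinlock on pre-Haswell kit.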

Slide from IDF 2012: Okay, okay, okay – parallel programming is hella hard, so how about some hardware help?

According to Singhal, HLE is more suitable for legacy code – developers simply wrap their existing locking code with the XACQUIRE and XRELEASE prefixes as appropriate – while RTM's XBEGIN and XEND instructions give the devs more wiggle room. "[RTM provides] a little more flexibility, but a little more work, as well," he said.
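
Here's an equally hedged RTM sketch, using the _xbegin()/_xend() intrinsics from immintrin.h (built with gcc -mrtm); the counter and fallback-lock helpers are hypothetical stand-ins for whatever your code actually protects.

    #include <immintrin.h>

    /* Hypothetical fallback-lock helpers - any ordinary mutex will do. */
    extern void take_fallback_lock(void);
    extern void release_fallback_lock(void);

    void bump_shared_counter(long *counter)
    {
        unsigned status = _xbegin();          /* XBEGIN: start a transaction */
        if (status == _XBEGIN_STARTED) {
            (*counter)++;                     /* executes speculatively */
            _xend();                          /* XEND: commit if no conflict */
        } else {
            /* The transaction aborted (conflict, capacity, interrupt...),
             * so fall back to a plain lock to guarantee forward progress. */
            take_fallback_lock();
            (*counter)++;
            release_fallback_lock();
        }
    }

A production-quality version would also check the fallback lock inside the transaction, so the speculative and locked paths can't trample each other, and would retry a time or two before giving up – but that's the general shape of the thing.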

Before we leave the compute cores and move on to Haswell's graphics and media enhancements, let's take a quick look at one last bit of compute-core goodness: advancements in the microarchitecture's ability to help shore up what Intel CEO Paul Otellini has called "the third pillar of computing," security – the other two pillars being energy efficiency and internet connectivity.

"Today, cryptography is huge," said Intel CPU architect Bret Toll at one IDF technical session. "It's very important. Every time you get on the web and do any kind of transaction, it gets encrypted and decrypted, sometimes multiple times."

Those cryptographic functions take time and processor power, so to speed them Intel has added new crypto-supporting instructions to the Haswell microarchitecture and improved existing architectural features that support encryption and decryption. The wide vectors in AVX2 provide more cryptographic oomph, as well.
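
By way of illustration – and to be clear, AES-NI arrived back in Westmere, so this is one of those existing features rather than a Haswell invention – here's what hardware-assisted AES-128 encryption of a single block looks like with the intrinsics, built with gcc -maes. The key schedule is assumed to have been expanded elsewhere.

    #include <immintrin.h>

    /* Encrypt one 128-bit block with AES-128 using AES-NI.
     * round_keys[0..10] is a hypothetical, already-expanded key schedule. */
    __m128i aes128_encrypt_block(__m128i block, const __m128i round_keys[11])
    {
        block = _mm_xor_si128(block, round_keys[0]);         /* initial whitening */
        for (int r = 1; r < 10; r++)
            block = _mm_aesenc_si128(block, round_keys[r]);  /* rounds 1-9 */
        return _mm_aesenclast_si128(block, round_keys[10]);  /* final round */
    }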

Slide from IDF 2012: A bright, shiny dime to the first Reg reader who can decode each and every entry in this alphabet soup

Toll told his audience that although past microarchitecture generations had seen improvements in support for encryption and decryption, "As you can see, on Haswell I think we've hit it out of the park with security."
