Intel goes wide and deep with Xeon E5 assault

Four socket to me

That means, of course, that the Xeon E5-4600 is tag-teaming with the E7-4800 against four-socket Opteron 6200 boxes, with the E7 shooting high and the E5 shooting low, taking on AMD with different price points and feature sets.

Block diagram of a Xeon E5-4600 processor (click to enlarge)

The Xeon E5-4600 chip has many of the same feeds and speeds as the E5-2600, as you would expect since it is basically the same chip, but with four QPI links linking four processors playing ring around the rosy instead of two QPI links for two processors linking arms and doing windmills. (You remember doing both. You're not that old.) The Xeon E5-4600 has four memory channels per socket and up to three memory sticks per channel, for up to 48 memory slots and a maximum of 1.5TB of memory shared across the four sockets.
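The slot and capacity figures hang together arithmetically; here is a back-of-the-envelope check. The 32GB-per-stick figure is an inference from the numbers above, not something Intel states here:

```python
# Sanity-check the E5-4600 memory figures quoted above.
sockets = 4
channels_per_socket = 4
sticks_per_channel = 3

slots = sockets * channels_per_socket * sticks_per_channel
print(slots)  # 48 memory slots across the four sockets

# What stick size does the 1.5TB ceiling imply? (Inferred, not stated.)
gb_per_stick = 1.5 * 1024 / slots
print(gb_per_stick)  # 32.0 -> 32GB DIMMs
```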

Intel's Xeon E5-4600 processors

The processors, which are made using Intel's 32 nanometer process like other Sandy Bridge chips, come in variants with four, six, or eight cores, and some of them do not support Turbo Boost or HyperThreading. The three top-bin parts can use 1.5 volt, 1.6GHz main memory, as can the oddball (and HPC-aimed) E5-4617, which runs at 2.9GHz – the highest clock speed in the E5-4600 lineup – and which does not support HyperThreading. The two low-bin parts support only the cheaper and slower memory, and in the middle of the lineup you can choose regular 1.5 volt or low-voltage 1.35 volt memory running at 1.3GHz, 1.07GHz, or 800MHz.

The Xeon E5-4600 is aimed at four-socket blade servers and density-optimized rack servers that are sometimes used by supercomputer centers, sometimes by enterprises, and increasingly by businesses of all stripes in China. (Call it the China Syndrome, but for whatever reason, four-socket servers are more popular than two-socket machines in China.)

SMP scalability is the whole point of buying a four-socket node, and Intel is showing some pretty good numbers:

Relative performance of Xeon E5-2600s and E5-4600s (click to enlarge)

Intel trotted out the SPECint_rate2006 and SPECfp_rate2006 integer and floating point benchmarks, as well as an internal server virtualization benchmark test, to compare a two-socket server using the six-core Xeon E5-2630 – which has a 2.3GHz clock speed and 15MB of L3 cache – to a four-socket machine using the six-core Xeon E5-4610, which clocks at 2.4GHz and also has 15MB of cache. Both machines were configured using the C606 chipset, with the duo having 64GB of main memory and the quad 128GB. As you can see, for these workloads at least, converting that two-socket machine into a four-socket box basically doubles the performance, with little overhead for the SMP clustering.

And that is why Intel expects a number of customers – especially those doing server virtualization en masse – to give the Xeon E5-4600s some play in the data center, displacing at least some two-socket machines. Here's why:

The consolidation value of Xeon E5-4600 servers (click to enlarge)

By Intel's math, even if you pay slightly more per server for these four-socket boxes (Intel reckons around $12,700 for a four-socket machine compared to just over $7,000 for a two-socket machine), it takes half the number of physical boxes to support the same workload, so you spend $71,800 less buying the bigger boxes at the numbers shown. And when you add in lower operating system licensing, power, cooling, and real estate costs over a four-year term, the quads save you around $264,000 over four years. (This comparison put Windows Server 2008 R2 Enterprise Edition on both sets of boxes, and that may not be the most accurate scenario, but it does drive most of those savings above – about $200,000, in fact.) In any event, Intel says the quad can save you 24 per cent off the total cost of ownership compared to the duo. Do your own comparisons when the servers are out.
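The shape of that consolidation math is easy to reproduce. A minimal sketch, with the caveat that the fleet size below is an illustrative assumption – only the roughly-$12,700 quad and roughly-$7,000 duo prices come from Intel's comparison, and the article doesn't say how many boxes Intel modeled:

```python
# Hypothetical consolidation math in the spirit of Intel's comparison.
duo_price = 7_000     # two-socket box, per Intel's rough figure
quad_price = 12_700   # four-socket box, per Intel's rough figure

duo_count = 100               # assumed fleet size (not from the article)
quad_count = duo_count // 2   # half as many quads for the same workload

duo_hw = duo_count * duo_price
quad_hw = quad_count * quad_price
print(duo_hw - quad_hw)  # hardware saving despite the pricier quads
```

Even with the pricier per-box quads, halving the box count leaves hardware money on the table – before the licensing, power, cooling, and floor-space savings that dominate Intel's four-year total.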

It's a pity, then, that the Sandy Bridge-EP design doesn't have three QPI links per socket. If it did, you could probably build a glueless eight-socket box with a maximum of three hops between any two CPUs in the SMP cluster. Maybe that's what Intel has in store for some future Xeon. ®
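That three-hop figure checks out for at least one plausible wiring: with three links per socket, eight sockets can be cabled as a 3-cube, each socket linked to the three sockets whose ID differs by one bit. A quick breadth-first search over that hypothetical topology (not anything Intel has described) confirms the worst case is three hops:

```python
from collections import deque

def max_hops(n_sockets: int = 8) -> int:
    """Worst-case hop count in a 3-cube of sockets (hypothetical topology)."""
    def neighbours(s):
        return [s ^ (1 << b) for b in range(3)]  # flip each of the 3 ID bits

    worst = 0
    for start in range(n_sockets):
        dist = {start: 0}
        queue = deque([start])
        while queue:  # breadth-first search from this socket
            cur = queue.popleft()
            for nxt in neighbours(cur):
                if nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)
        worst = max(worst, max(dist.values()))
    return worst

print(max_hops())  # 3
```

The worst case is between diagonally opposite corners of the cube – socket 0 (binary 000) talking to socket 7 (binary 111) takes three hops.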
