Intel goes wide and deep with Xeon E5 assault
Blunting AMD's advantages
If you were planning on buying new servers in the coming weeks and months, Intel just gave you a whole lot of homework. And if you work at Advanced Micro Devices, you're getting some homework, too.
Intel already has a slew of E5-2600 processors aimed at workhorse two-socket machines and a bunch of E7s in different flavors aimed at machines with two, four, or eight sockets. There are also E5-1600 processors, aimed predominantly at workstations, plus last year's Xeon E3-1200 processors and the Xeon E3-1200 v2 chips for single-socket servers and workstations launched today. But wait, that's not all you get. With the full revamping of the Xeon lineup today, Intel is adding 17 more "Sandy Bridge" E5 processors for either two-socket or four-socket boxes.
Now server-makers and their customers will be given a bewildering number of ways to make a Xeon server that has specific CPU, memory, and I/O configurations. And you need to compare these against new Opteron 3200, 4200, and 6200 processors from Advanced Micro Devices if you want to really do your homework.
That said, more SKUs with different prices for different features is generally a good thing for server shoppers, even if you need to shop a little more carefully than you might have in the past.
With the "Sandy Bridge-EN" Xeon E5-2400 and "Sandy Bridge-EP" Xeon E5-4600 launched today, Intel is basically downshifting its existing two-socket and four-socket processors in terms of both features and price to better chase specific markets – and to keep AMD off balance as that company tries to position its Opteron 4200 and 6200 chips as the cheaper, more core-heavy alternatives to Intel's Xeon E5-2600 and E7-4800 for machines with two or four sockets, respectively.
"We think the E5-2400 will be the preferred product for the HPC market," Dylan Larson, Xeon platform marketing director, tells El Reg. Larson says that the E5-4600, with its denser four-socket format, will be "killer for HPC" as well when customers want fatter nodes in their clusters. Moreover, because the chips and chipsets are priced lower than the much more expansive Xeon E7 family (in terms of QuickPath Interconnect links, memory and I/O bandwidth and capacity, and core counts), Larson expects the market for four-socket servers to expand rather than the new chips cannibalizing the E7s in the two-socket and four-socket arenas.
And while Intel doesn't say this explicitly, the much wider Xeon lineup is also being driven in part by OEM customers that are building storage arrays and networking gear based on Xeon chips rather than on PowerPC or proprietary parts. These customers have their own performance, feature, and pricing demands, and if Intel is to double the revenue stream for its Data Center and Connected Systems Group to $20bn by 2015 – as it plans to do – it is going to have to field products that not only compete against AMD in the server racket, but also compete against other circuits that have little to do with servers. Well, excepting that they feed them with data and connect them to the outside world, of course.
A snip and a twist
If you recall the Xeon E5-2600 launch from early March, these chips, which plug into the LGA2011 (Socket R) socket, have two QPI links between the sockets, allowing a massive amount of data interchange so the two sockets can share a relatively large amount of I/O, drive the PCI-Express 3.0 controllers on the chips, and reach the many other I/O devices hanging off the "Patsburg" C600 series of chipsets. Simply put, the new Xeon E5-2400 is a similar "Romley" platform design with one of those QPI links between the processor sockets snipped off, while the Xeon E5-4600 design takes the dual QPI links coming off each processor and uses them to gluelessly connect the processors into a four-socket ring.
There is a performance penalty when jumping from one processor socket to the one furthest away – two hops instead of one – but plenty of SMP servers have been architected this way and show decent enough performance. If you need better SMP scalability and more reliability, then the E7 is what you need. But remember that an E7 machine is also relatively expensive, due to its buffered memory cards, which help boost performance and allow the platform to scale to eight sockets in a single system image.
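To make the hop arithmetic concrete, here is a minimal Python sketch of hop counts between sockets joined in a glueless ring, as the E5-4600 does. The `ring_hops` function is a hypothetical illustration of the topology, not anything Intel ships:

```python
def ring_hops(src, dst, sockets=4):
    """Minimum number of QPI hops between two sockets in a ring topology."""
    d = abs(src - dst) % sockets
    return min(d, sockets - d)

# Adjacent sockets are one hop apart; the socket across the ring is two.
print(ring_hops(0, 1))  # → 1
print(ring_hops(0, 2))  # → 2 (the furthest socket in a four-socket ring)
```

This is why remote memory accesses to the opposite socket pay the extra-hop penalty described above, while a fully connected E7-style topology would keep every socket one hop away.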
And for the record: Intel has no intention of scaling up the E5-4600 to eight sockets and is "making a ton of investments" in future E7 designs. Intel has been mum about exactly what those future E7 plans might be, and Larson was not at liberty to discuss it further.
Both new families of Xeon chips announced today support features that debuted with the Xeon E5-2600s back in March, including the Advanced Vector Extensions (AVX) vector math unit (which can do two 128-bit or one 256-bit floating point operation per clock), Turbo Boost 2.0 clock frequency boosting (which is a lot more sophisticated than the original Turbo Boost), on-chip I/O processing (including PCI-Express 3.0 controllers), and Data Direct I/O, which allows Ethernet controllers and other I/O adapters to route traffic directly to the processor's L3 cache instead of making multiple ricochets in and out of main memory, as prior generations of Xeon chips did.
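As a rough illustration of what that 256-bit AVX width buys, here is a plain-Python model (hypothetical code, not real SIMD intrinsics) of a packed double-precision add: four 64-bit doubles fit in one 256-bit register, so one vector operation stands in for four scalar additions:

```python
AVX_BITS = 256
DOUBLE_BITS = 64
LANES = AVX_BITS // DOUBLE_BITS  # four double-precision lanes per register

def vadd_pd(a, b):
    """Model a 256-bit packed-double add: four scalar adds in one operation."""
    assert len(a) == len(b) == LANES
    return [x + y for x, y in zip(a, b)]

print(vadd_pd([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]))
# → [11.0, 22.0, 33.0, 44.0]
```

In real code this work would be done by a single AVX instruction per clock rather than a Python loop; the point is only the fourfold lane count.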
Intel's Trusted Execution Technology (TXT) security feature for operating systems and hypervisors is on all of the new Xeon chips, as are the AES-NI instructions for doing AES encryption and decryption in silicon instead of in software. Most of the chips have Turbo Boost as well as HyperThreading, a layer of abstraction etched into the Xeon circuits that virtualizes a core to make it look like two logical threads to operating systems and hypervisors. A few models in the rounded-out Sandy Bridge Xeon lineup do not have Turbo Boost or HyperThreading. Generally, when Intel deactivates a feature, it charges less money or gooses the performance of some other feature.