Intel wants to reconstruct whole data centers with its chips and pipes

The rack is the new server, and the data center is the new rack

A sneak peek at server disaggregation in action

Jason Waxman, general manager of the Cloud Platforms Group within the Data Center and Connected Systems Group, got to do a little show-and-tell with several technologies that Chipzilla has been talking about since the Open Compute Summit earlier this year. They relate to the 100Gb/sec silicon photonics interconnects created to link the elements of "exploded" servers back together when they are put into racks, forming those pools of compute, memory, I/O, and storage that Bryant was talking about above.

Networking based on silicon photonics becomes a new kind of rack backplane for server and storage nodes

"Customers are not only asking us to meet Moore's Law, but to beat Moore's Law," explained Waxman. That's a tall order, of course.

To beat Moore's Law scaling in the data center, Intel thinks that cloud infrastructure – literally meaning the collection of servers, storage, and networking – has to change.

First, customers will want to keep the uniformity of the instruction set – and Intel's presumption is that this means keeping the x86 instruction set it controls with its various processors – but they will also want different processing elements optimized for very specific workloads running in the data center.

Hence the rush to get server-class Atom processors into the field, to keep the core counts and performance going up on the Xeon E5 and E7 processors, and to add the Xeon Phi parallel x86 coprocessor, which is getting traction in the high performance computing space.

The other thing that cloud operators are looking for is the ability to "compose" server infrastructure on the fly inside of a rack from that pool of compute, memory, storage, and I/O.

Obviously, no one is able to do this quite yet. But the Rack Scale architecture that Intel is proposing to its data center hardware manufacturing customers is a step in that direction. Server racks already have shared power, cooling, and management, and with the Rack Scale design, Intel is working with partners to break I/O away from the server nodes and use optical interconnects and switching components in the rack and server enclosure to create a rack-level fabric.

This is something that Egenera was trying to do a decade ago with its BladeFrame machines without the benefit of optical interconnects, you will recall.

In the long run, the goal is to have those pools of compute, memory, storage, and I/O under the control of an orchestration layer, and when you are done you have the dreamy and fluffy "software-defined infrastructure" that Intel is proposing as the vision of the future.
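
To make the composability idea a little more concrete, here is a minimal sketch, in Python, of how an orchestration layer might carve logical servers out of rack-level pools. Every class name, resource name, and capacity figure is a hypothetical illustration of the concept, not anything Intel has published.

# Toy model of rack-level resource pooling: an orchestration layer carves
# logical "servers" out of shared pools of compute, memory, storage, and I/O.
# All names and capacities are illustrative assumptions, not Intel's design.
from dataclasses import dataclass, field

@dataclass
class RackPool:
    cores: int = 480          # e.g. 30 eight-core Atom nodes plus Xeon trays
    memory_gb: int = 6144
    storage_tb: int = 200
    nic_ports: int = 56       # photonic downlinks available on the patch panel
    allocations: list = field(default_factory=list)

    def compose_node(self, name, cores, memory_gb, storage_tb, nic_ports=1):
        """Reserve a slice of the rack's pools and hand it back as a 'node'."""
        wanted = dict(cores=cores, memory_gb=memory_gb,
                      storage_tb=storage_tb, nic_ports=nic_ports)
        for resource, amount in wanted.items():
            if getattr(self, resource) < amount:
                raise RuntimeError(f"rack is out of {resource}")
        for resource, amount in wanted.items():
            setattr(self, resource, getattr(self, resource) - amount)
        self.allocations.append((name, wanted))
        return name, wanted

rack = RackPool()
print(rack.compose_node("web-01", cores=8, memory_gb=32, storage_tb=1))
print(rack.compose_node("analytics-01", cores=64, memory_gb=512, storage_tb=20))

The point of the toy is the separation: the orchestration layer hands out slices of the rack, and the physical hardware behind those slices can be swapped or upgraded without the workload caring, which is exactly the pitch Waxman goes on to make.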

Waxman brought out two servers that are based loosely on the Open Compute vanity-free server enclosure, one based on Xeon processors and the other on Atoms, and both having virtualized I/O in the back of the server enclosure and using light pipes to link to switching at the top of the rack. Here's the three-node Xeon variant:

Intel cloud GM Jason Waxman shows off a three-node server chassis with virtualized I/O

And here's one that is sporting 30 microservers based on the forthcoming "Avoton" Atom C2000 processor, which has eight cores and integrated Ethernet network interfaces on the die:

A chassis stuffed with 'Avoton' Atom C2000 server nodes and disaggregated I/O

By shifting to a three-node chassis, Intel boosts compute node density in the rack by 50 per cent compared to the common double-wide server enclosures in use today in cloud data centers. In Intel's comparison, those server nodes have Gigabit Ethernet downlinks from the top-of-rack switch, which in turn has four 10Gb/sec Ethernet links to the outside world.

By shifting to the silicon photonics networking in the Rack Scale architecture, Waxman says Intel can cut the cabling by a factor of three while boosting downlink performance by a factor of 25 and uplink speeds by a factor of 2.5.

This particular rack comparison has a silicon photonics patch panel with 25Gb/sec downlinks. Instead of running one copper LAN cable to each node, you run one Intel optical cable to each of the 14 enclosures in the rack, and network capacity is distributed by the patch panel.

A single 100Gb/sec uplink comes out of the rack and is shared by all of the servers, and it compares favorably to the 40Gb/sec of aggregate bandwidth coming out of the top-of-rack switch used in a typical cloud data center today.
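
The back-of-the-envelope sums behind those factors work out roughly as follows, in Python. The 14-enclosure and three-node figures come from Intel's comparison above; treating the baseline as one copper Gigabit Ethernet cable per node is our assumption.

# Rough arithmetic behind Intel's rack comparison. Enclosure and node counts
# come from the comparison above; the one-copper-cable-per-node baseline is
# an assumption made to reproduce the stated factors.
enclosures_per_rack = 14

# Density: double-wide (two-node) chassis versus the new three-node chassis
old_nodes = enclosures_per_rack * 2                  # 28 nodes per rack
new_nodes = enclosures_per_rack * 3                  # 42 nodes per rack
print(f"density gain:   {new_nodes / old_nodes - 1:.0%}")        # 50%

# Cabling: one copper cable per node versus one optical cable per enclosure
print(f"cabling cut:    {new_nodes / enclosures_per_rack:.0f}x")  # 3x

# Downlinks: Gigabit Ethernet versus 25Gb/sec links off the photonic patch panel
print(f"downlink boost: {25 / 1:.0f}x")                           # 25x

# Uplinks: four 10Gb/sec links versus a single shared 100Gb/sec pipe
print(f"uplink boost:   {100 / (4 * 10):.1f}x")                   # 2.5x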

The other neat thing about these hypothetical servers engineered by Intel is that they all draw their power from the rack at 12 volts. The rack still has six power supplies plus a hot spare for redundancy, as you want in an enterprise-class product; the supplies are just shared across the rack instead of sitting inside a single blade server enclosure.

But the real game changer is this: With this Rack Scale architecture, Intel can provide hyperscale data center operators with a more modular setup, allowing them to, for instance, swap out server nodes without even touching their networking configurations, which live inside the chassis rather than inside the node itself. And this is music to the ears of cloud operators.

Without naming names, Waxman said that one big cloud operator was able to get its hands on "Sandy Bridge" Xeon E5 chips six months early, and the price/performance and performance/watt improvements of those chips over the ones in its earlier systems saved that unnamed company (almost certainly Google, but maybe Microsoft or Amazon Web Services) a stunning $200m in operating costs. So being able to swap out processors easily is a big deal to cloud operators.

It will be even more interesting when Intel can break CPU modules away from memory modules, and Waxman tells El Reg that this is the next phase in Rack Scale development.

Processor technology changes faster than memory technology does, and each represents about a quarter of the typical cost of a single-node system, in El Reg's reckoning, based on a reasonably beefy setup. Being able to change one without the other would be very cool indeed, so long as performance doesn't take a big hit. ®
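
For a sense of the scale behind that reckoning, here is a rough, hypothetical bill of materials in Python; the dollar figures are invented for illustration and are not Intel's or anyone else's published pricing.

# Hypothetical cost split for a reasonably beefy two-socket node, just to
# illustrate the "roughly a quarter each" claim. All prices are made up.
node_cost = 8000
cpu_cost = 2000        # two mid-range Xeon E5 processors
memory_cost = 2000     # a few hundred GB of registered memory
other_cost = node_cost - cpu_cost - memory_cost   # board, disks, NICs, chassis

print(f"CPU share:    {cpu_cost / node_cost:.0%}")      # 25%
print(f"memory share: {memory_cost / node_cost:.0%}")   # 25%

# If CPU modules can be refreshed without touching memory modules, roughly
# half of the node's cost no longer has to turn over on the processor's
# faster upgrade cadence.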
