Intel shows off 'disaggregated' rack of servers, storage, and networking

Breaking up is no longer hard to do

At the "Avoton" Atom C2000 chip launch on Wednesday in San Francisco, Intel showed off several of the components of its Rack Scale Architecture working in concert and also announced a partnership with Microsoft to push the idea on its Windows platforms for data centers and on the Windows Azure public cloud.

Intel explained back in July that it was trying to "re-architect the data center," and the technologies that Chipzilla demonstrated are the first wave of many that it plans to bring to market to chop servers, storage, and networks into bits and reassemble them on the fly as pools of compute, networking, and storage at the rack level.

This is different from having storage embedded in the server or tied directly to a server through Fibre Channel or PCI Express switches or over iSCSI links. With Rack Scale, as El Reg has said before, the data center is the new rack and the rack is the new server.

In this case, Intel is using high-speed, fat-bandwidth silicon photonics links to glue processing units to storage units within the rack, thereby allowing these components to be replaced easily and independently.

The news today at the Avoton launch was that Intel has given a name to the fiber optic cables it has designed in conjunction with Corning, which knows a thing or two about glass, as well as the connectors and ports that will be used to lash server, storage, and network components together. Intel demonstrated the components working together for the first time.

Generally, big data centers have copper cables to link servers to each other through top-of-rack switches, and then fibre optic cables to link each rack to an end-of-row aggregation switch that feeds to the outside world, or perhaps to routers that link to other similar data centers if the workload is geographically dispersed.

Those copper cables are a big problem, and using silicon photonics links and smaller fibre optic cables inside the rack is going to do more than just allow different components in a system to be physically distinct and therefore independently upgradeable.

Data Center GM Diane Bryant weighing old and new cables

"There is a real need to miniaturize and gain greater efficiency and lower cost," explained Chris Phillips, who is general manager of program management for Windows Server and Systems Center at Microsoft, who was on hand to announce that Intel and Microsoft were teaming up to create the next-generation cloud scale architecture for Big Steve Stephen.

"With simple physical things like cabling, you just don't think about it until you are in a data center with 200,000 servers or 100,000 servers in it. And then you realize, wow, wires really are hard. They block airflow, and they do all of these evil things to you, and humans touch them and they screw them up. So we are excited to be engaged again with Intel on architecture and design."

Intel has already kicked off projects with the Facebook-led Open Compute Project to bring silicon photonics to rack components, and has a similar but distinct project underway in Asia with Alibaba, China Telecom, Baidu, and Tencent called Project Scorpio.

The new MXC connector currently has 32 individual fibers and mechanical models done by Intel show that it will be able to group together up to 64 fibers. Each fiber will be able to handle 25Gb/sec of bandwidth, and thus a full cable will be able to push and pull 1.6Tb/sec.
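For those keeping score at home, the arithmetic behind those figures is straightforward; here is a quick back-of-envelope check in Python, where the 25Gb/sec-per-fiber rate is Intel's number and everything else is simple multiplication:

GBPS_PER_FIBER = 25  # per-fiber rate Intel quotes for the MXC link

for fibers in (32, 64):
    total_gbps = fibers * GBPS_PER_FIBER
    print(f"{fibers} fibers x {GBPS_PER_FIBER} Gb/sec = {total_gbps / 1000:.1f} Tb/sec per cable")

# 32 fibers works out to 0.8Tb/sec per cable today; 64 fibers gives the 1.6Tb/sec peak quoted above.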

The ClearCurve cable was designed by Intel and Corning such that it can be 300 meters long, about three times the current length of fibre optic cables used in data centers today, and still push that 25Gb/sec bandwidth on each fibre in the cable.

The demo Rack Scale system mixes Atom and Xeon servers with storage and a new switch

The demo Rack Scale system that Intel was showing off at the Avoton launch had two 2U enclosures of single-socket Avoton Atom nodes, each cramming 42 processor cards into the chassis. This is much denser packaging than the prototype systems Intel was showing off in July.

The setup also had two Xeon server enclosures, each with two half-width, two-socket server nodes. The server nodes were linked to each other using the MXC connectors and ClearCurve optic cables through multiple silicon photonics switch modules in the server enclosures.

Intel also rolled out a new switch ASIC, the FM5224, which has a total of 72 ports running at the 2.5Gb/sec speed that matches the integrated Ethernet NICs on the Avoton Atom processors. The microserver switch, as it is being called, also has eight 10Gb/sec or two 40Gb/sec uplinks. NEC, Super Micro, and Quanta are building switches based on this FM5224 chip.
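A bit of quick port math on those figures shows how the FM5224 is carved up; the 2.25:1 oversubscription ratio below is our own back-of-envelope calculation from the stated port counts, not a number Intel quoted:

downlink_gbps = 72 * 2.5           # 72 Avoton-facing ports at 2.5Gb/sec apiece
uplink_gbps = max(8 * 10, 2 * 40)  # either uplink option tops out at 80Gb/sec
print(downlink_gbps, uplink_gbps, downlink_gbps / uplink_gbps)

# 180Gb/sec of downlink against 80Gb/sec of uplink, or 2.25:1 oversubscription at the top of the rack.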

Intel also put a plain vanilla disk array in the demo and linked it up to the server nodes over Ethernet. The demo itself was a few seconds of Jason Waxman, who runs Intel's Cloud Platforms Group, dynamically allocating disks to Atom and Xeon server nodes and showing the bandwidth coming through the lightpipes.

In addition to the Rack Scale demo, Diane Bryant, who is general manager of Intel's Data Center and Connected Systems Group, said that Intel was rolling out a new multi-system management controller that would be able to manage up to eight individual server nodes. Intel has also created a memory DIMM design called the Compressed Footprint Connector that allows twice as much memory to be crammed into a server slot. ®
