Facebook's new Open Compute V2 servers

AMD and Intel do boards for high freaky trading

Now that Intel and AMD have finally launched their respective Xeon E5-2600 and Opteron 6200 processors for two-socket servers, the Open Compute Project, a foundation created by Facebook to open source its data center technologies, can at last divulge the feeds and speeds of the Open Compute V2 machines.

That's precisely what happened at the Open Compute Summit held today in San Antonio at the offices of Rackspace Hosting, which loves Open Compute hardware and OpenStack cloud fabrics. The new OCP V2 servers, as El Reg previewed last June, are double-stuffed machines that cram two two-socket x86 servers and their power supplies and fans into a 1.5U Open Compute chassis.

The original Open Compute V1 machines, which were deployed in Facebook's Prineville, Oregon data center, put a single two-socket server in a slightly different vanity-free, bare-bones 1.5U chassis. There were two Open Compute V1 machines: one based on AMD's twelve-core Opteron 6100 processor with a relatively heavy memory footprint, and another based on Intel's six-core Xeon 5600 processor with slightly less memory.

Facebook has not yet taken pictures of the V2 machines, so we have to make do with mechanical drawings like this one:

Facebook's latest double-stuffed server chassis

The Intel mobo used in the V2 server design is code-named "Windmill," Frank Frankovsky, director of hardware design and supply chain at Facebook and one of the founders of the Open Compute Project, tells El Reg. You can get the full specs for the Windmill system here.

The Windmill board looks very much like other so-called "twin" server, or half-width, system boards on the market. This one is certified to run the Xeon E5-2600 models that burn 115 watts or less, so forget about using the 130 or 135 watt parts; the Xeon E5-2670, running at 2.6GHz with eight cores, is your top bin part for this machine.
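If you want to see how that thermal ceiling picks the top bin for you, here is a quick, illustrative Python sketch; the parts list is a rough sample with commonly quoted clocks and wattages, not an authoritative catalog.

```python
# Illustrative only: pick the fastest Xeon E5-2600 part that fits under the
# Windmill board's 115 watt certification ceiling. The parts list below is
# a rough sample, not an exhaustive catalog.
TDP_CAP_WATTS = 115

parts = [
    # (model, cores, base clock in GHz, TDP in watts)
    ("Xeon E5-2690",  8, 2.9, 135),
    ("Xeon E5-2680",  8, 2.7, 130),
    ("Xeon E5-2670",  8, 2.6, 115),
    ("Xeon E5-2665",  8, 2.4, 115),
    ("Xeon E5-2687W", 8, 3.1, 150),  # workstation part
]

eligible = [p for p in parts if p[3] <= TDP_CAP_WATTS]
top_bin = max(eligible, key=lambda p: p[2])
print(f"Top bin under {TDP_CAP_WATTS}W: {top_bin[0]} at {top_bin[2]}GHz, {top_bin[1]} cores")
# -> Top bin under 115W: Xeon E5-2670 at 2.6GHz, 8 cores
```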

Facebook has certified DDR3 main memory running at anywhere from 800MHz to 1.6GHz in this machine, with a maximum of 512GB using 32GB sticks in its sixteen memory slots. You can use 1.5 volt or 1.35 volt memory, and load-reduced (LR-DIMM) sticks are also supported for extra performance with all the memory channels stuffed on the box.
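The memory ceiling is straightforward arithmetic if you want to sanity check it; a minimal sketch, assuming every slot gets the same size stick:

```python
# Back-of-the-envelope memory math for the Windmill board: sixteen DDR3
# slots, each filled with identical DIMMs.
def max_memory_gb(slots: int, dimm_gb: int) -> int:
    """Total capacity with every slot populated with the same size stick."""
    return slots * dimm_gb

print(max_memory_gb(slots=16, dimm_gb=32))  # -> 512, the quoted maximum
print(max_memory_gb(slots=16, dimm_gb=16))  # -> 256 with cheaper 16GB sticks
```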

The Windmill board has four PCI-Express 3.0 slots (one x16, one x8, and two x4s) plus another eight PCI-Express 2.0 slots that hang off the x4 uplink to the "Patsburg" C602 chipset used on the board. There is also one PCI-Express x8 mezzanine card slot and two integrated Intel NICs, with a total of three ports running at Gigabit Ethernet speeds.

The AMD Opteron 6200 mobo for the V2 double-stuffed Facebook machines being open sourced through the OCP is known by the code-name "Watermark," and you can find the specs for this mobo here. As was the case with the V1 machines, the AMD board is heavier on both cores and memory capacity, but the Intel board now wins hands down on I/O bandwidth thanks to the PCI-Express 3.0 slots driven by the on-chip controllers in the Xeon E5-2600s. AMD's Opteron 6200s are still pushing PCI-Express 2.0 slots.

As was the case with the Xeon-based Windmill mobo, the Opteron-based Watermark mobo tops out with processors rated at 115 watts in the Opteron 6200 line. That means you can use all of the Opteron 6200 parts except the 2.6GHz, 16-core Opteron 6282 SE, and that your top bin is the 2.3GHz Opteron 6276, which, by the way, costs about half as much as the top-bin Xeon E5 part that can plug into the Open Compute V2 chassis.

The Opteron 6200 can support up to 384GB per socket, but Facebook is topping it out at the same 512GB across sixteen memory slots. (Go figure.) Depending on the chipset (SR5650, SR5670, or SR5690) that an ODM chooses, you get a varying number of PCI-Express 2.0 slots. Fully loaded with the SR5690, you can have one x16 slot, one x4 for the MiniSAS drive connection, one x4 mezz card, and an Intel dual-port Gigabit Ethernet NIC.

Regardless of which motherboard you use in the Open Compute V2 chassis, you can cram some 3.5-inch disks into the cage behind the 700 watt power supply shared by the two boards, as well as in front of each motherboard, for a total of six drives shared by the two servers.

Frankovsky tells El Reg that the V2 machines slide into the triple racks that were launched with the V1 machines back in April 2011, and with some modifications they will be able to slide into the new Open Rack server and storage enclosure that Facebook also announced at the Open Compute Summit this week.

Future Open Compute for freaky trading

In addition to these two boards, Intel and AMD have been cooking up their own motherboards, called "Decathlete" and "Roadrunner" respectively, aimed at the financial services industry and suitable for the Open Compute V1 chassis with the performance oomph you need for high frequency trading and similar workloads.

The Intel Decathlete board is based on the Xeon E5-2600 and will support the full range of processors, including the 130 watt and 135 watt parts (and possibly the 150 watt E5-2687W part aimed at workstations, which tops out at 3.1GHz with eight cores). This board, which you can see here, has 24 memory slots and a modular I/O setup that lets users plug in quad-port Gigabit Ethernet, dual-port 10GBase-T, dual-port 10GE, or single-port FDR (56Gb/sec) InfiniBand network interface cards.
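To make that modular I/O menu a little more concrete, here is one way to model the four listed options; the structure and field names are our own shorthand for illustration, not anything taken from the Decathlete spec.

```python
# An illustrative model of the Decathlete board's plug-in network options,
# as listed above. Per-port speeds are the nominal line rates.
from dataclasses import dataclass

@dataclass(frozen=True)
class MezzNicOption:
    name: str
    ports: int
    gb_per_sec_per_port: int

DECATHLETE_NIC_OPTIONS = [
    MezzNicOption("Quad-port Gigabit Ethernet", ports=4, gb_per_sec_per_port=1),
    MezzNicOption("Dual-port 10GBase-T",        ports=2, gb_per_sec_per_port=10),
    MezzNicOption("Dual-port 10GE",             ports=2, gb_per_sec_per_port=10),
    MezzNicOption("Single-port FDR InfiniBand", ports=1, gb_per_sec_per_port=56),
]

for opt in DECATHLETE_NIC_OPTIONS:
    print(f"{opt.name}: {opt.ports} x {opt.gb_per_sec_per_port}Gb/sec")
```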

This Intel board is apparently designed for 1U and 2U chassis, so it is not clear how this will be reconciled with the 1.5U Open Compute V1 chassis. It looks like Intel is sticking with the old 1U and 2U heights, open sourcing the designs through the Open Compute Project now, and working out how to fit them into the Open Compute racks at some point in the future.

For 2U machines, the Decathlete mobo, which measures 16.5 inches deep by 16.7 inches wide, has three PCI-Express 3.0 x8 slots. The processors and memory are at the front of the board, with a block of twelve memory slots dead center, the two CPUs plunked next to the memory, and each CPU then flanked by another six memory slots along the outside edges of the board. This arrangement means cool air flows over the memory and CPUs from the outside.

The AMD board for high freaky traders is the Roadrunner, and you can see its specification here. This is a two-socket Opteron 6200 box with 24 memory slots, also arranged so the CPUs and memory are all lined up across the back of the machine to promote better cooling.

This board is 16.5 inches wide by 16 inches deep, and has six SATA ports along the back. It is designed to accommodate the current Opteron 6200 and future "Abu Dhabi" processors from AMD, including SE parts. It will top out at 768GB of main memory when fully loaded, and the specification calls for 1.25 volt as well as 1.35 volt and 1.5 volt DDR3 memory.

This Roadrunner design will come in 1U, 1.5U, 2U, and 3U chassis designs that will not initially slide into the Open Racks, but will be modified to do so eventually. Here's the matrix of features for each of the four Roadrunner configs:

Option matrix for AMD's Roadrunner Open Compute mobo

It looks like the 1U server is being aimed at HPC workloads, the 1.5U at Facebook and anyone else that has the Open Compute triple racks, the 2U box at general purpose machines, and the 3U box as a storage server. ®
