Facebook reveals next-gen Open Compute wares

Double your servers, double your fun

Bumped performance, looser thermal margin

It's too early to tell what kind of performance boost to expect from the future servers, but Michael says that given the coming increases in core counts, clock speeds, memory capacity and speed, and other factors, Facebook expects a server node to deliver at least 50 per cent more oomph on its workloads. And this time around, there's a bit of thermal-envelope margin should Facebook want to goose a component or two.

The future motherboards that Facebook designed with Quanta have extra I/O lands to support 10 Gigabit Ethernet ports, plus PCI Express mezzanine cards to add more I/O capacity. The machines also do away with external baseboard management controllers on the Intel mobos, exploiting Intel's Management Engine BIOS Extension and a subset of functions in Intel's chipsets to handle the remote management chores the BMC service processor used to do. This functionality is not available on the AMD-based machines, so there Facebook is going with a barebones BMC rather than a high-end, and relatively costly, option.
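For a sense of the sort of out-of-band chore a BMC, or Intel's chipset-level stand-in, has to cover, here's a minimal Python sketch driving the standard ipmitool CLI over IPMI. The host and credentials are placeholders, and this illustrates generic remote management, not Facebook's actual setup:

```python
# Minimal sketch: query a server's power state out-of-band over IPMI,
# the sort of chore a BMC service processor handles. Assumes the
# ipmitool utility is installed and a BMC is reachable on the LAN;
# the host, user, and password below are placeholders.
import subprocess

def bmc_power_status(host: str, user: str, password: str) -> str:
    """Ask the baseboard management controller for the chassis power state."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", user, "-P", password, "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "Chassis Power is on"

print(bmc_power_status("10.0.0.42", "admin", "changeme"))
```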

Facebook goes through four phases as it rolls out servers: EVT, DVT, PVT, and mass production. EVT is short for engineering verification and test, when prototype boards come back from ODM partners and low-level signal checking is done on components.

The design verification and test phase – DVT – comes next, when a set of higher-level tests are done on prototype systems. In this phase, Facebook looks for system flaws and also performs early tests on its software stack.

The PVT phase – production verification and test – requires component suppliers to simulate production runs of components and completed systems, and to deliver finished machines, preinstalled in racks, to Facebook data centers. Production workloads are run on the boxes in the PVT phase, and once they pass muster, Facebook places the big order and mass production begins.
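The gating between those phases amounts to a simple pass/fail pipeline. As a toy illustration only, not anything Facebook has published, here's how that staged rollout might be modelled, with hypothetical check callables standing in for the real signal, system, and production-workload tests:

```python
# Toy model of a staged hardware rollout: each phase must clear its
# checks before the next one starts. The phase names come from the
# article; the check callables are hypothetical stand-ins.
PHASES = ["EVT", "DVT", "PVT", "mass production"]

def run_rollout(checks):
    """checks maps a phase name to a callable returning True on pass."""
    for phase in PHASES:
        passed = checks.get(phase, lambda: True)()
        print(f"{phase}: {'pass' if passed else 'FAIL'}")
        if not passed:
            return phase  # the rollout halts at the failing phase
    return None  # all phases cleared; the big order gets placed

# Example: everything passes except DVT's system-level tests.
halted_at = run_rollout({"DVT": lambda: False})
```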

In addition to engineering the new servers, Facebook also had to tweak the battery backups to handle the additional load. The battery cabinets that Facebook designed as companions to its rack servers can now take 85 kilowatts of load, up from 56 kilowatts in the first generation of machines.
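That works out to roughly a 52 per cent jump in deliverable load, which lines up neatly with the 50-plus per cent per-node performance target. The back-of-the-envelope sum, for the skeptical:

```python
# Back-of-the-envelope check on the battery cabinet upgrade,
# using the figures from the article.
old_kw, new_kw = 56.0, 85.0
increase = (new_kw - old_kw) / old_kw
print(f"Capacity increase: {increase:.0%}")  # prints "Capacity increase: 52%"
```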

The Open Compute storage array

Michael also showed off a storage array that puts two disk controllers and two sets of 25 disk drives into a single chassis. The blog post says that the design provides flexibility by allowing you to vary the ratio of storage capacity to compute capacity to reflect the needs of different workloads.

This disk array is still in its testing phase, so Michael was a bit cagey about what is in the box, but it reminds me of Sun Microsystems' "Thumper" X4500 storage arrays, which were based on a two-socket Opteron motherboard with six eight-port SATA disk controllers on the board.

In both the Facebook and Sun arrays, the disk drives mount vertically into the chassis from above, rather than horizontally as is usual in servers. It looks like the Open Compute storage array is doing five rows of five per block, and putting two blocks into the box.
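Taking that layout at face value gives 50 drives per chassis, and raw capacity then scales with whatever drive Facebook settles on. A quick sketch, where the 3TB drive size is our assumption rather than anything Facebook has confirmed:

```python
# Raw capacity per chassis from the article's layout: two blocks,
# each five rows of five vertically mounted drives. The 3TB drive
# size is our assumption for illustration; Facebook hasn't said.
blocks, rows, cols = 2, 5, 5
drives = blocks * rows * cols          # 50 drives per chassis
drive_tb = 3                           # assumed 3.5" SATA capacity
print(f"{drives} drives -> {drives * drive_tb}TB raw per chassis")
```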

Given the stinginess of hyperscale data center operators, those disks are almost certainly cheap 3.5-inch SATA drives. ®
