Original URL: https://www.theregister.com/2011/04/08/open_compute_server_comment/

Facebook's open hardware: Does it compute?

Open hardware is not open source

By Timothy Prickett Morgan

Posted in Channel, 8th April 2011 06:00 GMT

Comment What happens if, as we saw at the launch of Facebook's Open Compute Project on Thursday, the design of servers and data centers is open sourced and completely "demystified"?

If open source software is any guide, hardware infrastructure will get better and cheaper at a faster rate than it might otherwise. And someone is going to try to make money assembling hardware components into "server distros" and "storage distros", and perhaps even sell technical-support services for them, as Red Hat does for the several thousand programs it puts atop the Linux kernel.

But even if the Open Compute project succeeds in some niches, don't expect open source hardware to take over the world. At least not any time soon in the established Western economies – although in any greenfield installation in a BRIC country, anything is possible.

Proprietary systems built by traditional manufacturers and their very sticky applications and databases have lingered for decades. The general-purpose tower and rack-mounted servers used by most companies today, usually built by one of the big five server makers – HP, Dell, IBM, Oracle, or Fujitsu, in descending order – and usually running Windows or Linux, will linger as well.

Companies have their buying habits, and they have their own concerns about their business. Being green in their data centers is generally not one of their top priorities – managing their supply chains and inventories, paying their employees, and watching their capital expenditures are. For most companies, even in 2011, data center costs are not their primary concern.

This is obviously not true of a hyperscale web company such as Facebook, which is, for all intents and purposes, a data center with a pretty face slapped on it for linking people to each other. At Facebook, the servers and the data-center shell around them are the business, and how well and efficiently that infrastructure runs is ultimately what the business is all about.

Facebook has designed two custom server motherboards that it is installing in its first very own data center, located in Prineville, Oregon. These servers, their racks, their battery backups, and the streamlined power and cooling design of the data center (which is cooled by outside air) are all being open sourced through the Open Compute project. There will no doubt be many other server types and form factors that Facebook uses (and maybe even instruction sets) as the company's workloads change throughout what we presume will be its long history.

The whole point of the Open Compute designs put out by Facebook on Thursday is that they are minimalist and tuned specifically for the company's own workloads. Amir Michael, a hardware engineer who used to work for Google and who is now the leader of the server-design team at Facebook, said that the company started with a "vanity free" design for the server chassis. There's no plastic front panel, no lid, no paint, as few screws as possible, and as little metal as possible in the chassis – just enough to keep it rigid enough to hold the components. Here it is:

Facebook Open Compute chassis

Vanity-free server chassis

The chassis is designed to be as tool-less as possible, with snaps and spring-loaded catches holding things to the chassis, and the chassis into the rack. Nothing extraneous. Nothing extravagant. The chassis is actually 2.6 inches tall - that's 1.5U in rack form-factor speak - which means the servers get more airflow than a standard 1U pizza box machine, and that Facebook can put in four 60mm fans. The larger the fan, the more air it can move in a more efficient manner - and usually, more quietly too.
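For anyone not steeped in rack math, a rack unit is defined as 1.75 inches, so 1.5U works out to 2.625 inches, which is where the 2.6 quoted above comes from. A quick back-of-the-envelope check (an illustration only, not anything from the Open Compute spec):

    # Back-of-the-envelope check of the chassis height quoted above.
    # A rack unit (1U) is defined as 1.75 inches.
    RACK_UNIT_INCHES = 1.75

    chassis_units = 1.5                               # Open Compute chassis height in U
    chassis_inches = chassis_units * RACK_UNIT_INCHES

    print(f"1.5U chassis: {chassis_inches} inches")   # 2.625, quoted above as 2.6
    print(f"Standard 1U pizza box: {1.0 * RACK_UNIT_INCHES} inches")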

The taller box also allows Facebook to use taller heat sinks, which are also more efficient at cooling processors. It has room for six 3.5-inch disk drives, mounted in the back, contrary to conventional server wisdom – you generally don't want to blow hot air over your disks. But if you have a clustered system with failover and your workload can heal over the failures, then you don't really care if the disk is a little warm.

Server minimalism

The same minimalist design philosophy applies to the two motherboards that Facebook designed in conjunction with Quanta Computer, the Taiwanese ODM, which is also a PC and server maker in its own right. Facebook's workloads don't require a lot of peripheral expansion, so unnecessary slots are removed. The motherboards use CPU and memory voltage regulators that are more than 93 per cent efficient, and the chassis is equipped with a power supply that runs at 94.5 per cent efficiency.
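To get a rough feel for why those conversion figures matter: any watt that passes through both the power supply and one of those regulators still loses roughly 12 per cent along the way. The sketch below simply multiplies the figures quoted above; the chaining is our arithmetic, not Facebook's:

    # Rough illustration of chaining the conversion efficiencies quoted above.
    # The figures come from the article; multiplying them is our own arithmetic.
    psu_efficiency = 0.945   # power supply: 94.5 per cent efficient
    vrm_efficiency = 0.93    # CPU/memory voltage regulators: "in excess of 93 per cent"

    end_to_end = psu_efficiency * vrm_efficiency
    print(f"End-to-end conversion efficiency: {end_to_end:.1%}")   # ~87.9%
    print(f"Power lost before it reaches the chips: {1 - end_to_end:.1%}")   # ~12.1%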

There's a 277 volt main power input and a 48 volt power input from backup batteries that are adjacent to the triple-wide 42U rack. That battery power is just to give Facebook's systems time to cut over from main power to generators without crashing in the event of a power failure.
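That 277 volt figure is worth a second look: it is what you get from a single phase-to-neutral leg of a 480 volt three-phase feed (480 divided by the square root of three), which would let Facebook skip a transformer stage. That reading is our inference rather than anything Facebook spelled out here, but the arithmetic is easy to check:

    import math

    # The article quotes a 277V AC mains input alongside the 48V DC battery feed.
    # 277V happens to be one phase-to-neutral leg of a 480V three-phase supply
    # (480 / sqrt(3)); reading it that way is our inference, not Facebook's wording.
    line_to_line_volts = 480.0
    phase_to_neutral_volts = line_to_line_volts / math.sqrt(3)

    print(f"480V / sqrt(3) = {phase_to_neutral_volts:.0f}V")   # ~277V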

Here's what the rack looks like:

The Facebook Open Compute triple rack

The triple rack has two top-of-rack switches at the top and can house 30 of the Open Compute servers in each column, for a total of 90 servers.
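The headline number is simple multiplication, but for completeness, here is the arithmetic on the rack as described above (a sketch of the quoted figures only):

    # Server count for the triple-wide Open Compute rack described above.
    columns_per_rack = 3      # triple-wide rack
    servers_per_column = 30   # Open Compute servers per column
    switches_per_rack = 2     # top-of-rack switches

    total_servers = columns_per_rack * servers_per_column
    print(f"Servers per triple rack: {total_servers}")   # 90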

For the moment, Facebook has two motherboards, one based on Intel processors and chipsets and the other based on chips from Advanced Micro Devices. These machines are not, as we were speculating, micro servers, but as we pointed out ahead of the Facebook launch, the company's Facebook Lab has only begun testing micro servers and did not expect to roll them out until later in 2011 or 2012.

The Intel motherboard that Facebook has designed with Quanta to snap into the Open Compute chassis looks like this:

Facebook Open Compute Intel mobo

This Intel board uses the company's 5500 chipset and supports two quad-core Xeon 5500 or six-core Xeon 5600 processors; it can take any processor running 95 watts or cooler. It has nine memory slots per socket, for a maximum memory of 288GB using 16GB memory sticks. It has six SATA-II ports for linking to the drives, two external USB 2.0 ports and one internal for a flash-based hypervisor (not that Facebook virtualizes its workloads, but it could). It has three Gigabit Ethernet ports.

The AMD board is a bit beefier on the core and main memory:

Facebook Open Compute AMD mobo

This AMD option of the Open Compute mobo can support the Opteron 6100 processors with either eight or twelve cores. Only those chips with an ACP rating of 85 watts or less can be used in the chassis. Each G34 socket has a dozen memory slots, for a maximum of 384GB of main memory using 16GB memory sticks. This mobo uses AMD's SR5650/SP5100 chipset, and offers the same six SATA ports, USB ports, and Gigabit Ethernet ports as the Intel board.
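The memory ceilings on both boards fall straight out of the slot counts: two sockets, nine or twelve slots per socket, and 16GB sticks. A quick comparison of the arithmetic, using only the figures quoted above:

    # Maximum memory for the two Open Compute boards, from the slot counts above.
    DIMM_GB = 16   # the largest memory stick the article quotes

    boards = {
        "Intel (2 sockets x 9 slots)": 2 * 9,
        "AMD (2 G34 sockets x 12 slots)": 2 * 12,
    }

    for name, slots in boards.items():
        print(f"{name}: {slots} slots -> {slots * DIMM_GB}GB max")
    # Intel: 18 slots -> 288GB
    # AMD:   24 slots -> 384GB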

Both the Intel and AMD boards can be operated in a single-CPU mode if a job that Facebook is running needs a richer memory-to-compute ratio for a server node.

The upshot of the server design, according to Frank Frankovsky, director of hardware design and supply chain at Facebook, is that the Facebook servers cost about 20 per cent less than the boxes the company was previously using and have 22 per cent less metal and plastic in them. And when plugged into the Prineville data center, these servers consume 38 per cent less power.

Does Facebook need a server maker?

There are many lessons that the traditional server makers will immediately learn from the Facebook server designs and the Open Compute Project.

The first thing is that hyperscale data center operators not only don't want to use general purpose machines, but to extract as much money from their businesses as possible, they can't use them. And that will not change as long as advertising on the Internet is a cut-throat business and consumers are unwilling to pay a lot of money for an application or a service (the difference is moot at this point). General purpose machines, with all that plastic and metal, with their service processors and wide variety of slots and peripheral options, are too expensive for Facebook, just as they have been for more traditional supercomputing cluster customers, who have long since preferred bare-bones boxes.

The second thing hyperscale Web customers - and maybe soon even enterprise and midrange shops - will figure out pretty damned quick is that once they have virtualized their server workloads and have high availability and failover built into their software stacks, they won't need all those extra features either. And they will want the cheapo, minimalist servers, too. And they may not even go to the HPs, Dells, and IBMs of the world to get them. They may go straight to Quanta Computer for the motherboards, go straight to whoever is bending the metal for the chassis and the racks for Facebook, and go straight to disk and memory makers for those components, too.

It was telling that Dell's Data Center Solutions unit, which has been doing custom servers for four years and has been building bespoke machines for Facebook for the past three years, was at the event. While Forrest Norrod, vice president of Dell's server platforms, said that Dell was now building systems based on the two Facebook motherboards, he did not say that DCS was building servers for Facebook any more.

Now, extrapolate to those young upstart companies in the 20 top-growth economies of the world. Are they going to go for PowerEdge-C quasi-custom boxes from Dell, or ProLiant SL tray servers from HP, or iDataPlex servers from IBM, or will they watch carefully what Facebook does and just try to buy the cheapo boxes Facebook has designed at wholesale prices instead of retail? I think we know the answer to that question. Did China wire itself with land lines when it created a real economy a decade ago? No, China went straight to cell phones.

The question is whether Open Compute will actually foment a community of hardware designers and open source specs, especially when the companies most in need of super-efficiency do not like to share information about their servers, storage, software, and data centers - because that infrastructure is, in actuality, the very essence of the business. I think the answer is, sorry to say, probably not. Hardware costs real money, but twiddling around with bits of open source code doesn't really cost open source coders anything.

Who knows? Perhaps service companies all around the world will spring up, bending metal and building Open Compute boxes and offering add-on tech support or other services for these machines. It would be a very interesting way to get some new players with new ideas into the server racket. At best, there might be one or two Open Compute distributors some day, but that might be just enough to change the server business from a push - buy what we got - to a pull - what do you want? ®