Facebook's open hardware: Does it compute?

Open hardware is not open source

Comment What happens if, as we saw at the launch of Facebook's Open Compute Project on Thursday, the design of servers and data centers is open sourced and completely "demystified"?

If open source software is any guide, hardware infrastructure will get better and cheaper at a faster rate than it might otherwise. And someone is going to try to make money assembling hardware components into "server distros" and "storage distros", and perhaps even sell technical-support services for them, as Red Hat does for the several thousand programs it puts atop the Linux kernel.

But even if the Open Compute project succeeds in some niches, don't expect open source hardware to take over the world. At least not any time soon in the established Western economies – although in any greenfield installation in a BRIC country, anything is possible.

Proprietary systems built by traditional manufacturers and their very sticky applications and databases have lingered for decades. The general-purpose tower and rack-mounted servers used by most companies today – built usually by one of the big five server makers (HP, Dell, IBM, Oracle, or Fujitsu, in descending order) and usually running Windows or Linux – will linger as well.

Companies have their buying habits, and they have their own concerns about their business. Being green in their data centers is generally not one of their top priorities – managing their supply chains and inventories, paying their employees, and watching their capital expenditures are. For most companies, even in 2011, data center costs are not their primary concern.

This is obviously not true of a hyperscale web company such as Facebook, which is, for all intents and purposes, a data center with a pretty face slapped on it for linking people to each other. At Facebook, the server and its data-center shell are the business, and how well and efficiently that infrastructure runs is precisely what that business is ultimately all about.

Facebook has designed two custom server motherboards that it is installing in its first very own data center, located in Prineville, Oregon. These servers, their racks, their battery backups, and the streamlined power and cooling design of the data center (which is cooled by outside air) are all being open sourced through the Open Compute project. There will no doubt be many other server types and form factors that Facebook uses (and maybe even instruction sets) as the company's workloads change throughout what we presume will be its long history.

The whole point of the Open Compute designs put out by Facebook on Thursday is that they are minimalist and tuned specifically for the company's own workloads. Amir Michael, a hardware engineer who used to work for Google and who is now the leader of the server-design team at Facebook, said that the company started with a "vanity free" design for the server chassis. There's no plastic front panel, no lid, no paint, as few screws as possible, and as little metal as possible in the chassis – just enough for it to stay rigid enough to hold components. Here it is:

Facebook Open Compute chassis

Vanity-free server chassis

The chassis is designed to be as tool-less as possible, with snaps and spring-loaded catches holding things to the chassis, and the chassis into the rack. Nothing extraneous. Nothing extravagant. The chassis is actually 2.6 inches tall – that's 1.5U in rack form-factor speak – which means the servers get more airflow than a standard 1U pizza box machine, and that Facebook can put in four 60mm fans. The larger the fan, the more air it can move in a more efficient manner – and usually, more quietly too.
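The arithmetic here is easy to check: a rack unit is 1.75 inches, and a fan's swept area grows with the square of its diameter. A quick sketch illustrates both points (the helper names are ours, not anything from the Open Compute spec, and 40mm is assumed as the typical 1U fan size):

```python
# Rack-unit and fan arithmetic for the Open Compute chassis.
# Helper names are illustrative, not from any Open Compute document.

RACK_UNIT_INCHES = 1.75  # one standard EIA rack unit

def ru_to_inches(units: float) -> float:
    """Convert a height in rack units to inches."""
    return units * RACK_UNIT_INCHES

def fan_area_ratio(d1_mm: float, d2_mm: float) -> float:
    """Swept-area ratio of two axial fans; area scales with diameter squared."""
    return (d1_mm / d2_mm) ** 2

print(ru_to_inches(1.5))       # 2.625 -- the "2.6 inches" quoted above
print(fan_area_ratio(60, 40))  # 2.25 -- a 60mm fan sweeps 2.25x the area of a 40mm fan
```

So the 1.5U chassis buys Facebook roughly an extra 0.875 inches of height, and each 60mm fan sweeps more than twice the area of the 40mm fans a 1U box would typically carry.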

The taller box also allows Facebook to use taller heat sinks, which are also more efficient at cooling processors. It has room for six 3.5-inch disk drives, mounted in the back, contrary to conventional server wisdom – you generally don't want to blow hot air over your disks. But if you have a clustered system with failover and your workload can heal over the failures, then you don't really care if the disk is a little warm.
