One day all this could be yours: Be Facebook, without being Facebook

The pros and cons of Open Compute

Data centre design is a costly business, costing Apple $1.2bn for a pair of “next-generation” carbon-neutral plants in Ireland and Denmark. Even the smallest average Joe data centre will easily cost north of $1m once options such as multihomed networks, HVAC systems and other critical kit are installed with redundancy, security and a thousand other things that only data centre designers think about ahead of time.

Complexity is the enemy of efficiency and drives costs up. At this point, a lot of companies realised there had to be an easier and cheaper way to do it without paying a vendor premium out the rear. Fortunately, in these modern times, there is: the Open Compute movement.

Data centre design is usually extremely proprietary, jealously guarded by vendors as if it were a trade secret. In simple terms, driving down design costs means less capital expenditure and a better bottom line.

Google and Amazon are not going to give away their secret sauce to any potential competitor, but Facebook has, open-sourcing its hardware designs through the Open Compute Project in 2011. Open Compute is still in its infancy, but holds a lot of promise for data-intensive and large, cloud-based organisations.

Those of us outside the hyper-scale tier of web computing will be thinking: “Yeah, so what? I ain’t no Google”. But Open Compute could end up helping even the smallest of compute-intensive players.

The latest wave of computing designs has seen hardware set-ups change from high-power pools of CPU backed by masses of redundant technology to low-power, discrete units that are expected to fail, with other nodes taking up the slack whilst a new instance is spun up on a new node.

This new infrastructure design can be seen in HP’s Moonshot and similar systems, which offer system-on-a-chip (SoC) based units that can be swapped out at will. Users no longer have to unrack a huge server to fix an issue, and the technology becomes cheap enough to be almost disposable.
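To make that failure model concrete, here is a minimal sketch in Python of the “drain and replace” loop such designs rely on. The `ClusterAPI` class and all of its methods are hypothetical stand-ins for whatever provisioning interface a real deployment exposes; this illustrates the pattern, not any particular product’s API.

```python
import time

# Hypothetical provisioning interface -- a stand-in for whatever
# orchestration layer a real deployment actually uses.
class ClusterAPI:
    def list_nodes(self):
        """Return all known compute nodes."""
        raise NotImplementedError

    def is_healthy(self, node):
        """True if the node answers its health check."""
        raise NotImplementedError

    def drain(self, node):
        """Stop routing work to a failing node; peers take up the slack."""
        raise NotImplementedError

    def spin_up_replacement(self):
        """Provision a fresh instance on a spare node."""
        raise NotImplementedError


def supervise(cluster: ClusterAPI, interval: int = 30):
    """Treat node failure as routine: drain the dead node and
    replace it, rather than repairing hardware in place."""
    while True:
        for node in cluster.list_nodes():
            if not cluster.is_healthy(node):
                cluster.drain(node)            # peers absorb the load
                cluster.spin_up_replacement()  # new instance, new node
        time.sleep(interval)
```

The point of the pattern is that hardware repair drops out of the critical path: a dead node is simply drained and a fresh instance provisioned elsewhere, which is what makes near-disposable SoC nodes viable in the first place.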

Part of the Open Compute vision is also to support white-label brands, helping you build your own infrastructure from the ground up and thereby removing the vendor premium.

This road isn’t feasible for anyone except the largest of operators. Supporting such peripheral technology is demanding, and troubleshooting becomes that much more difficult when there is no vendor to fall back on. Fortunately, a number of vendors – including HP, Dell and Quanta – produce Open Compute servers in various configurations: highly configurable, manageable machines designed for one purpose, serving as Open Compute nodes. This saves having to effectively roll your own compute design.
