Watch this, Apple. Fruity firm gets down and dirty with Facebook’s OCP

Data volume growing from iPhones, iPads, iTunes and … er ... watches

Apple's new data center, credit Apple

Luxury watch maker Apple has thrown its lot in with Facebook’s project for an “industry standard” data center.

The Open Compute Project's chairman, Frank Frankovsky, dropped the news at the OCP’s conference in San Jose, California.

Apple is understood to have been working with OCP for some months. There were no further details.

Apple joins server-room giants including Hewlett-Packard, which announced its OCP-compatible Cloudline servers at the San Jose event. Networks giant Juniper also came out as an OCP member at the conference.

The pair join such names as Cisco, which signed up in October 2014, and IBM and Microsoft, which got on board earlier that year.

Why should the maker of pimped-up timepieces get dirty in the data centre?

Apple operates four primary data centres: Maiden, North Carolina; Prineville, Oregon; Newark, California; and Cupertino.

Facilities run to hundreds of thousands of square feet and have cost tens of millions of dollars to develop. However, Apple’s in the process of rolling out more, having recently announced plans to break ground on a pair of green-powered facilities worth €1.7bn ($1.8bn) in Ireland and Denmark.

Apple’s data centres hold an ever-expanding amount of data from iPhones, iPads, iTunes, the Apple cloud and – soon – watches.

Facebook started the Open Compute Project in 2011, to share data center designs and make systems and interfaces more open and interchangeable.

The status-sharing site initiated things by sharing the specs on the design of a brand-new Prineville, Oregon, data center, designed from scratch.

Facebook stripped out server features that drag on efficiency, re-used warm air, and eliminated the central uninterruptible power supply.

It claims the design cut energy consumption by 38 per cent and runs at 24 per cent of the cost of Facebook’s existing data facilities.

Open Compute is producing specifications and designs in networking, server design, racks, hardware management and certification.

With the world’s legacy server rooms full of non-standard gear from all kinds of manufacturers, it’s hard to see what real impact OCP has had, or will have, on data centre equipment outside the web tier of giants, the only players in a position to build hyper-scale facilities from a blank canvas.

Right now, OCP looks like little more than a club of companies keeping a hand in the game so as not to get caught out, rather than making real or substantial changes to how their gear is built. They serve one mega market, the hyper-scale web tier, with the lines between supplier and cloud provider increasingly blurred as vendors roll out data centres to float clouds and gobble our data.

Talking to The Register, the exec in charge of HP’s Cloudline machines reckoned OCP is nonetheless having a broader effect.

“It’s in the mentality of how servers are designed and built for services for large clouds and the service provider market,” Cloudline general manager Dave Peterson said.

“Yes, OCP has published standards and they are good and adoption is coming, but the bigger benefit is how OCP is driving the thought process behind building out compute, storage, networking, management and other things.”

“It’s how you think about it that’s different to a couple of years ago,” Peterson claimed.

Apple seems to want to join IBM and Microsoft in shaping the supply of the gear going into its new facilities, to make them easier and cheaper to build and run. ®
