Original URL: https://www.theregister.com/2011/04/07/facebook_data_center_unveiled/

Facebook 'open sources' custom server and data center designs

The last rule of Google 'Fight Club'

By Cade Metz

Posted in SaaS, 7th April 2011 21:29 GMT

Facebook has "open sourced" the specifications and design documents for the custom-built servers, racks, and other equipment used in its new Prineville, Oregon data center, the first data center designed, built, and owned by the company itself.

On Thursday morning, at Facebook headquarters in Palo Alto, California, the social-networking giant released the designs under the aegis of the Open Compute Project, an effort to encourage the big industry players to share - and collectively improve on - hardware designs suited to a massive online operation like Facebook. The move is in stark contrast to Google, Facebook's primary rival, which is famously secretive about its latest data center and server designs.

"It's time to stop treating data center design like Fight Club and demystify the way these things are built," said Jonathan Heiliger, vice president of technical operations at Facebook.

Facebook CEO Mark Zuckerberg said that by sharing the designs, the company hopes not only to foster collaboration on data-center design across the industry, but also to drive down the prices of the sort of back-end equipment it's using in Prineville.

"We want server design and data-center design to be something people can jointly collaborate on," Zuckerberg said. "We're trying to foster this ecosystem where developers can easily build startups, and by sharing this we think it's going to make this ecosystem more efficiently grow. We're not the only ones who need the kind of hardware we're building out. By sharing this we think there is going to be more demand for the type of stuff that we need, which should drive up the efficiency and scale of development, and make it more cost-effective."

Facebook data center - exterior

The Facebook Data Center – with Vaseline on the lens

As far back as the spring of 2009, Heiliger publicly complained that the big chip and server makers weren't providing the sort of hardware needed to operate an epic web operation along the lines of Facebook. So the company has built its own gear. But, unlike Google, it prefers not to treat these designs as a competitive advantage to be wielded against rivals, and it has extended this philosophy to its entire data center.

According to Facebook, anyone can use the designs without paying licensing fees. Over the past 18 months, Facebook worked in tandem with 10 to 15 partners on the designs, but the company said that all partners have agreed to share all the IP in the designs as part of their participation in the Open Compute Project.

According to Heiliger, Facebook's new data center has a PUE (power usage effectiveness) of 1.07, significantly better than the industry average, which is about 1.5. "We started this project with two goals in mind: having the most efficient compute and the best economics possible," he said.
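For readers not versed in data-center numerology: PUE is total facility power divided by the power that actually reaches the IT gear, so the gap between 1.07 and 1.5 is wider than it looks. Here's a back-of-the-envelope sketch, assuming a 1 MW IT load purely for illustration:

# PUE = total facility power / IT equipment power. The 1 MW IT load below is
# an assumed figure, picked only to make the comparison concrete.
def overhead_watts(pue: float, it_load_watts: float) -> float:
    """Watts spent on cooling, power conversion and so on for a given IT load."""
    return pue * it_load_watts - it_load_watts

IT_LOAD_WATTS = 1_000_000
print(overhead_watts(1.07, IT_LOAD_WATTS))   # Prineville: ~70,000 W of overhead
print(overhead_watts(1.5, IT_LOAD_WATTS))    # industry average: ~500,000 W

In other words, at the industry average roughly half a megawatt of that hypothetical facility's draw never touches a server; at 1.07, the overhead shrinks to about 70 kW.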

The new server chassis, motherboard, rack, and electrical designs, he said, were put together by a team of three people. The servers were manufactured by Quanta, a Taiwanese operation that now manufactures more notebooks than anyone else on earth. Heiliger said that Facebook's new Quanta-built servers are 13 per cent more efficient than the machines it was using previously.

Facebook is a longtime customer of Dell's Data Center Services division - a Dell operation that helps organizations build their own server and other hardware designs - but it appears Dell has been cut out of the Prineville project.

Dell, however, was present at today's event, and it intends to use Facebook's "open sourced" designs when building hardware for other customers. It also appears that Dell is still doing work for Facebook in other areas.

The Penthouse

According to Jay Park, Facebook's director of data-center design, the company chose Prineville for its new facility because the rural Oregon town had the necessary networking and power infrastructure as well as the appropriate climate for efficiently cooling the facility. "We can maximize the free cooling," he said.

On one level, the data center is designed to more efficiently deliver power to its servers. Typically, Park said, there is a power loss of between 11 and 17 per cent when you transfer power all the way to a data center's servers, but the Prineville center takes this figure down to 2 per cent, thanks to the use of a single transformer rather than the four-transformer setup used in the typical data center.
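To put those percentages in concrete terms, here's a rough sketch assuming 1 MW drawn from the grid, a figure chosen purely for illustration:

# Distribution-loss arithmetic using the percentages Park quoted; the 1 MW
# grid draw is an assumption for illustration only.
GRID_POWER_WATTS = 1_000_000

def delivered_watts(loss_fraction: float) -> float:
    """Power that actually reaches the servers after distribution losses."""
    return GRID_POWER_WATTS * (1.0 - loss_fraction)

print(delivered_watts(0.17))   # worst case for a typical facility: 830,000 W
print(delivered_watts(0.11))   # best case for a typical facility: 890,000 W
print(delivered_watts(0.02))   # Prineville's single-transformer design: 980,000 W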

The system does away with a central UPS. For every six racks of servers, there's a single 48 volt DC UPS integrated with a 277 volt AC server power supply. "We eliminated some single points of failure, so we actually improved reliability by up to six times," Park said, adding that he dreamed up the facility's electrical design in the middle of the night, and with no paper available, he sketched it out on a napkin.
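One way to picture the arrangement Park describes - a sketch only, with field and function names of our own invention rather than anything Facebook has published:

# Sketch of the Prineville power path as described in the article. The field
# names are ours; the figures and topology come from Park's remarks.
POWER_PATH = {
    "transformer_stages": 1,     # versus roughly four in a typical facility
    "normal_feed": "277 V AC, delivered straight to each server power supply",
    "backup_feed": "48 V DC battery backup shared by every six racks",
    "central_ups": None,         # the design does away with it entirely
    "distribution_loss": 0.02,   # the 2 per cent figure quoted above
}

def active_feed(mains_ok: bool) -> str:
    """Which input a server power supply draws from, per the description above."""
    return POWER_PATH["normal_feed"] if mains_ok else POWER_PATH["backup_feed"]

print(active_feed(mains_ok=True))
print(active_feed(mains_ok=False))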

At the facility, outside air comes through a grill in a "penthouse" at the top of the data center, where equipment is used to remove water from the air. If the air is too cold, it will actually be mixed with hot air from, well, the data center's servers. The outside air is then pushed down to the data center. Park said the temperature here will range from 65 to 80 degrees Fahrenheit, and humidity will range from 45 to 60 per cent.

Facebook data center - AMD motherboard

The AMD version of the mother of all social-networking motherboards

As Facebook has said in the past, Park indicated that the facility will use heat from the servers to warm its built-in office space. There are no chillers, but there is a system that provides additional cooling with evaporated water.
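Put together, the cooling recipe looks something like this - a toy model only, using the supply ranges Park quoted, with the crude thresholds and function names being entirely our own simplification:

# Toy model of the Prineville air handling described above. The 65-80F supply
# range and 45-60 per cent humidity band come from Park's remarks; the simple
# threshold logic is our own guess at how the pieces fit together.
SUPPLY_TEMP_RANGE_F = (65, 80)
SUPPLY_HUMIDITY_RANGE_PCT = (45, 60)   # maintained by the penthouse equipment

def condition_outside_air(outside_temp_f: float) -> str:
    """Decide what the penthouse does with incoming outside air."""
    low, high = SUPPLY_TEMP_RANGE_F
    if outside_temp_f < low:
        return "mix in hot exhaust air from the servers to warm it up"
    if outside_temp_f > high:
        return "run the evaporative cooling system; still no chillers"
    return "push the outside air straight down to the server floor"

print(condition_outside_air(40))   # a chilly Oregon morning
print(condition_outside_air(72))   # free cooling, nothing else needed
print(condition_outside_air(95))   # one of the warmer days of the year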

Facebook's Amir Michael, part of the company's hardware-design team, described the Prineville servers as "vanity-free". Michael said that Facebook removed "all the plastic bezels", "almost all the screws", and anything else that "didn't make sense". The chassis is taller than the standard server - 1.5U - and this let the company use taller heat sinks. Because they offer more surface area, he said, they're more efficient at cooling components. This, in turn, means that Facebook needn't force as much air onto the servers.

But the design uses larger fans as well, because, Michael said, these too are more efficient. The fans measure about 60mm. The servers also include snaps and spring-loaded plungers designed to make it easier for technicians to remove and replace parts.

Facebook has built both AMD and Intel motherboards, each manufactured by Quanta. As with the chassis, Michael and crew sought to remove as many components as possible, including expansion slots and other connectors. According to Michael, the voltage regulators on the motherboard achieve 93 per cent efficiency. The entire system weighs six pounds less than the traditional 1U server, Michael said.

There are two connectors to the power bricks on each server, one for the 277 volt input and another for the 48 volt battery backup system. The entire motherboard, Michael said, achieves 94 per cent efficiency.
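If those two figures represent separate conversion stages in series - and that stacking is our assumption, since Michael didn't spell out how they relate - the wall-to-silicon arithmetic works out to roughly 87 per cent:

# Hedged arithmetic only: treating the quoted 94 and 93 per cent figures as
# two conversion stages in series is our assumption, not Michael's claim.
board_efficiency = 0.94   # figure quoted for the board as a whole
vrm_efficiency = 0.93     # figure quoted for the on-board voltage regulators

end_to_end = board_efficiency * vrm_efficiency
print(f"{end_to_end:.1%}")   # roughly 87.4 per cent from the wall to the chips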

The company has also built its own rack, known as a "triplet rack," housing three columns of thirty servers. That's a total of 90 servers per rack. Servers are mounted on shelves rather than rails. There's a battery cabinet in each rack for backup power.
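The arithmetic, for anyone counting along at home (the constant names are ours, not Facebook's):

# Triplet-rack arithmetic, per the description above.
COLUMNS_PER_TRIPLET_RACK = 3
SERVERS_PER_COLUMN = 30

print(COLUMNS_PER_TRIPLET_RACK * SERVERS_PER_COLUMN)   # 90 servers per rack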

According to Heiliger, the data center is 38 per cent more efficient than Facebook's existing leased data centers, and the cost is about 20 per cent less. The company began testing the data center at the end of last year, Heiliger tells The Reg, and it began taking live traffic over the past month.

Facebook data center - server racks

Rack up the pokes!

Facebook broke ground on the Prineville data center in January 2010. Previously, the company leased data-center space from third parties. At the time of the groundbreaking, Facebook said it would use outside air and evaporated water to cool the facility, rather than depend on chillers. About 60 to 70 per cent of the time, outside air will be sufficient, the company said then, but during the warmer and more humid days of the year, an "evaporative cooling system" will kick in. Heiliger told us at Thursday's event that outside cooling could potentially happen year-round.

The chill in Mountain View

Google is currently operating a chiller-less data center in Belgium, and Microsoft is building one in Ireland. But these are a tad different. Microsoft is using Direct eXpansion (DX) cooling – similar to traditional air conditioning – while it seems that Google uses a software system that automatically shifts loads to other data centers when outside temperatures get too high. The system is called Spanner, and though Google has been coy about its use of Spanner, the company has publicly presented a paper on the platform.

Facebook is building a second custom data center in western North Carolina, where local tax breaks have made it a data-center hot spot housing several big names, including Google and Apple. The weather in North Carolina is less temperate, so the company may have to make changes to its cooling systems. And Heiliger told us that the company is already making changes to its hardware designs for use in the North Carolina facility.

There have been rumblings that Facebook would switch to ARM servers or other "massively multi-core" server designs, but Heiliger indicated to us that there are no definite plans to do so, though he did say that the company is always evaluating new designs.

Facebook data center - interior, lit up

Well, the racks are modular...

The company is not using the sort of modular data center design popularized by Google and picked up by the likes of Microsoft. Google has long used such designs, and it has long built its own servers. The company did reveal some of its designs in the spring of 2009, but this was years after the fact – and these were apparently not its latest designs.

Heiliger's "Fight Club" line was surely aimed at Google. When we asked Heiliger about Facebook's decision to release its server and data-center designs, he equated the decision to open sourcing back-end software, an area in which Facebook is also putting Google to a certain amount of shame.

"We think the bigger value comes back to us over time," he told us, "just as it did with open source software. Many people will now be looking at our designs. This is a 1.0. We hope this will accelerate what everyone is doing." ®