
Facebook 'open sources' custom server and data center designs

The last rule of Google 'Fight Club'

The Penthouse

According to Jay Park, Facebook's director of data-center design, the company chose Prineville for its new facility because the rural Oregon town had the necessary networking and power infrastructure as well as the appropriate climate for efficiently cooling the facility. "We can maximize the free cooling," he said.

On one level, the data center is designed to more efficiently deliver power to its servers. Typically, Park said, there is a power loss of between 11 and 17 per cent when you transfer power all the way to a data center's servers, but the Prineville center takes this figure down to 2 per cent, thanks to the use of a single transformer rather than the four-transformer setup used in the typical data center.
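To put those percentages in perspective, here's a back-of-the-envelope sketch in Python. The 10MW server load is a made-up figure for illustration, not one Facebook has quoted; only the loss percentages come from Park.

```python
# Illustrative only: rough arithmetic for the distribution losses Park describes.
# The 10 MW IT load is a hypothetical figure, not one Facebook has quoted.

def power_drawn_at_utility(it_load_mw: float, loss_fraction: float) -> float:
    """Utility power needed to deliver it_load_mw to the servers,
    given a fractional loss in the distribution chain."""
    return it_load_mw / (1.0 - loss_fraction)

it_load = 10.0  # MW of server load (assumed for the example)

typical_low  = power_drawn_at_utility(it_load, 0.11)  # ~11.24 MW
typical_high = power_drawn_at_utility(it_load, 0.17)  # ~12.05 MW
prineville   = power_drawn_at_utility(it_load, 0.02)  # ~10.20 MW

print(f"Typical data center: {typical_low:.2f} - {typical_high:.2f} MW at the utility")
print(f"Prineville-style:    {prineville:.2f} MW at the utility")
print(f"Saving vs worst case: {typical_high - prineville:.2f} MW")
```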

The system does away with a central UPS. For every six racks of servers, there's a single 48 volt DC UPS integrated with a 277 volt AC server power supply. "We eliminated some single points of failure, so we actually improved reliability by up to six times," Park said, adding that he dreamed up the facility's electrical design in the middle of the night, and with no paper available, he sketched it out on a napkin.

At the facility, outside air comes through a grill in a "penthouse" at the top of the data center, where equipment is used to remove water from the air. If the air is too cold, it will actually be mixed with hot air from, well, the data center's servers. The outside air is then pushed down to the data center. Park said the temperature here will range from 65 to 80 degrees Fahrenheit, and humidity will range from 45 to 60 per cent.
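The mixing step Park describes can be written down as a simple control calculation. What follows is an illustrative Python sketch, not Facebook's actual building-management logic: the 65 to 80 degree supply band comes from Park, while the linear blending model, the function name, and the example temperatures are assumptions.

```python
# A minimal sketch of the air-mixing step Park describes - not Facebook's
# building-management code. The 65-80F supply band comes from the article;
# the linear blending model and example temperatures are assumptions.

SUPPLY_MIN_F = 65.0   # lower end of the supply-air band Park quotes (upper end is 80F)

def return_air_fraction(outside_f: float, return_f: float) -> float:
    """Fraction of hot return air to blend into the outside-air stream so the
    mix lands at the bottom of the supply band when outside air is too cold."""
    if outside_f >= SUPPLY_MIN_F:
        return 0.0                      # outside air alone is warm enough
    if return_f <= SUPPLY_MIN_F:
        return 1.0                      # return air can't warm the mix enough anyway
    # Solve: f * return_f + (1 - f) * outside_f = SUPPLY_MIN_F
    return (SUPPLY_MIN_F - outside_f) / (return_f - outside_f)

# Example: 40F winter air blended with 95F hot-aisle exhaust
print(f"Blend in {return_air_fraction(40.0, 95.0):.0%} return air")  # ~45%
```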

The AMD version of the mother of all social-networking motherboards

As Facebook has said in the past, Park indicated that the facility will use heat from the servers to warm its built-in office space. There are no chillers, but there is a system that provides additional cooling with evaporated water.

Facebook's Amir Michael, part of the company's hardware-design team, described the Prineville servers as "vanity-free". Michael said that Facebook removed "all the plastic bezels" and "almost all the screws" and anything else that "didn't make sense". The chassis is taller than a standard server - 1.5U - and this let the company use taller heat sinks. Because they offer more surface area, he said, they cool components more efficiently. This, in turn, means that Facebook needn't force as much air onto the servers.

But the design uses larger fans as well - about 60mm - because, Michael said, these too are more efficient. The servers also include snaps and spring-loaded plungers designed to make it easier for technicians to remove and replace parts.

Facebook has built both AMD and Intel motherboards, each manufactured by Quanta. As with the chassis, Michael and crew sought to remove as many components as possible, including expansion slots and other connectors. According to Michael, the voltage regulators on the motherboard achieve 93 per cent efficiency. The entire system weighs six pounds less than a traditional 1U server, Michael said.

There are two connectors on each server's power supply: one for the 277 volt AC input and another for the 48 volt battery backup system. The entire motherboard, Michael said, achieves 94 per cent efficiency.
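Taken together with the facility-level figure quoted earlier, these numbers suggest how much utility power actually reaches the components. This is strictly back-of-the-envelope arithmetic: the assumption that the 2 per cent facility loss and the 94 per cent board figure simply multiply is ours, not Facebook's.

```python
# Back-of-the-envelope only: combining the figures quoted in the article
# (2 per cent facility distribution loss, 94 per cent efficiency on the board)
# to see roughly what fraction of utility power reaches the silicon.
# Whether these two figures compose multiplicatively is an assumption here.

facility_efficiency = 1.0 - 0.02   # 2 per cent loss getting power to the servers
board_efficiency    = 0.94         # figure quoted for the server board

end_to_end = facility_efficiency * board_efficiency
print(f"Roughly {end_to_end:.1%} of utility power reaches the components")  # ~92.1%
```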

The company has also built its own rack, known as a "triplet rack", housing three columns of 30 servers - a total of 90 servers per rack. Servers are mounted on shelves rather than rails, and there's a battery cabinet in each rack for backup power.

According to Heiliger, the data center is 38 per cent more efficient than Facebook's existing leased data centers, while costing about 20 per cent less. The company began testing the facility at the end of last year, Heiliger tells The Reg, and it started taking live traffic within the past month.

Facebook's server racks: rack up the pokes!

Facebook broke ground on the Prineville data center in January 2010. Previously, the company leased data-center space from third parties. At the time of the groundbreaking, Facebook said it would use outside air and evaporated water to cool the facility rather than depend on chillers. Outside air will be sufficient about 60 to 70 per cent of the time, the company said then, but during the warmer and more humid days of the year, an "evaporative cooling system" will kick in. Heiliger told us at Thursday's event that outside cooling could potentially happen year-round.
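For a sense of scale, the "60 to 70 per cent of the time" claim works out to a few thousand hours of free cooling a year. The conversion below is our own rough arithmetic, not a Facebook figure.

```python
# Rough arithmetic on the "60 to 70 per cent of the time" free-cooling claim,
# assuming a non-leap year; the hour counts are illustrative, not Facebook's.

hours_per_year = 365 * 24
free_cooling_low  = 0.60 * hours_per_year   # ~5,256 hours
free_cooling_high = 0.70 * hours_per_year   # ~6,132 hours
print(f"Free cooling for roughly {free_cooling_low:,.0f} to {free_cooling_high:,.0f} hours a year")
```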
