Original URL: https://www.theregister.com/2011/09/15/data_centre/

Place your data centre in a handy container

The future is modular

By Timothy Prickett Morgan

Posted in On-Prem, 15th September 2011 11:00 GMT

Data centres are a big capital expense. A 10,000 sq ft data centre designed to last 15 or 20 years costs about $33m, so you have to think about it a lot more carefully than you do about buying a server or a piece of software.

The churn is faster with IT gear and the power density is also increasing as companies try to cram more compute, storage and network capacity into scarce data centre space.

In the past decade, organisations have learned to think of their IT acquisitions at the rack level, rolling gear in and out of the data centre as complete, modular components of the IT infrastructure, usually on a three-year cycle. But the data centre is different.

Or at least it was until the advent of containerised, and now modularised, data centres. During the recession in the wake of the dot com boom, data centres built from standard 40ft metal shipping containers were championed by a few IT vendors and hyperscale data centre operators, as well as the military.

Tall and skinny

But although shipping containers might be perfect for remote data centres or hyperscale facilities that are accessed rarely, they are designed to fit on a single lane of a highway and are not on a human scale when filled with IT gear. They are too skinny and too tall, for starters.

That is why there are no more than 200 to 300 containerised data centres in the world using standard 20ft or 40ft containers, says Steve Sams, vice-president of site and facilities at IBM's global technology services division. He knows of 100 such data centres that Big Blue is involved with in some fashion.

According to Sams, containerised data centres are still useful for customers whose bricks-and-mortar data centres have run out of room and who have somewhere they can park some containers.

Customers who also need portability for their IT gear – such as the military or mining and oil and gas industries, which do a lot of local heavy processing – are also fans of containers.

IBM can put a five-container data centre loaded with IT gear on a C-130 transport plane and get it to you in about three hours. And if you have a remote location, the container can be loaded up with gear, driven there and put on a slab.

Modular data centres, whether housed in a shipping container or not, cope with the disconnect between today’s plant needs and the limits of human ability to plan for the future.

The best-laid plans

Glenn Keels, director of marketing for the hyperscale business unit in the industry standard servers and software division at HP, says the average data centre now is 14 years old. That takes us back to 1997, when the dot com boom was taking off.

"Even the most genius CIO could not be expected to predict what the IT or business environment would be today based on what was happening back then," says Keels.

So the answer that 35 regional and global suppliers have come up with is to build modularity into data centres to adjust to changes in technology.

If the rack is the new server, then the new rack is a module, says Keels. In June HP rolled out its EcoPOD, a double-width containerised data centre built on a human scale.

"Customers don't want to have to be skinny to get in the hot aisle," says Keel. "We have an 8ft-wide hot aisle that you can park a car in."

The EcoPOD 240a holds two rows of 22 racks apiece – 44 in all – yielding 2,200U of aggregate rack space. That is enough for 4,400 server nodes if you use 2U tray servers with half-width server motherboards, putting four nodes in a chassis.

Those server nodes can have over 24,000 3.5in disk drives in aggregate. If you use the densest servers that HP sells, you can push the number of server nodes in an EcoPOD up to 7,040.
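For readers who like to check the sums, the rack arithmetic works out as below. This is our own back-of-the-envelope Python sketch, not HP's, and the 7,040-node figure is HP's quoted maximum for its densest servers rather than something derived here.

```python
# Rough EcoPOD 240a capacity arithmetic, using the figures quoted above.
racks = 44                                # two rows of 22 racks
units_per_rack = 50                       # 50U racks

total_units = racks * units_per_rack      # 2,200U of aggregate rack space
tray_chassis = total_units // 2           # 2U tray-server chassis
nodes = tray_chassis * 4                  # four half-width nodes per chassis

print(total_units, nodes)                 # 2200 4400
```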

A typical bricks-and-mortar data centre built in the 1990s can deliver maybe 6KW to 8KW of power in a 42U rack; the 50U racks used in the EcoPOD can each deliver up to 44KW of power and peak at 69KW.

HP's EcoPOD data centre

Power usage effectiveness (PUE) is the total power drawn by the data centre divided by the power consumed by the IT gear alone. With direct expansion (DX) chillers installed to cool the air that is pumped down to the two cold aisles on the internal sides of the EcoPOD 240a, a fully loaded containerised data centre has a PUE of 1.15. Using outside air in temperate climates can push the PUE down to 1.05 – as good as anything Google, Yahoo! or Facebook can manage with their hoity-toity data centres.
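To put those ratings in concrete terms, here is a minimal Python sketch of the calculation; the 1,000KW IT load is an illustrative figure of ours, not one of HP's.

```python
# Illustrative only: what each quoted PUE rating means for a 1,000KW IT load.
def overhead_kw(pue, it_load_kw):
    """Non-IT power (chillers, fans, distribution losses) implied by a PUE rating."""
    total_facility_kw = pue * it_load_kw
    return total_facility_kw - it_load_kw

for rating in (1.15, 1.05):
    print(f"PUE {rating}: {overhead_kw(rating, 1000):.0f}KW of overhead per 1,000KW of IT load")
```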

We have the technology

IBM is happy to sell you a containerised data centre if you want one, but prefabricated modules might make a lot more sense for a lot of customers as capacity needs rise.

The dearth of crystal balls in IT departments is driving customers to modular data centres. In an IBM survey of data centre operators three years ago, only 12 per cent were interested in modular data centres of any kind. In a similar survey conducted four months ago, more than 80 per cent of customers wanted them.

The three sizes of the IBM Scalable Modular Data Centre

IBM has installed more than 500 modular data centres worldwide so far, ranging from its Portable Modular Data Centre (PMDC), launched in December 2009 and based on standard shipping containers, to its Scalable Modular Server Room (SMSR), which looks like something you might buy at IKEA and cordons off a space on the warehouse or factory floor to create a data centre covering 500 to 1,000 sq ft.

Containerised data centres using the PMDC can cost 25 to 30 per cent less to design and build than a standard one. The SMSRs can cost about 15 per cent less than a small data centre and include some security and fire-suppression gear that a lot of small data centres don't have.

The SMSRs offer from 126U to 336U of rack capacity and from 8KW to 30KW of aggregate power draw. IBM does a lot of custom data centre design, too, including modular and containerised glass houses.

Uptake for containerised or modular data centres has been slower than you might expect, and not for lack of technology.

"This is a very local, mom-and-pop industry except for a few global players," Sams says. "So they can't spend millions of dollars investing in skills.

"The real inhibitor is skills in the marketplace, not people's desire to buy this."

Sams adds that customers are getting smart and designing their data centres to be modular for power and cooling as well as rack capacity.

"If you do it that way, you don't have to pull it apart and rebuild it five or six years from now," he says.

About five to ten per cent of the cost of a data centre is taken up by the building – the bricks and mortar – with another 70 to 80 per cent going on the electrical and cooling systems. If you go modular, you can plan for a day when you will need to draw 20KW per rack but provision power and cooling only for the 6KW per rack you need today.

So you can then defer 40 to 50 per cent of the upfront capital costs and also about half of the electricity and operational costs. These numbers add up fast, says Sams, citing the rule of thumb that over a 20-year lifespan the cost of running a data centre is three to five times the cost of designing and building it.
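A rough sketch of that arithmetic, taking the $33m build cost from the top of this article and picking mid-range values for Sams' percentages (our choices, not his):

```python
# Illustrative only: Sams' rules of thumb applied to the $33m, 10,000 sq ft
# data centre quoted earlier. Mid-range percentages are our own choice.
build_cost = 33_000_000

shell = build_cost * 0.075        # bricks and mortar: roughly 5-10 per cent
plant = build_cost * 0.75         # electrical and cooling plant: roughly 70-80 per cent
deferred = build_cost * 0.45      # modular build defers about 40-50 per cent of capex

# Over a 20-year lifespan, running costs are three to five times the build cost.
lifetime_opex = (3 * build_cost, 5 * build_cost)

print(f"Shell ~${shell/1e6:.1f}m, plant ~${plant/1e6:.1f}m of a ${build_cost/1e6:.0f}m build")
print(f"Capex deferred by going modular: ~${deferred/1e6:.1f}m")
print(f"20-year running cost: ${lifetime_opex[0]/1e6:.0f}m to ${lifetime_opex[1]/1e6:.0f}m")
```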

Building boom

After a three-year drought, data centre construction is picking up and this $45bn global industry is starting to feel more like its old pre-recession self.

Silicon Graphics, an early player in containerised data centres, wants to ride this data centre upgrade wave in parallel with the current server upgrade cycle.

Last year the company hired Patrick Yantz to be its senior director of modular data centre engineering. He was one of the designers of Microsoft's containerised data centres outside Chicago and the designer of Microsoft's air-cooled containerised data centre in Quincy, Washington.

"We have to have a way to deliver more computing more efficiently and more easily"

"This is a very young industry but it is maturing rapidly," says Yantz. "It really is a paradigm shift.

"The skin of the data centre is not what matters. People are looking at it differently now. They don't like shipping containers but they are thinking in terms of racks and hundreds of racks. Standardisation will happen over time, and it has to.

"In order for the Internet to expand, we have to have a way to deliver more computing more efficiently and more easily than we do right now."

Microsoft's Quincy data centre modularised not just the server racks, but the electrical, mechanical and security infrastructure so the data centre can be expanded in a linear fashion on all fronts.

It also used outside air to chill the data centre. The Quincy facility was the inspiration for SGI's Ice Cube Air, its third generation of modular data centre infrastructure.

The SGI Ice Cube Air modular data centre

SGI has put the door in the side of the Ice Cube Air modular container so the aisles can be wider.

An 8ft model, which costs $99,000, can house four 51U racks of gear and support 148KW of input power with as much as 35KW per rack. It has fans and an evaporative cooling system that let the IT gear run with outside air cooling in most climates and yield a PUE of 1.06.
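Those figures hang together if you assume – as we do here, since SGI doesn't spell it out – that the 148KW of input power covers both the IT load and the cooling system:

```python
# Sanity check on the 8ft Ice Cube Air figures quoted above. Assumption (ours):
# the 148KW of input power covers both IT load and cooling overhead.
racks = 4
units_per_rack = 51
max_per_rack_kw = 35
input_power_kw = 148

it_peak_kw = racks * max_per_rack_kw          # 140KW with every rack flat out
overhead_kw = input_power_kw - it_peak_kw     # ~8KW left for fans and evaporative cooling
implied_pue = input_power_kw / it_peak_kw     # ~1.06, matching the quoted figure

print(racks * units_per_rack, it_peak_kw, overhead_kw, round(implied_pue, 2))
```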

You can link four of these together as a single module and support 8,160 1U servers or 28.7PB of SGI's InfiniteStorage arrays in this quad-module data centre.

There is also a 10ft model of the Ice Cube Air that offers up to 40 racks and 371KW of power if you quad it up, and the largest model is based on 20ft modules.

The advent of modular data centres is causing organisations to rethink where they put their data centre. Or, in many cases, data centres.

"Now that you have opened up the floodgates, executives are thinking about putting data centres where the power and cooling is cheapest," says Yantz.

Just like Microsoft, Google, Yahoo! and Facebook. ®