Data centres gripped by power struggle

Waste not, want not

Data centre efficiency is a constant struggle. Choosing the right system for your required tasks is one challenge; upgrade cycles bring power and heat issues of their own. Existing techniques are continually refined and, combined with emerging technologies, make optimal efficiency a moving target.

Space constraints are where the challenges of data centre efficiency begin. The server-on-the-table begets the single-rack server closet. The server closet grows, moving through a server room into a full-blown data centre. Each stage of expansion is preceded by at least one compute density upgrade cycle.

Replacing individual systems with blade servers is an immediate density increase. Virtualisation allows the collapse of isolated systems onto a single host, while physicalisation involves moving low-demand services to dedicated low-power servers. For some, a baby supercomputer is the way to go.

Density matters

Increased compute density brings power issues. Few buildings are designed to supply the power densities required by modern IT, so careful building selection and service upgrades are typically required. Power costs, availability and generation source all matter. Power is the driver behind efficiency initiatives, from internal tweaks straight through to the choice of geographic location.

Equipment selection is critical. Dedicated cryptoprocessors benefit those embracing encryption. GPUs see use for heavy number crunching, with dedicated ASIC usage gaining traction. Storage presents another concern: spinning disk, flash cache, all-SSD, MAID and even tape have their places. Each has a different performance/efficiency/TCO profile.
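
To make that trade-off concrete, here is a minimal sketch of a cost-per-terabyte comparison across storage tiers. Every price, capacity, power draw and lifespan in it is a hypothetical placeholder rather than a vendor figure; the point is only to show how capital and power costs fold into a single TCO number.

# Minimal sketch: comparing storage tiers on a rough cost-per-usable-TB-per-year
# basis. All prices, capacities, power draws and lifespans below are hypothetical
# placeholders, not vendor figures -- substitute your own quotes and metered rates.

POWER_COST_PER_KWH = 0.12   # hypothetical electricity rate, $/kWh
HOURS_PER_YEAR = 24 * 365

def yearly_cost_per_tb(purchase_price, capacity_tb, watts, lifespan_years):
    """Spread the purchase price over the device's lifespan and add its power bill."""
    capex_per_year = purchase_price / lifespan_years
    power_per_year = (watts / 1000.0) * HOURS_PER_YEAR * POWER_COST_PER_KWH
    return (capex_per_year + power_per_year) / capacity_tb

# Hypothetical profiles: (price $, capacity TB, average draw W, lifespan years)
tiers = {
    "spinning disk":  (250, 8, 9.0, 5),
    "all-SSD":        (900, 8, 5.0, 5),
    "tape (archive)": (40, 12, 0.5, 10),
}

for name, profile in tiers.items():
    print(f"{name:>14}: ${yearly_cost_per_tb(*profile):.2f} per TB per year")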

Power utilisation and loss is a concern extending beyond servers and storage. Microsoft is an excellent case study: 16 per cent of power is lost to transmission and conversion, and another 14 per cent is lost to internal distribution and cooling. Microsoft's research tells us AC power distribution is more efficient at high loads, DC at low loads. Low-power processors offer greater data centre efficiency; despite the up-front cost premium, they have a lower TCO.
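
Taken together, those loss figures leave surprisingly little of the utility feed for the servers themselves. The short sketch below runs the arithmetic both ways, since the quoted percentages could be read against the original feed or against what survives each stage; nothing here comes from Microsoft beyond the two numbers already cited.

# Quick arithmetic on the loss figures cited above. How the two percentages
# combine depends on whether each applies to the original utility feed or to
# what remains after the previous stage; both readings are shown here.

transmission_loss = 0.16   # lost to transmission and conversion
distribution_loss = 0.14   # lost to internal distribution and cooling

# Reading 1: both percentages taken against the original utility feed
reaching_it_simple = 1.0 - transmission_loss - distribution_loss

# Reading 2: losses applied in sequence, each against the remaining power
reaching_it_chained = (1.0 - transmission_loss) * (1.0 - distribution_loss)

print(f"Simple reading:  {reaching_it_simple:.0%} of grid power reaches the IT load")
print(f"Chained reading: {reaching_it_chained:.1%} of grid power reaches the IT load")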

Chiller thriller

Cooling the data centre is a problem with several solutions. With the right climate, outside air cooling is a good substitute for traditional chillers. You can use the ocean for cooling; if you really like the idea, build a data centre navy. If you happen to have a global network of data centres, you can move workloads in response to temperature excursions.
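
As a rough illustration of that last option, here is a minimal "follow the cold" placement sketch. The site names, temperature readings, threshold and migration decision are all hypothetical stand-ins; a real deployment would lean on whatever monitoring and orchestration stack is already in place.

# Minimal sketch of "follow the cold" workload placement across a fleet of
# sites. The site names, temperature feed and threshold are hypothetical
# stand-ins for whatever monitoring and orchestration is actually in use.

from typing import Dict

MAX_INLET_TEMP_C = 27.0  # hypothetical threshold for a temperature excursion

def pick_target_site(inlet_temps: Dict[str, float], current_site: str) -> str:
    """Return the coolest site if the current one is running hot, else stay put."""
    if inlet_temps[current_site] <= MAX_INLET_TEMP_C:
        return current_site
    return min(inlet_temps, key=inlet_temps.get)

# Hypothetical inlet readings from three data centres, degrees Celsius
readings = {"dublin": 19.5, "singapore": 31.2, "quincy": 24.8}

target = pick_target_site(readings, current_site="singapore")
if target != "singapore":
    print(f"Temperature excursion in singapore: migrate batch workloads to {target}")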

"After all, data centre overhauls are expensive"

Those of lesser means find their choice of cooling solution dictated largely by geography. Power-hungry traditional chillers approach uselessness in high-humidity environments, and air cooling is impractical in high temperatures. Geothermal is a possibility, as is immersion. Isolating hot and cold aisles will increase the efficiency of any cooling solution you choose.

Scale makes a difference. A five-rack server room typically has to make the best of whatever space is already available. A company running five hundred thousand racks has the resources to choose where its data centres go, and the efficiency tools to match.

Total cost of ownership is the driver of data centre operations. When a smaller operator hits the compute density wall of its existing plant, it should consider the alternatives. Co-locating servers is one way of tapping into the economies of data centre scale; moving some applications into the hosted cloud is another. After all, data centre overhauls are expensive. ®
