Supercomputers need standard shot glass to measure out juice

Can’t fix it unless you can quantify it

Webcast The biggest challenge in getting to the next level of supercomputer performance – Exascale – is the massive amounts of electricity these systems will consume. On a smaller scale, energy consumption also inhibits HPC installations. The problem isn’t just getting enough plugs from your walls to the grid; it’s also the cost of electricity when you’re guzzling it in such massive quantities.

Regardless of where you live or the deal you’ve cut with your local utility, megawatts of power cost mega-dough. Here in the hydropower-rich Pacific Northwest, commercial customers pay around 10 cents per kilowatt hour, and industrial users pay about 6.5 cents for the same juice (although that’s an ‘interruptible’ rate – which is probably a deal-breaker for HPC installations). At a dime per kWh ($100 per MWh), the annual cost per megawatt comes in at $876,000.

The average energy consumption of the top 10 systems on the Top500 list is 4.8 megawatts, meaning an average bill of around $4.2 million. The K computer, at the top of the list, consumes 12.6 megawatts, which would cost more than $11m per year if it were relocated somewhere near my house.
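The arithmetic behind those figures is straightforward: a constant load in megawatts, times the hours in a year, times the price per megawatt-hour. A quick sketch (assuming the dime-per-kWh rate quoted above, and ignoring leap years and demand charges):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours; leap years ignored

def annual_cost(megawatts: float, dollars_per_kwh: float = 0.10) -> float:
    """Annual electricity bill, in dollars, for a constant load."""
    kwh_per_year = megawatts * 1000 * HOURS_PER_YEAR
    return kwh_per_year * dollars_per_kwh

print(annual_cost(1.0))    # 876000.0  -> $876,000 per megawatt-year
print(annual_cost(4.8))    # 4204800.0 -> ~$4.2m, the top-10 average
print(annual_cost(12.6))   # 11037600.0 -> >$11m, the K computer
```

Real bills will be lumpier than this – utilities layer on demand charges and time-of-use rates – but it shows why a few megawatts either way matters.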

The point is that big energy usage means huge costs. The industry is well aware of this, of course, and is intent on designing processors, I/O, storage and other components that provide higher flops per watt. But verifying and quantifying these gains accurately is a problem at both the data centre and individual system level.

We know how to measure energy consumption; it’s not rocket science, even when measuring the consumption of systems that actually do rocket science. The problem is two-fold. First, there aren’t enough organisations measuring their real-world energy consumption. Second, there are multiple ways to measure juice use – methods that vary in scope of measurement and also accuracy.

Enter the Energy Efficient High Performance Computing Group (EE HPC WG). It has pulled together a set of industry players, ranging from very large HPC installations such as the US-based Lawrence Berkeley National Lab to industry trackers like the Top500, Green500 and Green Grid folks, along with reps from the vendor community – all with the goal of figuring out the best way to measure IT energy consumption.

In the webcast we talk with Natalie Bates, chairperson of the EE HPC WG, and Erich Strohmaier, a co-author of the Top500 and head of Future Technologies at Lawrence Berkeley National Lab, about the progress their group has made toward providing the industry with an energy measurement blueprint. It’s a thoughtful and interesting conversation and a good preview for what’s coming down the road.

®
