
Virtualization and HPC - Will they ever marry?

Imaginary-server overhead


No time for the virtual

HPC workloads are driven more by memory bandwidth, I/O bandwidth, and clock cycles than typical data center infrastructure workloads, and are therefore not as readily virtualizable. To put it bluntly, HPC labs have enough worries wringing performance out of their machines and getting more parallelism into their codes to take advantage of the ever-increasing number of cores in a cluster. They can't deal with virtualization, too.
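To see why bandwidth, not clock speed, is so often the cap, consider a back-of-the-envelope roofline estimate. All figures below are invented round numbers for illustration, not the specs of any real machine:

```python
# A hypothetical roofline sketch: sustained performance is limited either by
# the compute peak or by how fast data can be streamed from memory.
peak_gflops = 500.0          # assumed peak compute rate, GFLOP/s
mem_bandwidth_gbs = 100.0    # assumed sustained memory bandwidth, GB/s

def attainable_gflops(flops_per_byte: float) -> float:
    """Roofline model: min of the compute ceiling and the bandwidth ceiling."""
    return min(peak_gflops, mem_bandwidth_gbs * flops_per_byte)

# A streaming kernel such as DAXPY does roughly 0.125 flops per byte moved,
# so it is bandwidth-bound and lands far below the compute peak:
print(attainable_gflops(0.125))  # 12.5 GFLOP/s
# A dense matrix multiply has much higher arithmetic intensity:
print(attainable_gflops(10.0))   # 500.0 GFLOP/s, compute-bound
```

On a machine already pinned against its bandwidth ceiling like this, any hypervisor tax on memory traffic comes straight off the top.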

Virtualization has been a boon for underutilized infrastructure servers. A typical x64 server running Web, print, file, and other workloads might use only 5, 10, or 15 per cent of its CPU cycles on average. (There are always peaks that spike above that.) Hypervisors allow four or five server instances to be crammed onto a single physical server, with the added bonus these days of faster server provisioning and disaster recovery to boot. But in HPC clusters, CPUs run at near their peaks whenever they are doing work.
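The consolidation arithmetic is simple enough to sketch. The utilization figures below are the illustrative 5 to 15 per cent averages mentioned above, and the hypervisor overhead is an assumed round number, not a measurement:

```python
# Hypothetical consolidation math: five lightly loaded infrastructure
# servers stacked onto one physical host via a hypervisor.
guest_avg_utilization = [0.05, 0.10, 0.15, 0.10, 0.05]  # per-guest CPU averages
hypervisor_overhead = 0.05  # assumed fixed overhead, for illustration only

host_load = sum(guest_avg_utilization) + hypervisor_overhead
print(f"Combined average host load: {host_load:.0%}")  # roughly half the box
```

That headroom is exactly what an HPC node, running flat out at near 100 per cent, does not have to give.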

But, having said that, system administration is an issue for clusters, just like it is for other servers, and people cost more money than software and iron. Setting up and configuring nodes in the cluster is a pain, and virtualization can help. Think about Amazon's Elastic Compute Cloud utility computing setup, which runs atop a tweaked version of the open source Xen hypervisor.

While this EC2 capacity is available on the cheap, it runs in a virtualized mode, and you could argue that one of the reasons it is so cheap is precisely because it is virtualized and hence flexible. It is possible that HPC shops wanting to run distinct applications on different flavors of Linux, or a mix of Linux and Windows, will use hypervisors to make this configuring and reconfiguring easier. But plenty of people are skeptical of the idea.

"The biggest use of virtualization is to allow multiple applications to run protected," explains David Scott, petascale product line architect in Intel's HPC platform unit. "This is potentially an area. Customers are thinking about it, but no one has done it yet."

And the reason why virtualization has not been used in HPC shops is the same one that made server virtualization in data centers take off slowly: server hugging. "The idea of giving a piece of a processor to someone else is completely alien to HPC people," says Scott.

And that is why, for now, server virtualization and HPC will probably remain oil and water - at least as long as there are graduate students and scientists to man the clusters for free or nearly so. Then again, if you shake things up enough, you can get oil and water to make a suspension. Maybe HPC's salad dressing days are ahead. ®


