Virtualization and HPC - Will they ever marry?

Imaginary-server overhead

SC08 Server virtualization has spent the past several decades moving out from the mainframe to Unix boxes and then out into the wild racks of x64 servers running Windows, Linux, and a smattering of other operating systems in the corporate data center. The one place where virtualization hasn't taken off is in high performance computing (HPC) clusters.

And for good reason. But hardware costs continue to plummet, putting hundreds of teraflops of raw computing power in a parallel x64 server cluster within reach of medium-sized businesses, startups, academic institutions, research facilities, and the other places where HPC clusters end up - and at a relatively modest price. That, combined with the system administration demands on HPC labs and the desire for more flexibility, may possibly - and I mean possibly - lead to the adoption of server virtualization technologies in this subsegment of the server space.

Roughly speaking, HPC clusters account for about a fifth of the shipments of x64 server boxes each quarter. And according to IDC, in 2007, HPC boxes of all types - including vector, cluster, and other types of gear - accounted for $10.1bn in sales (revised downward from an initial $11.6bn estimate that came out in March of this year). That gives HPC an 18.6 per cent slice of the $54.4bn in server sales in 2007 - again, about a fifth of the pie.
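For what it's worth, that share figure checks out. Here is the back-of-the-envelope arithmetic as a minimal Python sketch, using only the IDC numbers quoted above:

    hpc_revenue = 10.1e9    # IDC's revised 2007 HPC server revenue, in dollars
    total_revenue = 54.4e9  # total 2007 server revenue, in dollars

    share = hpc_revenue / total_revenue
    print(f"HPC share of 2007 server revenue: {share:.1%}")  # 18.6%, about a fifth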

But the interesting bit is that if you take HPC machines out of the picture, general-purpose sales would be nearly flat for 2007. And equally importantly, if you remove the HPC boxes from the mix, then the adoption rate on new server sales for virtualization would be a little bit higher than the broader market stats cited by Gartner and IDC.

HPC customers, as a rule, do not use server virtualization because of the overhead this software imposes. The benchmark tests that server virtualization vendors such as VMware are beginning to use - I am thinking here of VMmark, but also of the two-year-old SPEC virtualization benchmark effort that has yet to bear fruit - do not show the overhead their hypervisors impose.

But when the x64 platform got virtualization hypervisors a number of years ago, the performance penalty was as high as 50 per cent on some workloads, and even after hardware features to support virtualization were added to x64 chips from Intel and Advanced Micro Devices, the overhead is widely believed to be in the range of 10, 15, or 20 per cent. But since there are no independently available tests, customers really have to do their own benchmarks. And by the way, the terms of the ESX Server licensing agreement from VMware apparently do not allow people to publish the results of benchmark tests.
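If you do end up rolling your own benchmarks, the basic recipe is simple enough. Here is a minimal Python sketch, assuming you run the same workload once on a bare-metal node and once inside a guest and compare median wall-clock times; the workload command is purely a placeholder, not a real benchmark from VMware or SPEC:

    #!/usr/bin/env python3
    """Rough hypervisor-overhead check: time the same workload on bare metal
    and inside a VM, then compare. Substitute your own HPC kernel for the
    placeholder command below."""
    import statistics
    import subprocess
    import time

    WORKLOAD = ["./my_hpc_kernel", "--size", "4096"]  # hypothetical workload
    RUNS = 5

    def median_runtime(runs=RUNS):
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(WORKLOAD, check=True)
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    if __name__ == "__main__":
        # Run once on bare metal and once in the guest, then compute
        # overhead = (t_vm - t_bare) / t_bare.
        print(f"median runtime: {median_runtime():.2f} seconds")

Run it in both environments and a 20 per cent overhead, say, shows up directly as a 20 per cent longer median runtime - which, for a cluster, translates into needing roughly that many more nodes to get the same work done.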
