TPC starts designing server virt test

Not a partition-buster benchmark

The server virtualization wave might have crested by the time the Transaction Processing Performance Council (TPC) and its vendor members get a virtualization benchmark into the field, but the TPC deserves credit for coming up with a useful test to gauge how virtualized environments perform and scale, and what kind of bang for the buck they offer.

That's the plan behind the TPC-Virtualization working group, formed in December 2009 to come up with an enterprise-class workload that tests the scalability of databases and their applications in a virtualized environment.

While companies have been keen on virtualizing basic infrastructure workloads - print, file, web, and maybe even application serving - the overhead of virtualizing I/O (networking and disks) on prior generations of x64 chips and virtual machine hypervisors has made them wary of virtualizing database workloads. But now that x64 chips from Intel and Advanced Micro Devices have features to help virtualize I/O in hardware (so it doesn't have to be done in software, and is therefore more efficient), the time has come to do a proper virtual database test.
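
Whether a given box has those hardware assists is easy to check. The short Linux-only sketch below is illustrative rather than authoritative: it looks for the vmx (Intel VT-x) or svm (AMD-V) CPU flags and for an active IOMMU (Intel VT-d or AMD-Vi), the hardware that takes I/O virtualization out of software's hands.

    # check_virt_assists.py - Linux-only sketch; the paths and flags are
    # standard, but treat this as an illustration, not a support matrix.
    import os

    def cpu_flags():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    print("CPU virtualization (VT-x/AMD-V):", bool(flags & {"vmx", "svm"}))

    # On modern kernels an active IOMMU shows up under /sys/class/iommu.
    try:
        iommu_units = os.listdir("/sys/class/iommu")
    except FileNotFoundError:
        iommu_units = []
    print("IOMMU active (VT-d/AMD-Vi):", bool(iommu_units))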

"There is strong demand for a database benchmark in a virtual environment," says Raghunath Nambiar, a performance guru formerly at Hewlett-Packard and now at Cisco Systems. Nambiar is general chair of TPC Technology Conference, which will be hosted in Singapore on September 17 with all of the IT nerds afflicted by performance anxiety of the server and systems software variety. He is also involved in the development of the TPC-Virtualization benchmark test.

The TPC-Virtualization working group was formed in the wake of last year's TPCTC event because the server makers and hypervisor sellers who want to peddle virtualized products figured out that IT managers and system admins want hard numbers for comparing different virtualization techniques when it comes to running real workloads.

The TPC-Virtualization test will be roughly based on the existing TPC-E test, an online transaction processing workload that simulates the data processing behind a web-based online stock trading system.
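
For a feel of what such an OLTP brokerage workload looks like to a load driver, here is a toy mix of trade-style transactions. The names echo TPC-E's transaction types, but the weights and the driver itself are invented for illustration - the real specification pins down the exact mix and pacing.

    # toy_brokerage_mix.py - illustrative only; these weights are NOT the
    # official TPC-E transaction mix.
    import random
    from collections import Counter

    MIX = {
        "trade-order":  0.20,
        "trade-status": 0.30,
        "trade-lookup": 0.15,
        "market-watch": 0.25,
        "trade-update": 0.10,
    }

    def next_transaction() -> str:
        names = list(MIX)
        return random.choices(names, weights=[MIX[n] for n in names])[0]

    # Fire a short burst and report the observed mix.
    counts = Counter(next_transaction() for _ in range(10_000))
    for name, n in counts.most_common():
        print(f"{name:>12}: {n / 10_000:.1%}")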

If you think the simple thing would be to run TPC-E on bare metal and then atop a bunch of hypervisors in virtual mode on the same servers - so customers could figure out the overhead of running in virtual mode - Nambiar says you will be disappointed. The TPC-Virtualization test, presumably to be called TPC-V for short, will differ enough from TPC-E that bare metal and virtualized comparisons won't be possible.

Welcome to the wonderful world of vendor consortiums.

The TPC-Virtualization folks are keen on doing something better than the VMmark test put forward by x64 server virtualization juggernaut VMware: something where the workloads and virtual machine partitions themselves scale dynamically, instead of piling static partitions onto a machine until it chokes.

This practice is called tiling, and it does not reflect how workloads - particularly back-end systems like database-driven transaction processing systems - are used and scaled in the real world. Unless you are running a clustered database, like Oracle's RAC on Exadata or IBM's PureScale on Power Systems, when you need to scale an application you build up the back-end database server with a bigger SMP box. Any virtualization benchmark that stresses databases needs to do the same, expanding the underlying guest partition and its allocation of CPU, memory, and I/O as the workload grows.
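
On a live hypervisor, that kind of scale-up is a resize of the running guest rather than a redeploy. Here is a minimal sketch using the libvirt Python bindings; the domain name "dbguest", the connection URI, and the step sizes are assumptions for illustration, and the guest's configured maximums still cap what a live resize can reach.

    # grow_guest.py - sketch of scaling up a guest partition via libvirt.
    # The "dbguest" domain and qemu URI are hypothetical; adjust to taste.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("dbguest")

    # Add two vCPUs to the running guest (must stay within the domain's
    # configured maximum vCPU count).
    vcpus = dom.vcpusFlags(libvirt.VIR_DOMAIN_AFFECT_LIVE)
    dom.setVcpusFlags(vcpus + 2, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Grow memory by 4 GiB (libvirt counts in KiB; capped by maxMemory).
    state, max_mem, cur_mem, n_vcpu, cpu_time = dom.info()
    dom.setMemoryFlags(cur_mem + 4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)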

A virtualization benchmark also has to allow larger systems to support larger numbers of guest partitions, which also happens out there in the real world. That is what the tiling approach tries to capture - what Nambiar referred to as a "partition-buster benchmark" - and it is what he says the TPC-Virtualization test will most certainly not focus on.
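
A toy model shows why pure tiling flatters a big box without saying much about scaling any single workload: clone identical fixed-size tiles onto a machine of fixed capacity and aggregate throughput simply climbs until the hardware saturates, then goes flat. Every number below is invented.

    # tiling_toy.py - toy model of the tiling approach; capacity and
    # per-tile demand are invented numbers, not measurements.
    MACHINE_CAPACITY = 64.0  # abstract units of work per second
    TILE_DEMAND = 10.0       # units each identical tile tries to consume

    def aggregate_throughput(tiles: int) -> float:
        return min(tiles * TILE_DEMAND, MACHINE_CAPACITY)  # hardware is the ceiling

    for tiles in range(1, 11):
        print(f"{tiles:2d} tiles -> {aggregate_throughput(tiles):5.1f} units/sec")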

What the TPC-Virtualization test will have is elasticity, meaning that workloads will expand and contract as the test runs, compelling server and virtualization vendors to demonstrate the dynamic capabilities of their systems.
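
Mechanically, elasticity means the offered load follows a curve rather than a flat line, and the system under test has to grow and shrink its guests to track it. A minimal sketch of such a driver schedule - phase lengths and rates invented - might look like this:

    # elastic_load.py - sketch of a time-varying load schedule; the phases
    # and transaction rates are invented for illustration.
    PHASES = [  # (duration in minutes, target transactions per second)
        (10,  200),   # ramp-in at modest load
        (20, 1000),   # peak: guests must grow to hold response times
        (10,  300),   # trough: resources should be handed back
        (20,  800),   # second, smaller peak
    ]

    def target_tps(minute: int) -> int:
        """Return the offered load for a given minute of the run."""
        elapsed = 0
        for duration, tps in PHASES:
            if minute < elapsed + duration:
                return tps
            elapsed += duration
        return 0  # run is over

    for m in (0, 15, 35, 45):
        print(f"minute {m:2d}: {target_tps(m)} tps")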

Nambiar says that the TPC-Virtualization working group is hoping to get a draft specification together by June, after which the vendors that participate in the TPC will spend six months or so prototyping the test. Then comes the long ratification process, as server and database makers haggle to tweak the test here and there.

Nambiar says the benchmark will take one to two years to be finalized, and that he is "hoping for a 2011 launch". He adds that the TPC is well aware that it takes far too long to get benchmarks into the field, and that delivering this one within one to two years would be a big improvement on the usual pace.

It is so much easier when a vendor controls both the software and the benchmark, as VMware does with VMmark. But then again, VMmark is of limited value because it doesn't allow comparisons across server architectures and hypervisor architectures, and it doesn't have pricing metrics as all the TPC tests do. ®
