TPC starts designing server virt test

Not a partition-buster benchmark

The server virtualization wave might have crested by the time the Transaction Processing Performance Council (TPC) and its vendor members get a virtualization benchmark into the field, but the TPC has to be given credit for coming up with a useful test to gauge how virtualized environments perform and scale, and what kind of bang for the buck they offer.

That's the plan behind the TPC-Virtualization working group, formed in December 2009 to come up with an enterprise-class workload that tests the scalability of databases and their applications in a virtualized environment.

While companies have been keen on virtualizing basic infrastructure workloads - print, file, web, and maybe even application serving - the overhead that came with virtualizing I/O (networking and disks) on prior generations of x64 chips and virtual machine hypervisors has made them wary of virtualizing database workloads. But now that x64 chips from Intel and Advanced Micro Devices have hardware features to help virtualize I/O - meaning it doesn't have to be done in software and is therefore more efficient - the time has come to do a proper virtual database test.
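
As a rough illustration of the plumbing involved: on a Linux host you can check whether the chip advertises those virtualization extensions, and whether the hardware I/O virtualization unit (the IOMMU) is actually switched on. A minimal sketch, assuming a Linux box; the vmx/svm CPU flags and the iommu_groups sysfs directory are standard kernel interfaces, nothing TPC-specific:

```python
# Minimal sketch: detect CPU virtualization extensions and an active
# IOMMU (the hardware that lets a hypervisor hand I/O to guests without
# a software translation layer). Assumes a Linux host; the paths are
# standard kernel interfaces, not anything TPC-specific.
import os

def cpu_virt_flags(path="/proc/cpuinfo"):
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"vmx": "vmx" in flags,   # Intel VT-x
            "svm": "svm" in flags}   # AMD-V

def iommu_active(path="/sys/kernel/iommu_groups"):
    # The kernel populates this directory only when hardware I/O
    # virtualization is present and enabled in firmware.
    return os.path.isdir(path) and bool(os.listdir(path))

if __name__ == "__main__":
    print("CPU virt extensions:", cpu_virt_flags())
    print("Hardware I/O virtualization (IOMMU):", iommu_active())
```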

"There is strong demand for a database benchmark in a virtual environment," says Raghunath Nambiar, a performance guru formerly at Hewlett-Packard and now at Cisco Systems. Nambiar is general chair of TPC Technology Conference, which will be hosted in Singapore on September 17 with all of the IT nerds afflicted by performance anxiety of the server and systems software variety. He is also involved in the development of the TPC-Virtualization benchmark test.

The TPC-Virtualization working group was formed in the wake of last year's TPCTC event, after all of the server makers and hypervisor sellers peddling virtualized products figured out that IT managers and system admins want some hard numbers for comparing different virtualization techniques when it comes to running real workloads.

The TPC-Virtualization test will be roughly based on the existing TPC-E test, an online transaction processing workload that simulates the data processing behind a web-based online stock trading system.
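
For flavor, an OLTP trade order boils down to a short ACID transaction against the brokerage database. The sketch below uses an invented two-table schema - it is emphatically not the real TPC-E schema - just to show the shape of such a transaction:

```python
# Illustrative only: a toy trade-order transaction in the spirit of an
# OLTP stock-trading workload. The schema is invented for this sketch
# and is NOT the actual TPC-E schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE trade (id INTEGER PRIMARY KEY AUTOINCREMENT,
                        account_id INTEGER, symbol TEXT,
                        qty INTEGER, price REAL);
    INSERT INTO account VALUES (1, 10000.0);
""")

def trade_order(account_id, symbol, qty, price):
    cost = qty * price
    with conn:  # one ACID transaction: debit the account, book the trade
        cur = conn.execute(
            "UPDATE account SET balance = balance - ? "
            "WHERE id = ? AND balance >= ?",
            (cost, account_id, cost))
        if cur.rowcount == 0:
            raise ValueError("insufficient funds")
        conn.execute(
            "INSERT INTO trade (account_id, symbol, qty, price) "
            "VALUES (?, ?, ?, ?)",
            (account_id, symbol, qty, price))

trade_order(1, "VMW", 100, 72.50)
print(conn.execute("SELECT balance FROM account").fetchone())  # (2750.0,)
```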

If you think the simple thing would be to run TPC-E on bare metal and then atop a bunch of hypervisors in virtual mode on the same servers - so customers could figure out the overhead of running in virtual mode - Nambiar says you will be disappointed. The TPC-Virtualization test, presumably to be called TPC-V for short, will have enough differences from TPC-E that bare metal and virtualized comparisons won't be possible.

Welcome to the wonderful world of vendor consortiums.

The TPC-Virtualization folks are keen on doing something better than the VMmark test put forward by x64 server virtualization juggernaut VMware - something where the workloads and virtual machine partitions themselves scale dynamically, instead of piling ever more static partitions onto a machine until it chokes.

This latter practice is called tiling, and it does not reflect how workloads - particularly back-end systems like database-driven transaction processing systems - are used and scaled in the real world. Unless you are running a clustered database, like Oracle's RAC on Exadata or IBM's PureScale on Power Systems, when you need to scale an application you build up the back-end database server with a bigger SMP. And any virtualization benchmark that stress-tests databases needs to do the same, expanding the underlying guest partition and its allocation of CPU, memory, and I/O as the workload expands.
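
For a sense of what "building up" a guest partition looks like in practice, here is a minimal sketch using the libvirt Python bindings against a KVM host. The domain name tpc-db-guest and the target sizes are invented for illustration, and live resizing only succeeds if the guest was defined with enough maximum vCPU and memory headroom; TPC-V itself will be hypervisor-agnostic, so this is just one concrete example:

```python
# Sketch: grow a running guest's CPU and memory allocation in place -
# the "scale up the partition" model, as opposed to the tiling model.
# Assumes the libvirt Python bindings and a KVM host; the domain name
# "tpc-db-guest" and the target sizes are invented for illustration.
# Live changes only work if the guest was defined with enough maximum
# vCPU and memory headroom.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("tpc-db-guest")

# Grow the guest to 8 vCPUs and 32 GiB of RAM without a reboot.
dom.setVcpusFlags(8, libvirt.VIR_DOMAIN_AFFECT_LIVE)
dom.setMemoryFlags(32 * 1024 * 1024,        # libvirt takes KiB
                   libvirt.VIR_DOMAIN_AFFECT_LIVE)

print("vCPUs now:", dom.vcpusFlags(libvirt.VIR_DOMAIN_AFFECT_LIVE))
conn.close()
```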

A virtualization benchmark also has to allow larger systems to support larger numbers of guest partitions, which also happens out there in the real world. That much the tiling approach does, but it yields what Nambiar called a "partition-buster benchmark" - which he said the TPC-Virtualization test will most certainly not focus on.

What the TPC-Virtualization test will have is elasticity, meaning that workloads will expand and contract as the test runs, compelling server and virtualization vendors to demonstrate the dynamic capabilities of their systems.
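
What that elasticity might look like from the benchmark driver's side: a toy sketch in which the offered transaction rate follows a slow wave rather than a flat plateau, forcing the system under test to grow and shrink its guests to keep up. The schedule, rates, and no-op "transactions" here are invented; the real TPC-V load profile had not been published at the time of writing:

```python
# Sketch of what "elasticity" means for a benchmark driver: the offered
# load ramps up and down over the run. The sinusoidal schedule and the
# rates are invented for illustration, not taken from any TPC draft.
import math
import time

def offered_load(t, period=600.0, base=100, swing=80):
    # Target transactions/sec follows a slow sine wave: the workload
    # expands and contracts instead of holding a static plateau.
    return base + swing * math.sin(2 * math.pi * t / period)

def run(duration=30, tick=1.0):
    start = time.time()
    while (t := time.time() - start) < duration:
        rate = offered_load(t)
        # A real driver would dispatch `rate` transactions this second;
        # here we just report the target to show the shape of the curve.
        print(f"t={t:6.1f}s  target rate={rate:6.1f} tps")
        time.sleep(tick)

if __name__ == "__main__":
    run()
```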

Nambiar says that the TPC-Virtualization working group is hoping to get a draft specification together by June, and then vendors that participate in the TPC will start prototyping a test for six months or so. Then comes the long ratification process as server and database makers haggle to tweak the test here and there.

Nambiar says the benchmark will take one to two years to be finalized, and that he is "hoping for a 2011 launch". He adds that the TPC is well aware it takes far too long to get benchmarks into the field, and that shipping this one within one to two years would be quick by the consortium's standards.

It is so much easier when a vendor controls both the software and the benchmark, as VMware does with VMmark. But then again, VMmark is of limited value because it doesn't allow comparisons across server architectures and hypervisor architectures, and it doesn't have the pricing metrics that all the TPC tests do. ®
