Original URL: https://www.theregister.com/2009/04/24/ibm_nehalem_sw_vpu/

IBMware priced 40% higher on Nehalem

More power to you. More money to Big Blue

By Timothy Prickett Morgan

Posted in Channel, 24th April 2009 04:19 GMT

If you are thinking about running IBM's systems, database, or middleware software on one of Intel's new "Nehalem EP" Xeon 3500 or 5000 series chips, brace yourself for some price increases.

For nearly three years, IBM has been selling selected software from its Software Group on various server platforms on a quasi-performance-related pricing scheme based on something called Processor Value Units, which might as well be called "We Just Guessed" considering how little a PVU has to do with the actual performance of the processors upon which the software is supposedly priced.

This week, IBM announced PVU ratings for the new quad-core Xeon 3500 processors, which are put into single-socket boxes, as well as for the Xeon 5500s, which are low-speed dual-core and higher-speed quad-core chips designed for two-socket machines. As we have been reporting, the performance improvement for servers using Nehalem EP chips can be as high as twice that of the quad-core Xeon 5400 processors they replace - provided you do some application tuning and recompiling - and can rival the performance of a four-socket server using Intel's six-core "Dunnington" Xeon 7400 processors.

Given all this, you would think that IBM's PVU rating for the Nehalem EPs would be twice that of the Xeon 5400s. Ah, but it's not that simple. In some ways, what IBM has done is more fair than that and, in others, less fair.

The point behind the PVU scheme is to lump processor types and families together and give them a single rating so IBM's Software Group sales people don't have to resort to performance benchmarks to price the company's software. Simplification is good for sales people. And thus far, it has been good for companies using Intel's Xeon x64 chips as well as Advanced Micro Devices' Opteron equivalents.

All Xeons and all Opterons were rated at 50 PVUs per core, regardless of core speed. That is certainly attractive compared to IBM's dual-core Power6 processors used in midrange and high-end AIX, Linux, and i platforms or the quad-core z10 processors, which are rated at 120 PVUs per core. But with the Nehalem EPs, the PVU ratings have been kicked up to 70 per core, and that represents a 40 per cent increase in software costs for customers that migrate software from earlier Xeons or Opterons to the new Nehalem EPs.
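As a back-of-the-envelope illustration - the per-PVU price below is hypothetical, but the per-core ratings are IBM's - the jump from 50 to 70 PVUs per core works out to the same 40 per cent increase on any core count:

```python
# Per-core PVU ratings cited above; the per-PVU price is a made-up figure
# purely for illustration, since actual IBM software prices vary by product.
pvu_per_core = {
    "Xeon 5400 / Opteron": 50,
    "Xeon 3500 / 5500 (Nehalem EP)": 70,
    "Power6 / z10": 120,
}

price_per_pvu = 100.0   # hypothetical dollars per PVU
cores = 8               # e.g. a two-socket, quad-core box

for chip, pvu in pvu_per_core.items():
    total_pvus = cores * pvu
    print(f"{chip}: {total_pvus} PVUs -> ${total_pvus * price_per_pvu:,.0f}")

# 560 PVUs on Nehalem EP versus 400 PVUs on the older Xeons for the same
# eight cores: 70/50 = 1.4, or a 40 per cent bigger software bill.
```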

PVU on the table

The latest and greatest PVU table put out by IBM is here, and you can see how the pricing is not really based on the rough performance of the chips any more, even if that was the intent:

IBM's PVU ratings for server processors

If IBM wanted to be more fair, it would rate the Nehalem EP chips at around 100 PVUs, about twice that of the Xeon 5400 parts. But that can't even come close to fixing the issues with the PVU pricing scheme, since within each processor category there is a wide variety of clock speeds and performance. Basically, this PVU scheme has the right intentions in terms of simplification, but it makes little more sense than charging by system or by processor socket or by the clock speed of the processors running in the box. You can argue this a million different ways. But there is a way to do real PVUs, and IBM knows full well how to do it.
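A rough sketch shows why a flat per-family rating is barely better than charging by socket or by clock: the relative throughput figures below are invented for illustration, but the 70 PVUs per core applies across the whole Nehalem EP family.

```python
# Every Nehalem EP core carries 70 PVUs regardless of clock speed.
# The relative per-core throughput figures are hypothetical, just to show
# how a flat rating skews the price per unit of performance.
nehalem_ep_parts = {
    "slow dual-core part (1.86 GHz)": 1.0,
    "fast quad-core part (2.93 GHz)": 1.6,
}

pvu_per_core = 70
for part, rel_perf in nehalem_ep_parts.items():
    print(f"{part}: {pvu_per_core} PVUs/core, "
          f"{pvu_per_core / rel_perf:.0f} PVUs per unit of work")

# The buyer of the slowest chip in the family pays the most PVUs per unit
# of performance delivered - the opposite of "value"-based pricing.
```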

What IBM ought to do is establish a rating system across all processor types and speeds and price based on this. Each and every mainframe has a benchmark number called a Large Systems Performance Reference (LSPR) rating, which the IT consultancies and IBM use to convert back to the ever-popular MIPS (millions of instructions per second) rating used by mainframe shops. IBM also has what was once a more precise software pricing metric, called Metered Service Units (MSUs), which it uses to price mainframe software more or less in line with the capacity that companies actually use. Every possible clock speed combination on mainframes has a different rating.

On Power-based servers, IBM has the Commercial Performance Workload (CPW) benchmark for OS/400 and i workloads and the Relative Performance (rPerf) benchmark for AIX workloads. Both are loosely based on the TPC-C online transaction processing benchmark test, with the I/O requirements curtailed and coded to stress the processors and memory. Each and every possible clock speed and processor core count in a Power Systems box has a different CPW or rPerf rating.

Even though IBM has fine-grained performance data (which is obviously limited to a single workload and not necessarily representative of a particular customer, I know), for decades IBM assigned machines into software tiers that often made about as much sense as PVUs. And when multicore processors became the norm in the 2000s, IBM switched to per-core pricing for operating systems, which is simple and which benefits those who keep moving to faster processors.

The point, as far as mainframe and Power Systems shops are concerned, is that they know how to reckon the relative performance of the machine they have - which might be one, two, three, or four years old, in a given configuration - against the machine they want to move to, which would presumably have more capacity. You just subtract one rating from the other and there's your incremental performance gain on which to calculate whatever incremental software fees are due. And if there is no gain, then there is no increase. And if customers virtualize and use only a portion of a machine to run the IBM software, then the price should - gasp! - go down.
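A minimal sketch of that arithmetic, using made-up CPW/rPerf-style ratings and a made-up fee per rating point:

```python
def incremental_fee(old_rating, new_rating, fee_per_point, utilisation=1.0):
    """Rating-based pricing sketch: charge on the change in usable capacity.

    old_rating / new_rating: CPW- or rPerf-style figures (hypothetical here).
    utilisation: fraction of the new machine actually running the IBM software,
                 e.g. a virtualized partition rather than the whole box.
    """
    delta = new_rating * utilisation - old_rating
    # Positive delta: pay an incremental fee. Zero: no increase.
    # Negative: the bill should, as argued above, actually go down.
    return delta * fee_per_point

# Hypothetical example: a three-year-old box rated 10.0, replaced by one
# rated 18.0, with the IBM software confined to half of the new machine.
print(incremental_fee(10.0, 18.0, fee_per_point=500.0))                   # 4000.0: charge on the +8.0 gain
print(incremental_fee(10.0, 18.0, fee_per_point=500.0, utilisation=0.5))  # -500.0: half the box, smaller bill
```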

To be fair, what IBM needs to do is have a single CPW-style benchmark that spans all machines, and base PVUs on that. But it is far easier and less costly to pull the numbers out of its Big Blue, er, hat. ®