TPC slaps Oracle on benchmark claims

Watchdogs to Larry: 'You lie!'

Oracle let its marketing mouth get ahead of its brain with the Exadata 2 cluster system. Today, the Transaction Processing Performance Council (TPC), which administers the TPC family of transaction processing and data warehousing benchmarks, slapped Oracle with a fine and a muzzle order relating to claims it has been making in advertisements about how its iron stacks up against alternatives such as IBM's Power 595 behemoth.

The TPC is quite picky about the rules for citing benchmark results, and Oracle has stepped over the line, according to Michael Majdalany, the administrator of the benchmarks. You're not allowed to estimate results for one machine from results for another or tell people about benchmarks that have not been through the formal TPC vetting process, among other restrictions.

Oracle may have done both. A month ago, Oracle ran advertisements in the Wall Street Journal and The Economist proclaiming "Sun + Oracle is Faster" and telling everyone to expect a product announcement on October 14 demonstrating that a hybrid Oracle-Sun setup would post two-digit performance (presumably tens of millions of transactions per minute) on the TPC-C online transaction processing test, against IBM's 6 million transactions per minute result on its Power 595 running AIX and DB2. As El Reg previously told you, the original ad showed Oracle and Sun working together on a Sparc cluster of some sort.

Two weeks later, Oracle ran another set of adverts, saying that should it prevail in its $7.4bn (net around $5bn, perhaps) acquisition of Sun, it would increase spending on the Sparc platform and Solaris Unix, beef up the sales effort behind Sparc/Solaris, and tightly integrate Oracle database and middleware software with the platform to improve its performance. And then, ironically, only five days later, Oracle ditched HP's x64 iron, adopted Sun's x64 iron and flash storage, and cranked out the Exadata 2 cluster.

Presumably we will see a Sparc/Solaris variant of the Exadata on October 14, most likely using the quad-socket T5440 Sparc T2+ servers instead of x64 servers as database nodes. Why Oracle decided to do the Exadata 2 launch ahead of its own OpenWorld trade show rather than waiting is a mystery. But when you're Larry Ellison, chief executive officer at Oracle, you can do whatever you want. Most of the time.

Ever since Oracle and Sun Microsystems speed-launched the Exadata 2 cluster for data warehousing and online transaction processing two weeks ago, I have been scouring the usual public benchmark repositories for proof of the claims Oracle has been making in its ads and at the Exadata launch. Ellison said that the Exadata 2 setup was five times as fast as data warehousing boxes from Teradata and Netezza (leading everyone to believe there was a TPC-H benchmark to prove it), and further that for online transaction processing (by which everyone knew Ellison was insinuating a TPC-C test), two racks of Exadata 2 would match the performance of an IBM Power 595 at one quarter the price.

No surprises here, but it was TPC consortium member IBM that made the formal complaint that Oracle was bending the rules. Teradata and Netezza are also TPC members and should have been equally annoyed, but Oracle didn't make any claims in the ads relating to their products. That said, what Oracle said at the launch was just as much a violation of TPC principles, so it's a wonder Oracle wasn't cited for that as well.

The TPC has fined Oracle $10,000 for violating its fair use rules, as you can see from the letter that Majdalany sent to Michael Brey, Oracle's representative at the TPC consortium. It has also required that Oracle immediately stop running the ads in any publication and remove a Web page at www.oracle.com/sunoraclefaster where it was making performance claims. That page has indeed been removed.

"Oracle's claim that it is faster than IBM using a TPC-C benchmark result it claimed would be announced on October 14 was not supported because Oracle did not have a TPC result at the time of publication," Majdalany said in a statement. "The TPC requires that claims based on TPC benchmarks must be demonstrable using publicly available data from official TPC benchmark results." Majdalany added that as of today, Oracle has not submitted any paperwork to uphold its performance claims.

That doesn't mean there won't be a double-digit TPC-C OLTP result on October 14 for Sparc-based machines of some sort, or that Oracle and Sun are not cooking up OLTP and data warehousing benchmarks for the x64 and Sparc variants of the Exadata boxes.

While IBM will count this as a victory, all IT vendors game the TPC-C test, which was useful when it was announced almost two decades ago but which now admits so much tweaking and tuning that you have to be careful about how you use the results.

Big Blue, for one, seems to have mastered the black art of de-randomizing the data coming out of the TPC-C transactions, steering data to precise bits of system cache on Power Systems boxes running AIX and DB2 - or so other server makers have claimed to me. This tuning may account for nearly half of the performance of the box.

I base this assertion on IBM's own Commercial Processing Workload (CPW) benchmark for the i/OS-Power platform, which is loosely based on the TPC-C test, and on its similar Relative Performance (rPerf) rating for the AIX-Power platform, which is likewise derived from TPC-C. Several years ago, the OLTP performance of Power servers started to diverge, with AIX machines suddenly able to perform around 50 per cent more OLTP work than their i/OS counterparts. I have attempted, for many years and to no avail, to get clarification on this divergence from Big Blue's benchmarkers.
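
To make that de-randomizing charge concrete, here is a toy Python sketch - my own illustration with invented names and numbers, not anything IBM has published - of why steering a nominally random key distribution toward cache-resident rows inflates hit rates, and with them benchmark throughput:

    import random

    CACHE_LINES = 1_000    # pretend 1,000 hot rows fit in cache
    TABLE_ROWS = 100_000   # out of 100,000 rows in the table
    LOOKUPS = 200_000      # simulated transactions

    def hit_rate(pick_row):
        cached = set(range(CACHE_LINES))  # rows 0..999 are "pinned" in cache
        hits = sum(1 for _ in range(LOOKUPS) if pick_row() in cached)
        return hits / LOOKUPS

    def uniform():
        # Honest behaviour: every row is equally likely to be touched.
        return random.randrange(TABLE_ROWS)

    def skewed():
        # "De-randomized" behaviour: 90 per cent of lookups are steered
        # at the rows already sitting in cache.
        if random.random() < 0.9:
            return random.randrange(CACHE_LINES)
        return random.randrange(TABLE_ROWS)

    print("uniform hit rate: %.1f%%" % (100 * hit_rate(uniform)))  # roughly 1 per cent
    print("skewed hit rate:  %.1f%%" % (100 * hit_rate(skewed)))   # roughly 90 per cent

The real TPC-C spec mandates a particular non-uniform random (NURand) key distribution, so any steering of the sort alleged would have to happen below the workload generator, in how the database and operating system place and pin hot data. The sketch only shows why the payoff would be worth chasing.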

I don't expect an answer today. But by bringing this up, maybe Oracle and Sun (and HP, which has complained to me about this for years) can encourage the TPC to take a hard look at what IBM is doing, too. Or, at the very least, get Big Blue to explain how one machine seems to do a lot more work than the other. I personally don't believe that DB2 on the i platform is that much worse than DB2 on the Unix boxes. ®
