Oracle shows off M9000s for data warehousing

Not an Exadata

Oracle may not be as chatty as the formerly independent Sun Microsystems, but it sure is more keen on using benchmark tests to help make the case for the products it sells.

And so Oracle has whipped out yet another benchmark, in this case using the TPC-H data warehousing test, to show that the top-end Sparc Enterprise M9000 system can keep pace with whatever its Unix rivals can deliver.

Rather than test a clustered system on the data warehousing benchmark, Oracle decided to put the biggest Sparc/Solaris box it has in the barn through the TPC-H paces. However, it only ran the test on half the horse, which is a bit odd.

Sun tested a 32-socket Sparc Enterprise M9000 box using the most current 2.88 GHz Sparc64-VII processors; both the processors and the systems are designed and built by Sparc partner Fujitsu. These Sparc64-VII processors were announced back in October 2009, when the Sun acquisition by Oracle was still not a done deal, and at the time Sun and Fujitsu would only sell M9000 boxes using the quad-core 2.88 GHz chips if customers took a box with all 64 sockets (across two cabinets) loaded up to the gills with the chips.

The Sparc64-VII chips have four cores on a single die, along with 6 MB of L2 cache memory; they also support two threads per core using simultaneous multithreading (SMT). For some reason, Oracle turned off SMT during the test and only loaded up half the cores in the box. That would seem to imply that something - perhaps the Oracle 11g R2 Enterprise Edition database used in the TPC-H test - can't scale well beyond 128 threads. Server makers do this kind of crimping all the time, and for what everyone suspects is the same reason.

Oracle loaded up the M9000 machine with 512 GB of main memory, which is again a fraction of the capacity engineered in the box. The M9000 uses four-socket cell boards that are glued into an SMP through a backplane; each cell board has 32 memory slots and the machine uses 8 GB memory sticks, for a maximum memory of 256 GB per cell board and 4 TB per system.

To feed this data warehousing engine, Oracle slapped on a single F5100 flash array (with eighty 24 GB flash modules), 32 of its J4200 disk arrays (each with a dozen 600 GB disks), and two of its Sun Storage 6180 arrays (with sixteen 300 GB disks each). When processing against the TPC-H database, implemented in the Oracle 11g R2 Enterprise Edition database with partitioning and automatic storage management slapped on it, the M9000 was able to handle 188,230 queries per hour.

One of the reasons the TPC suite of benchmarks is important is that it compels vendors to provide list pricing for the complete configuration under test and then to show the discount a typical user should expect on the iron. The test also includes three years of maintenance costs. Sun and Fujitsu do not provide pricing information for the bigger Sparc Enterprise M series machines, so the recent TPC-H test is a rare look at pricing for the M9000 box.

The base M9000 costs $275,000, and each four-way cell board in the machine, equipped with 64 GB of main memory, costs $314,000. Throw in a few disks and other peripheral cards, and you're up to $2.97m for the 32-way server. Oracle premium hardware support - which is now the only option, since lower levels of Sun support are gone - costs $355,569 per year, or another $1.07m over the three years.

The system includes an Ultra 27 workstation as a management console. That 236 TB of storage ran to $934,531, plus $336,341 in maintenance costs over three years.

Oracle slammed down the cost of the 11g R2 Enterprise Edition database by claiming that the machine only had 25 users per core and using its three-year Named User Plus pricing, including the 25 per cent discount for multicore processors. Oracle's 11g R2 Enterprise Edition costs $475 per seat, and is supposed to have a $209 per year support cost. But Oracle merely slapped a $2,300 per-incident three-year support contract on the system, ducking about $2m in support costs. And the named user pricing that Oracle used instead of per-processor pricing cut the cost of the Oracle 11g database licenses in half, to $1.14m.

When you add it all up, the Oracle Sparc Enterprise M setup cost $6.73m, and after a 43.5 per cent discount, the price dropped to $3.8m, or $20.19 per QPH. If you back into the per-processor licensing and add in proper support for the database, then apply the 43.5 per cent discount, the price would be $5.58m or closer to $30 per QPH.
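The arithmetic above is easy to sanity-check. A quick sketch, using only the figures quoted in this article (the $5.58m re-priced total is the article's own estimate, not a number from the TPC filing):

```python
# Back-of-the-envelope check of the M9000 TPC-H pricing quoted above.
qph = 188_230            # queries per hour on the 3 TB TPC-H run
list_price = 6.73e6      # total list price of the tested configuration
discount = 0.435         # discount applied in the filing

discounted = list_price * (1 - discount)
print(f"discounted price: ${discounted / 1e6:.2f}m")          # ~ $3.80m
print(f"price/performance: ${discounted / qph:.2f} per QPH")  # ~ $20.20

# Re-priced with per-processor licensing and proper database support,
# as the article estimates:
repriced = 5.58e6
print(f"re-priced: ${repriced / qph:.2f} per QPH")            # ~ $29.64
```

The rounding lands within a penny of the $20.19 per QPH figure in the filing, and confirms the "closer to $30 per QPH" estimate.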

Welcome to the wonderful world of benchmarketing. By the way, IBM's Power 595 server that was tested in November 2009 was configured with an absurdly inexpensive Sybase IQ 15 database to achieve its results. Nothing against Sybase, but it is only used in niche areas, like financial services.

The Oracle database is the one most customers building data warehouses on the 64-core Power 595 would actually use, and once you untwist the pricing, it would be just as expensive on the Power 595 as it really is on the Sparc Enterprise M9000. In any event, that 64-core Power 595 machine, running AIX 6.1 and using 5 GHz dual-core Power6 processors, 512 GB of main memory, and 20 TB of disk (which seemed a bit skinny), was able to process 156,537 QPH on the TPC-H test using the 3 TB database.

The box cost $6.25m, and after a 48.4 per cent discount, the bang for the buck for this Power 595 machine came to $20.60 per QPH. This is clearly the metric Oracle needed to beat, and one it will need to beat again later this fall when IBM puts its 256-core Power 795 behemoth into the field.
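The same arithmetic applied to the IBM result, again using only the figures quoted here, shows how close the two machines land:

```python
# The Power 595 TPC-H result Oracle had to beat, from the figures above.
qph = 156_537          # queries per hour on the 3 TB TPC-H run
list_price = 6.25e6    # total list price of the tested configuration
discount = 0.484       # discount applied in the filing

discounted = list_price * (1 - discount)
print(f"${discounted / qph:.2f} per QPH")  # ~ $20.60
```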

The most important thing for the customers of the formerly independent Sun is that Oracle is at least in the fight, and is showing that the Sparc Enterprise M9000 can deliver the queries and at a comparable price. It is also important for Oracle to demonstrate that the M9000 has more oomph and better bang for the buck compared to the Sun Fire E25K, the last of the big machines actually designed and built by Sun.

An E25K tested three years ago using 72 dual-core UltraSparc-IV+ processors running at 1.8 GHz, 288 GB of memory, and 63 TB of disk was able to handle 114,714 QPH on the same 3 TB TPC-H test running Solaris 10 and Oracle 10g R2 with the partitioning and storage management. Oracle did the same named user and cheap support pricing on this data warehouse, and was able to show a price/performance of $36.68 per QPH.

A year earlier, an E25K using 1.5 GHz chips and having more expensive server and storage hardware pricing could only do 105,431 QPH at a cost of $54.87 per QPH. Two years earlier, an E25K using 1.2 GHz UltraSparc-IV processors could do 59,436 QPH at a cost of $101 per QPH. In five years' time, Sun - and now Oracle - has been able to boost performance by more than a factor of three and boost price/performance by a factor of five.

So, yeah, there is some funniness in the benchmark pricing. But for the sake of negotiation, the TPC-H tests are still useful: they show how far Oracle and IBM are willing to discount to close a deal. ®
