Oracle shows off M9000s for data warehousing

Not an Exadata

Oracle may not be as chatty as the formerly independent Sun Microsystems, but it sure is more keen on using benchmark tests to help make the case for the products it sells.

And so Oracle has whipped out yet another benchmark, in this case using the TPC-H data warehousing test, to show that the top-end Sparc Enterprise M9000 system can keep pace with whatever its Unix rivals can deliver.

Rather than test a clustered system on the data warehousing benchmark, Oracle decided to put the biggest Sparc/Solaris box it has in the barn through the TPC-H paces. However, it only ran the test on half the horse, which is a bit odd.

Sun tested a 32-socket Sparc Enterprise M9000 box using the most current 2.88 GHz Sparc64-VII+ processors; both the processors and the systems are designed and built by Sparc partner Fujitsu. These chips were announced back in October 2009, when the Sun acquisition by Oracle was still not a done deal, and at the time, Sun and Fujitsu would only sell M9000 boxes using the quad-core 2.88 GHz chips if customers took a box with all 64 sockets (across two cabinets) loaded up to the gills with the chips.

The Sparc64-VII chips have four cores on a single die, along with 6 MB of L2 cache memory; they also support two threads per core using simultaneous multithreading (SMT). For some reason, Oracle turned off SMT during the test and only loaded up half the cores in the box. That would seem to imply that something - perhaps the Oracle 11g R2 Enterprise Edition database used in the TPC-H test - can't scale well beyond 128 threads. Server makers do this kind of crimping all the time, and for what everyone suspects is the same reason.

Oracle loaded up the M9000 machine with 512 GB of main memory, which is again a fraction of the capacity engineered in the box. The M9000 uses four-socket cell boards that are glued into an SMP through a backplane; each cell board has 32 memory slots and the machine uses 8 GB memory sticks, for a maximum memory of 256 GB per cell board and 4 TB per system.

To feed this data warehousing engine, Oracle slapped on a single F5100 flash array (with eighty 24 GB flash modules), 32 of its J4200 disk arrays (each with a dozen 600 GB disks), and two of its Sun Storage 6180 arrays (with sixteen 300 GB disks each). When processing against the TPC-H database, implemented in the Oracle 11g R2 Enterprise Edition database with partitioning and automatic storage management turned on, the M9000 was able to handle 188,230 queries per hour.

One of the reasons why the TPC suite of benchmarks is important is that it compels vendors to provide list pricing for the complete configuration under test and then show the discount that a typical user should expect on the iron. The test also includes three years of maintenance costs. Sun and Fujitsu do not provide pricing information for the bigger Sparc Enterprise M series machines, so the recent TPC-H test is a rare look at pricing for the M9000 box.

The base M9000 costs $275,000, and each four-way cell board in the machine, equipped with 64 GB of main memory, costs $314,000. Throw in a few disks and other peripheral cards, and you're up to $2.97m for the 32-way server. Oracle premium hardware support - which is now the only option, since lower levels of Sun support are gone - costs $355,569 per year, or another $1.07m over the three years.

The system includes an Ultra 27 workstation as a management console. That 236 TB of storage ran to $934,531, plus $336,341 in maintenance costs over three years.

Oracle slammed down the cost of the 11g R2 Enterprise Edition database by claiming that the machine only had 25 users per core and used its three-year Named User Plus pricing, including the 25 per cent discount for multicore processors. Oracle's 11g R2 Enterprise Edition costs $475 per seat, and is supposed to have a $209 per year support cost. But Oracle merely slapped a $2,300 per-incident three-year support contract on the system, ducking about $2m in support costs. And the named user pricing that Oracle used instead of per-processor pricing cut the cost of the Oracle 11g database licenses in half, to $1.14m.
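For the curious, the $1.14m licence figure can be reconstructed from the numbers quoted above alone (a quick sketch, assuming the 128 cores implied by the 32-socket, four-core-per-chip configuration described earlier):

```python
# Reconstructing the Oracle 11g R2 EE licence figure from the numbers
# in the story: 25 named users per core, a 25 per cent multicore
# discount, and $475 per seat, on a 32-socket / 128-core machine.
cores = 32 * 4                   # 32 sockets, four cores per chip
users = cores * 25               # 25 Named User Plus seats per core
multicore_discount = 0.25        # the multicore discount cited above
licences = users * (1 - multicore_discount) * 475
print(f"${licences:,.0f}")       # $1,140,000 -- the $1.14m in the story
```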

When you add it all up, the Oracle Sparc Enterprise M setup cost $6.73m, and after a 43.5 per cent discount, the price dropped to $3.8m, or $20.19 per QPH. If you back into the per-processor licensing and add in proper support for the database, then apply the 43.5 per cent discount, the price would be $5.58m or closer to $30 per QPH.
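Those price/performance figures check out against the quoted totals (a quick sketch using only the dollar amounts and QPH result from the story):

```python
# Checking the TPC-H price/performance figures quoted above.
QPH = 188_230                    # queries per hour on the 3 TB TPC-H test

list_price = 6.73e6              # total list price of the tested config
discount = 0.435                 # the 43.5 per cent discount
discounted = list_price * (1 - discount)
print(f"${discounted / QPH:.2f} per QPH")   # roughly the $20.19 quoted

# Restated with per-processor licensing and full database support
restated = 5.58e6
print(f"${restated / QPH:.2f} per QPH")     # close to the ~$30 figure
```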

Welcome to the wonderful world of benchmarketing. By the way, IBM's Power 595 server that was tested in November 2009 was configured with an absurdly inexpensive Sybase IQ 15 database to achieve its results. Nothing against Sybase, but it is only used in niche areas, like financial services.

The Oracle database is the one that would be used by most customers building data warehouses on the 64-core Power 595, and once you untwist the pricing, it would be just as expensive on the Power 595 as it really is on the Sparc Enterprise M9000. In any event, that 64-core Power 595 machine, running AIX 6.1 and using 5 GHz dual-core Power6 processors, 512 GB of main memory, and 20 TB of disk (which seems a bit skinny), was able to process 156,537 QPH on the TPC-H test using the 3 TB database.

The box cost $6.25m, and after a 48.4 per cent discount, the bang for the buck for this Power 595 machine came to $20.60. This is clearly the metric Oracle needed to beat, and one it will need to beat later this fall when IBM puts its 256-core Power 795 behemoth into the field.

The most important thing for the customers of the formerly independent Sun is that Oracle is at least in the fight, and is showing that the Sparc Enterprise M9000 can deliver the queries at a comparable price. It is also important for Oracle to demonstrate that the M9000 has more oomph and better bang for the buck compared to the Sun Fire E25K, the last of the big machines actually designed and built by Sun.

An E25K tested three years ago using 72 dual-core UltraSparc-IV+ processors running at 1.8 GHz, 288 GB of memory, and 63 TB of disk was able to handle 114,714 QPH on the same 3 TB TPC-H test running Solaris 10 and Oracle 10g R2 with the partitioning and storage management. Oracle did the same named user and cheap support pricing on this data warehouse, and was able to show a price/performance of $36.68 per QPH.

A year earlier, an E25K using 1.5 GHz chips and having more expensive server and storage hardware pricing could only do 105,431 QPH at a cost of $54.87 per QPH. Two years earlier, an E25K using 1.2 GHz UltraSparc-IV processors could do 59,436 QPH at a cost of $101 per QPH. In five years' time, Sun - and now Oracle - has been able to boost performance by more than a factor of three and boost price/performance by a factor of five.
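Those five-year improvement claims can be sanity-checked against the per-generation figures quoted above:

```python
# Sanity check on the five-year improvement claims, using the QPH and
# $/QPH figures quoted for each generation in the story.
results = {
    "E25K 1.2 GHz":   (59_436, 101.00),
    "E25K 1.5 GHz":   (105_431, 54.87),
    "E25K 1.8 GHz":   (114_714, 36.68),
    "M9000 2.88 GHz": (188_230, 20.19),
}

first_qph, first_ppq = results["E25K 1.2 GHz"]
last_qph, last_ppq = results["M9000 2.88 GHz"]
print(f"performance up {last_qph / first_qph:.1f}x")        # just over 3x
print(f"price/performance up {first_ppq / last_ppq:.1f}x")  # about 5x
```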

So, yeah, there is some funniness in the benchmark pricing. But for the sake of negotiation, the TPC-H test is still useful, showing how far Oracle and IBM are willing to discount to close a deal. ®
