Oracle and IBM fight for the heavy workload
Count the cost
Whether customers choose the high-capacity or the high-performance option, the price is the same: $1.1m per rack. Each of the 12 disk drives in an Exadata storage server also requires a $10,000 software licence, which works out to $120,000 per storage node and $1.68m across all 14 storage nodes.
Oracle's database software is not included with this machine, and neither is RAC. The 11g Enterprise Edition database costs $47,500 per core, which a 0.5 core scaling factor brings down to $23,750; RAC costs $23,000 per core, or $11,500 with the same factor applied. So with 96 cores total in the rack, the Exadata X2-2 will cost $4.47m at list price.
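The licence arithmetic above is easy to check. This is just a sketch restating the figures quoted in the story (the 12-disk-per-node count is what the $120,000 per-node figure implies; variable names are ours):

```python
# Oracle Exadata X2-2 list-price arithmetic, using the figures quoted above.
DISK_LICENCE = 10_000       # Exadata storage software, per disk drive
DISKS_PER_NODE = 12         # implied by the $120,000 per-node figure
STORAGE_NODES = 14          # storage nodes in a full rack

per_node_sw = DISK_LICENCE * DISKS_PER_NODE      # $120,000 per storage node
rack_storage_sw = per_node_sw * STORAGE_NODES    # $1.68m across the rack

# Per-core database prices after Oracle's 0.5 core scaling factor
EE_PER_CORE = 47_500 * 0.5    # 11g Enterprise Edition: $23,750
RAC_PER_CORE = 23_000 * 0.5   # RAC: $11,500
```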
The Exadata X2-8 takes out the eight two-socket servers in the X2-2 full-rack configuration and replaces them with two eight-socket Sun Fire X4800 servers, announced in June 2010. These database nodes are equipped with Intel's Xeon X7560 processors, which have eight cores running at 2.26GHz.
The X4800s are configured with 1TB of main memory each, more than ten times as much main memory as in the X2-2 database nodes. The X4800 servers have eight 300GB SAS disks spinning at 10K RPM and eight QDR InfiniBand ports to link to the switches that hook into the Exadata storage nodes and to other nodes in the cluster.
Oracle is careful not to give performance benchmarks that compare the X2-2 to the X2-8, but obviously the feeds and speeds of the Exadata storage are the same. The X2-2 has 96 cores, compared with 128 for the X2-8, but the Xeon 5600 cores run faster than the Xeon 7500 cores.
But you have to take into account the overhead of using RAC. A two-node cluster should have less overhead than an eight-node cluster, and with 1TB of memory, you might be able to get an entire database in one node. In any event, you can glue as many as eight X2-8 racks together, and Oracle Linux and Solaris 11 Express are options for the database servers.
What we do know is that a rack of Exadata X2-8 costs $1.65m, 50 per cent more than a rack of the X2-2 machines. The 11g and RAC stack costs $2.26m across those 128 cores, which is 33 per cent more than the software on the X2-2 machines. The Exadata storage software costs the same.
All told, an X2-8 costs just a little more than $5.59m at list price, 25 per cent more than a configured rack of the X2-2 systems.
While Oracle is keen on pitching the Exadata machines as being suitable for either OLTP or data warehousing workloads, it is fair to assume that those running OLTP jobs will prefer the fatter database nodes, if only because they more closely resemble the fat SMP nodes most customers are used to.
No matter what nodes customers want, Oracle has been open about pricing and offers configurations such as two database servers and three Exadata storage servers to let companies start out small.
IBM embraces diversity
Unlike Oracle, which is pitching the Exadata machines as suitable for transaction processing or data warehousing, IBM has different machines for different purposes – and would very likely argue that its big System z and Power 795 SMP servers are better in many cases for OLTP than Oracle's Exadata clusters.
IBM does have a parallel implementation of its DB2 database for AIX, called PureScale, and has sold Parallel Sysplex for transaction processing on up to 32 mainframes in a cluster since 1994. It has sold a parallel system clustering technology for its AS/400 systems and DB2/400 database called DB2 Multisystem since 1995.
And in September 2011, IBM combined PureScale with highly tuned WebSphere middleware to create yet another variant of the parallel database called WebSphere Transaction Cluster Facility. This is aimed at the very intensive transaction processing environments – think reservation systems and financial processing systems – that used to be the domain of IBM's z/Transaction Processing Facility environment for mainframes.
IBM doesn't just do data warehousing, either. There are x86-based Netezza appliances with hardware-accelerated data chewing (akin to Oracle's Exadata), as well as the Smart Analytics System range, which are tuned versions of x86, Power or mainframe servers with InfoSphere Warehouse and Cognos analytics software all set up and pre-tuned for the machines.
DB2 PureScale is a feature of IBM's database, not a separate line of machines like Oracle's; it is available only on Power Systems at the moment – and only running AIX.
There was talk in October 2009, when PureScale was announced as Oracle was Sunning up its Exadata clusters, that PureScale would be ported to Windows and Linux systems, but this has not happened. There is no way IBM will offer DB2 PureScale clustering on HP-UX or Solaris, although there is no technical reason why it couldn't.
Like Exadata clusters from Oracle, DB2 PureScale makes use of InfiniBand clustering to link multiple server nodes equipped with AIX and DB2 together.
The PureScale setup has a designated database access node, which functions like the head node in a parallel supercomputing cluster. It manages the locking of database fields as transactions are processed and the locking and unlocking of memory in all of the nodes in the cluster as they seek information from each other as part of the OLTP cranking.
The nodes are linked fairly tightly using the Remote Direct Memory Access features of InfiniBand, which cuts the processors out of the networking stack, unlike TCP/IP-based clustering techniques. The central caching server is mirrored so it is not a single point of failure.
IBM says this PureScale approach cuts down on the intra-node communications that normally happen in a parallel database implementation. Also speeding up intra-node communications is the fact that PureScale makes use of the 12X remote I/O port on Power processors. This 12X I/O port is a variant of InfiniBand that IBM has tweaked to attach remote I/O drawers crammed with disk controllers and disks or SSDs.