Oracle stuffs Mongolian clusters with Sparc T3s
From McNealy's sunset to Ellison's Sunrise
It's Sunrise at Oracle. But apparently, someone hit Larry Ellison's snooze alarm. The Oracle chief exec was 18 minutes late for the launch of two preconfigured Sparc/Solaris clusters that are part of a "Sunrise" reanimation of the Sparc platform.
Late or not, Ellison was eager about the prospects of the Sparc versions of the Exalogic cluster appliance running Solaris and a new general-purpose "Sunrise" SuperCluster for running either database or middleware on Solaris (or both at the same time). With the launch of the two machines, Oracle can peddle Solaris clusters right alongside its x64-Linux behemoths in an effort to steal away some business from IBM and Hewlett-Packard.
"For all of our competitors who have been enjoying their 'sunset' and 'sundown' programs, this is the end of that," said Ellison with a chuckle. "The Sunrise program is all about Sparc and it is all about Solaris."
Oracle CEO Larry Ellison: No more sunset jokes, please
The Sparc SuperCluster is a preconfigured machine built from three different basic server components. Two flavors of the SuperCluster are based on Oracle's Sparc T3 servers, announced in September and using Oracle's 16-core Sparc T3 processors, while another is based on the Fujitsu-developed and manufactured M5000 server, which uses Fujitsu's Sparc64 family of chips.
The first SuperCluster configuration, which Oracle used on a TPC-C benchmark test (more on that in a separate story), is based on four two-socket Sparc T3-2 servers. Each T3-2 is configured with two 1.65 GHz, 16-core Sparc T3 chips, 128 GB of DDR3 main memory, two 300 GB 10K RPM disks, and four 96 GB PCI-Express flash memory accelerator cards. (You can see the spec sheet here.) The resulting compute portion of the SuperCluster has 512 GB of total memory, 128 cores, and 1,024 threads, with 96 TB of disk capacity.
According to Ellison's presentation, Oracle will offer a SuperCluster setup based on four-socket Sparc T3-4 servers with a total of a dozen processors (exactly how this is done is not clear, unless one socket is empty in each box), giving 192 cores and 1,536 threads, backed by 1.5 TB of memory plus those flash accelerators and 144 TB of disk storage. The third option puts together a pair of M5000 midrange servers with 16 Sparc64-VII+ processors, which yields 64 cores and 128 threads, with 1 TB of main memory and 144 TB of disk.
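Those core and thread counts can be sanity-checked with a few lines of arithmetic. This is a sketch; the 16-cores-per-socket, 8-threads-per-core T3 topology and the quad-core, two-thread Sparc64-VII+ topology are the assumptions being made here:

```python
# Sanity-check the SuperCluster core and thread counts, assuming the
# Sparc T3's published topology: 16 cores per socket, 8 threads per core.
sockets = 12                      # the dozen T3 processors cited
t3_cores = sockets * 16           # 192 cores
t3_threads = t3_cores * 8         # 1,536 threads

# M5000 option: 16 Sparc64-VII+ chips, assumed 4 cores and 2 threads
# per core each.
m5000_cores = 16 * 4              # 64 cores
m5000_threads = m5000_cores * 2   # 128 threads

print(t3_cores, t3_threads, m5000_cores, m5000_threads)
```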
All three machines have 1.7 TB of write-optimized flash memory and 4 TB of read-optimized flash memory. The SuperClusters are not totally Sparc-based: they use the Xeon-based Sun Storage 7420 disk arrays running Solaris and ZFS for that disk storage. The compute and storage modules in the cluster are linked by Oracle's own 40 Gb/sec InfiniBand switches. The SuperCluster T3-2 machine comes configured with Solaris 10 9/10 and VM Server for Sparc 2.0, the virtualization stack for Sparc-based machines.
Oracle's Sparc SuperCluster T3-2
As is the case with the Exadata database and Exalogic Web application server appliances based on X64 iron and Oracle's Enterprise Linux operating system, multiple SuperCluster frames can be linked together to scale up the performance of the box. It is not yet clear how far they can scale. On the TPC-C transaction processing test that Oracle is bragging about today, a cluster of 27 Sparc T3-4 machines (108 Sparc T3 processors in all) with a total of 1,728 cores and 13.5 TB of main memory, 246 TB of flash, and 1.37 petabytes of disk storage was able to handle more than 30 million transactions per minute with an average response time of a half-second on those transactions.
Ellison was at pains to explain that the SuperCluster machines are aimed at customers who want to run non-Oracle database, middleware, or application software, and that they are also designed to support database and middleware workloads at the same time as application code.
Oracle also pre-announced a Sparc version of the Exalogic Web application server cluster, which it intends to ship in the first quarter of 2011. The Exalogic Elastic Cloud based on X64 iron and Linux debuted back in September at Oracle OpenWorld.
The Sparc Exalogic Elastic Cloud is based on the Sparc T3-1B blade server. This cluster has 30 of those blades, with a total of 480 cores spinning at 1.6 GHz, for running Oracle's middleware stack. Those blades have an aggregate of 3.8 TB of memory and 960 GB of mirrored solid state disks. The Sparc Exalogic system has 40 TB of clustered disk storage, plus 4 TB of read cache and 72 GB of write cache, plus a 40 Gb/sec InfiniBand fabric linking it all together.
As El Reg goes to press, there is no spec sheet on this Sparc Exalogic machine yet, but Ellison said it was "far and away the fastest Java machine in the world" and that it was highly tuned for running Java applications, unlike the more generic Sparc SuperClusters. That seems to imply that it is running Solaris 11 Express, the development release for next year's Solaris 11 production release. Solaris 11 Express was delivered two weeks ago.
But maybe not. Ellison said of the production-grade Solaris 11 that Oracle is benchmarking the forthcoming operating system and "general availability is not so far away."
In addition to the new clustered systems, Ellison announced a new gold level of service for the Exadata data warehousing and OLTP appliances based on Linux and X64 iron, on the SuperClusters based on the Sparc and Solaris combo, and the two flavors of the Exalogic Elastic Cloud middleware clusters (X64-Linux and Sparc-Solaris). With gold level service, customers can buy the exact configurations that Oracle employs in its labs to do regression tests as it patches microcode, Solaris, and the higher parts of the software stack.
The idea is to give an even higher level of assurance that the patches Oracle is creating for its software won't wreck your systems because they didn't wreck Oracle's. Presumably Oracle will be able to charge a premium for the gold-level support, either in a support contract or in a higher initial sticker price.
Oracle did not announce pricing for the SuperCluster T3-2 machine at the launch event today, but I'll be able to pull it apart from the TPC-C benchmark test results. Pricing for the Sparc-Solaris version of the Exalogic Elastic Cloud will not be available until it ships early next year. ®
Do it then. The only reason no one did it in the past was that it was not easy. Oracle and Sun are making it "easy" with these prebuilt solutions.
The naysayers who kept saying that RAC could not scale are now realizing how wrong they were. IBM will not be able to beat this with any of its own tech. They will have to use Oracle.
Cheaper, faster, and more resilient than IBM. Who could ask for more?
BTW, where is HP in this discussion? Oh yeah, they gave up on the high-end of the market...
tpmc/core: 17,506 vs 19,913
Hmmm, 20,000 tpmc/core and a 0.25 license factor per core...
That's 80,000 tpmc per Oracle Enterprise license!!!
Nicely done! IBM POWER did 80,000 tpmcs per license about 3 or 4 years ago.
Oracle is catching up very very fast!!! :)
And so the benchmark war continues
First, congrats to Oracle on a well done benchmark; they have retaken the clustered TPC-C benchmark throne!
And it's actually quite a feat to get RAC to scale to 27 nodes. I look forward to seeing what they did.
But it looks like Oracle is up to its usual software license tricks: it's again only leased software for three years with web support only, and you do not actually buy the licenses.
Now the prices are (license / annual support):

Oracle         47,500   10,450
RAC            23,000    5,060
Partitioning   11,500    2,530
With 1,728 cores, each triggering a 0.25 license (the copy of the Oracle licensing document that Firefox had cached on my HD actually didn't have an entry for the T3, so I kind of went "hmm"... but I found the updated one).
So the real prices would be 35,424,000 for licenses and 23,379,840 for three years of support, for a total of 58,803,840. Now that is quite a bit more than the 24M USD that is used when you lease the machines.
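The commenter's totals can be reproduced from the list prices quoted above. This is a sketch assuming the 0.25-per-core license factor the comment cites for the T3:

```python
# Reproduce the license math from the list prices quoted above,
# assuming a 0.25-per-core license factor for the Sparc T3.
cores = 1728
licenses = cores // 4                      # 0.25 factor -> 432 licenses

license_price = 47_500 + 23_000 + 11_500   # DB EE + RAC + Partitioning
support_price = 10_450 + 5_060 + 2_530     # annual support for the same

license_total = licenses * license_price       # 35,424,000
support_total = licenses * support_price * 3   # 23,379,840 over 3 years

print(license_total, support_total, license_total + support_total)
```

The total comes out at 58,803,840, matching the comment's figure.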
Also, this time Oracle seems to be able to give some fat discounts: 50 per cent, versus the 15 per cent used in the last submission.
Machine        T5440 T3    T5440 T2+
# machines     27          12
tpmc/machine   1,120,359   637,207
tpmc/core      17,506      19,913
tpmc/thread    2,188       2,489
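The per-core and per-thread figures above can be re-derived from the per-machine throughput. Note that the per-node topologies used below (64 cores/512 threads for the T3 nodes, 32 cores/256 threads for the T5440 T2+) are inferred from the ratios, not stated in the comment:

```python
# Re-derive the table's per-core and per-thread ratios from the
# per-machine throughput. Assumed per-node topologies (inferred,
# not stated in the comment):
#   T3 node:    64 cores / 512 threads (4 x 16-core T3, 8 threads/core)
#   T5440 T2+:  32 cores / 256 threads (4 x 8-core T2+, 8 threads/core)
results = {}
for name, tpmc, cores, threads in [
    ("T3",        1_120_359, 64, 512),
    ("T5440 T2+",   637_207, 32, 256),
]:
    results[name] = (round(tpmc / cores), round(tpmc / threads))
    print(name, results[name])
# T3 gives (17506, 2188) and T5440 T2+ gives (19913, 2489),
# matching the table's tpmc/core and tpmc/thread rows.
```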
Let the battle begin, cause IBM gotta respond to this one :)=