IBM wins benchmark crown with SSD power
Another tick in STEC's box
IBM has blasted the SPC-1 benchmark into the stratosphere with a flash-equipped POWER6 server.
The Power 595 server, using 48 cores and 84 STEC flash drives, achieved 300,993.85 SPC-1 IOPS (pdf). The total system cost was $3.2m, working out at $10.77 per SPC-1 IOPS.
IBM held a previous record with its SAN Volume Controller (SVC), fitted with 1,534 hard disk drives (HDDs), achieving 274,997.55 SPC-1 IOPS.
The SVC uses System X servers for its processing hardware.
The Storage Performance Council's SPC-1 benchmark is a single application with mainly random reads and writes applied to a storage subsystem. It aims to indicate the performance you could expect from that subsystem used for OLTP, database, and mail server-type applications.
Typical high-end array benchmarks have been at the 200,000-220,000 SPC-1 IOPS level. IBM's SVC benchmark raised that to the 275K IOPS level, and now its Power 595 server, using only 84 STEC ZeusIOPS solid state drives (SSDs), has pushed it past the 300,000 mark.
This is higher than the 291,208.58 SPC-1 IOPS recorded by Texas Memory Systems' RamSan-400, a DRAM-based solid state drive. The per-SPC-1 IOPS cost for that result was a piffling $0.67 which compares with the much more expensive Power 595 IOPS cost of $10.77.
The SSD portion of the Power 595 system cost was an estimated $722,000, $12,448 per drive, though actual customers would typically get a discount on this of maybe 30 to 40 per cent. It's still not cheap, and there are enough dollars in there to give STEC a satisfying price per drive.
The STEC drives had a raw capacity of 128GB. IBM configured them as 69GB with the remainder presumably used for wear-levelling. They were SAS-connected with six per enclosure and a RAID adapter per enclosure with a PCI-X to SAS bridge involved. Total usable SSD capacity was 5.8TB across 14 enclosures.
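The drive configuration figures above hang together, which a few lines of arithmetic can confirm. This is a minimal sketch using only numbers quoted in the article (84 drives, 128GB raw, 69GB formatted, six drives per enclosure); nothing here is measured independently.

```python
# Back-of-the-envelope check of the quoted drive configuration.
RAW_GB = 128              # raw flash per STEC drive
USABLE_GB = 69            # capacity as configured by IBM
DRIVES = 84
DRIVES_PER_ENCLOSURE = 6

enclosures = DRIVES // DRIVES_PER_ENCLOSURE
total_usable_tb = DRIVES * USABLE_GB / 1000
overprovision = 1 - USABLE_GB / RAW_GB

print(enclosures)                   # 14 enclosures, as the article states
print(round(total_usable_tb, 1))    # 5.8 TB usable, matching the article
print(round(overprovision * 100))   # ~46% of raw flash held back, presumably for wear-levelling
```

The roughly 46 per cent of raw capacity held in reserve is consistent with the article's presumption that the headroom is there for wear-levelling.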
The Power 595 server had 64 cores on 32 chips but 16 cores were left unused by the PowerVM and AIX software.
Where does this leave us? The Power 595 server equipped in this fashion is an expensive box but it blows away the hard drive competition. In effect, the era of short-stroking disk drives, as with the SVC's use of 1,534 drives in the benchmark above, is over. EMC put the first nail in that coffin with its use of STEC drives in Symmetrix and this latest IBM SPC-1 result hammers another great nail in the short-stroking coffin lid. As SSD prices come down and energy-related cost issues rise up, using the SSD route to break out of a storage IOPS bottleneck will become commonplace.
Another impressive SVC IOPS feat was the QuickSilver project with one million IOPS achieved by using 40 Fusion-io ioDrive SSDs. Why hasn't IBM used such a configuration in the SPC-1 benchmark? QuickSilver was a demonstration and didn't use a sellable product. We're expecting SVC announcements soon though.
This STEC-equipped Power 595 benchmark is also a reminder that the enterprise SSD story is about more than Fusion-io. Fusion-io may have grabbed a lot of media airtime with its QuickSilver demo, the similar HP one, and the recruiting of Steve Wozniak, but the SSD business being won today is the flash drive business. Fusion-io has the media flash, but does it have the drive? ®
STEC Press Release; investors fooled by IOPS malarkey?
Splashed all over the financial news sites last week was a STEC Press Release that says:
"The integration in IBM's Power 595 system, which deploys six STEC ZeusIOPS Solid State Drives (SSDs) within each expansion drawer, achieves an unprecedented 300,993.85 SPC-1 IOPS".
Ok...investors read this and see six STEC SSDs doing 301K IOPS -- roughly 50,000 IOPS per SSD. This aligns quite well (and just oh-so-conveniently) with the numbers STEC states in its SEC filings: 80,000 IOPS (read) and 40,000 IOPS (write).
Naturally, even sophisticated investors conclude that STEC's performance claims vs. HDD are now validated. Therefore STEC SSDs really ARE worth $13,000 each, because $13K/50K IOPS equals a measly 26 pennies per IOP. Since even the best cost/IOP HDDs ring in at somewhere around $1/IOP, STEC's entire market premise is confirmed. STEC investors keep their STEC shares, and probably buy more.
-- Problem is, it required EIGHTY-FOUR SSDs in the test, not six, to get 300K IOPS.
-- Problem is, each STEC SSD only does about 3,583 IOPS, roughly 1/14th of the claimed IOPS.
-- Problem is, STEC cost per IOP is $13.5K/3.58K ≈ $3.77/IOP -- not better than HDD, but many times WORSE than HDD.
Of course, if a STEC investor saw THESE numbers, they might conclude that they should dump STEC now.
Funny thing: STEC's cofounders, the CEO and COO, dumped a quarter-billion dollars' worth of their STEC shares in August.
Chris Mellor's question and Golden STEC-Eggs
Mr. Mellor opined, and asked:
"Another impressive SVC IOPS feat was the QuickSilver project with one million IOPS achieved by using 40 Fusion-io ioDrive SSDs. Why hasn't IBM used such a configuration in the SPC-1 benchmark?"
Simple...IBM's and EMC's resale profits on the STEC SSD are roughly 5x higher than on the Fusion-io.
Going with Fusion-IO would be like killing the Goose that Laid The Golden STEC-Egg.
How Golden? Well...80GB of SLC flash costs $320.00 (you need to double this to cover write amplification) and the controller BOM cost (STEC or Fusion-io) is in the $200 range. Total BOM cost for either is under $1,000. Meanwhile, the 80GB Fusion-io drives in HP's TPC-H cost $3,000, while the 69GB STEC in IBM's SPC-1 costs $13,500.
Simple math: with STEC, there is more than $10,000 of incremental profit margin to split with IBM and EMC.
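The "Golden Egg" claim is straightforward to lay out as arithmetic. A minimal sketch, using only the commenter's own estimates ($320 per 80GB of SLC, a $200 controller BOM, $13,500 STEC and $3,000 Fusion-io street prices); none of these are disclosed vendor costs.

```python
# The commenter's BOM-versus-price margin sketch.
flash_cost = 320 * 2      # 80 GB of SLC at $320, doubled for write amplification
controller_cost = 200     # rough controller BOM, per the comment
bom = flash_cost + controller_cost

stec_price = 13_500       # STEC drive price quoted in the comment
fusion_price = 3_000      # 80 GB Fusion-io price cited from HP's TPC-H

print(bom)                                            # 840 -> "under $1,000"
print(stec_price - bom)                               # 12660 -> margin to split with IBM/EMC
print(round((stec_price - bom) / (fusion_price - bom), 1))  # 5.9 -> the "roughly 5x" profit gap
```

On these assumptions the STEC margin pool is indeed north of $10,000 per drive, and the margin ratio lands near the "roughly 5x" figure quoted above.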
STEC's price/performance ratios are now shown (in multiple audited benchmarks) to be absolute crap compared to spinning rust. Therefore, STEC's only workable value proposition is its ability to flood the coffers of storage vendors with SSD hype-cycle cash.
In this respect, Fusion-io just can't compete.
Well, I don't think you understood what this benchmark is all about, and that is not meant in a negative way :). I must admit I didn't get it at first either; I was like "what the f***", but after having a look at the FDR I got it.
This is not a storage benchmark. Basically, the SSDs are kind of irrelevant.
This is a benchmark that shows how many IOs per second the Virtual IO Servers, which you use to virtualize disks inside a Power server, can pull through.
So if you, for example, compare this benchmark to the IBM DS5020 benchmark (I use IBM vs IBM so as not to bring any platform religion into this), the difference is that here, everything corresponding to the DS5020 control unit (the whole cabling, the switches and the HBAs on the servers that run the workload) is implemented in virtualization software inside the Power 595.
And seen in that light it's a pretty good result: server-virtualized storage can beat dedicated hardware, and do it with a 5ms response time.
The actual stack is physical SAS -> virtual server SCSI adapter -> virtual client SCSI adapter.
It could most likely be done faster with 'virtual SAN' NPIV.
I would very much have liked to see the CPU utilization and RAM usage of the VIO server during this benchmark run, as that is what is really interesting.
And as for the price... well, hey, the IBM dorks priced the whole machine, load generator and all, into the total. So the price listed here covers the storage plus a high-end server that would run your whole DB workload, not just the storage part as in all the other submissions.