Huawei: Half a million IOPS? Pah, we can do better
SPC-1 speedster now fastest ever all-flash array
Huawei has captured the SPC-1 crown for disk drive and flash arrays with a 600,000-plus IOPS result for its Dorado5100 all-flash array.
The SPC-1 benchmark tests how networked storage arrays serve data requests from servers in a business environment. IBM's Storwize V7000 headed the SPC-1 charts with a 520,043.99 IOPS result scored in January this year: the first time a system had breached the half-million IOPS level. Its average response time was 7.39ms and the cost/IOPS was $6.92. Its total system cost was $3,598,956.00 at list price.
SPC-1 Results table
Kaminario topped a million IOPS a few weeks ago but that was with an all-DRAM system, so it doesn't really count unless you need stratospheric performance. Back in the real world Huawei's OceanSpace Dorado5100 array achieved 600,052.40 SPC-1 IOPS with an average response time of 1.09ms and a cost/IOPS of $0.81. The total system list price was $488,617.00, meaning that an all-flash system costing less than half a million dollars was faster overall and faster to respond than a $3.6m IBM storage array based on disks.
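The cost/IOPS figures quoted for both systems are just total list price divided by the SPC-1 IOPS result; a quick sketch confirms the arithmetic (prices and IOPS numbers taken straight from the results above):

```python
# Price/performance as reported: total list price / SPC-1 IOPS result.
systems = {
    "IBM Storwize V7000": (3_598_956.00, 520_043.99),
    "Huawei Dorado5100":  (488_617.00, 600_052.40),
}

for name, (list_price, iops) in systems.items():
    print(f"{name}: ${list_price / iops:.2f}/IOPS")
# IBM Storwize V7000: $6.92/IOPS
# Huawei Dorado5100: $0.81/IOPS
```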
Now that IBM has bought TMS, which also features in the SPC-1 lists, as the chart shows, we can expect IBM results to perk up.
The Dorado5100 used mirrored flash, with a raw capacity of 19.2TB, and addressable capacity of 6.44TB. It had four flash enclosures, each with 24 x 200GB SSDs and a controller. Altogether there were two active-active controllers.
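The capacity figures above hang together: 96 x 200GB SSDs gives the 19.2TB raw number, mirroring halves that, and the array's own metadata and over-provisioning account for the drop to 6.44TB addressable. A minimal check, using only the figures in the disclosure:

```python
# Dorado5100 capacity figures from the SPC-1 disclosure:
# 4 enclosures x 24 SSDs x 200 GB each.
enclosures, ssds_per_enclosure, ssd_gb = 4, 24, 200

raw_tb = enclosures * ssds_per_enclosure * ssd_gb / 1000
mirrored_tb = raw_tb / 2  # mirroring halves usable capacity

print(f"raw: {raw_tb} TB, after mirroring: {mirrored_tb} TB")
# The 6.44 TB addressable figure is what remains of the 9.6 TB once
# array metadata and over-provisioning are taken out.
```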
Huawei, IBM and TMS are regulars in the SPC-1 benchmark tests, each overtaking the others in turn. Will IBM and TMS, now one organisation, find a chink in Huawei's armour and overtake it again?
A stray thought: with more suppliers – and mainstream suppliers at that – entering the all-flash array product space, how long must we wait for Gartner to produce an all-flash array Magic Quadrant diagram? Won't that be fun to look at? ®
Re: IBM SVC is the only real world system
Anyone looking for more than "One big storage box to rule them all" will look at a flash array for their transactional systems.
Also, as the dedupe tech gets better and we see tiers of SSD inside a single box (DRAM cache, a small amount of SLC for hot data and two- or three-bit-per-cell MLC for "bulk" storage), I think we will see all-flash arrays in more places.
Even 10 years ago the idea of 500+ disks in the same frame was thought impossible; now we have arrays that scale to 2,000 spindles and have three or four tiers with automatic data placement. What's to say what another 10 years will bring?
The SVC advertising is all very well, but what did SVC actually provide in the SPC configuration? It didn't provide RAID or any other storage-related services beyond acting as a cache. Basically, SVC's role in the SPC-1 benchmark was to act as a pass-through device making 16 separate mid-range arrays appear as a single array. Who in their right mind would actually deploy such a system (another lab queen)?
Re: I wonder where the difficulties lie
The difficulty is that scaling out with no guarantees of reliability is easy; scaling out while maintaining the reliability of a single-node system, and having performance scale with it, is orders of magnitude harder.
Pushing everything into the same box with a dedicated bus or interconnect, instead of a capacity-bottlenecked, high-latency network to shuffle data around, makes this much easier: you're effectively dealing with a single-node system with thousands of smaller subcomponents. That said, this strategy works really well with streaming data but tends to fall over when you have tens of thousands of smaller requests to process.