Exploring Flash beyond the performance
Pricing out the IOPS
What really matters in choosing an all-flash array is a wider view of performance and price than a simple look at I/Os per second (IOPS). Handily, this information is available in the Storage Performance Council (SPC) benchmarks, which provide solid guidance on choosing a flash array vendor.
The SPC-1 benchmark looks at storage-array performance, the response time at varying percentages of workload and the price/performance. Each system's configuration is fully described and priced, with relevant software and hardware settings details included so that users can be sure the performance is reproducible.
A look at the benchmark results over time shows how performance has evolved from the era when a disk array’s IOPS and response time were gated by spindle speeds and seek times to the remarkably improved response time and IOPS of the all-flash array.
The numbers are impressive: a $2.37m Huawei array, for example, achieved more than three million IOPS at a cost of $0.79/IOPS, while a $1.5m NetApp array scored 2.4 million IOPS at $0.62/IOPS. However, these are not affordable systems for typical enterprise businesses wanting primary data storage.
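The price/performance figures follow directly from dividing the system's list price by its benchmark result. A quick sketch using the figures quoted above:

```python
def cost_per_iops(price_usd: float, iops: float) -> float:
    """Dollars per SPC-1 IOPS: system list price divided by benchmark IOPS."""
    return price_usd / iops

# Figures quoted above (prices in US dollars)
print(round(cost_per_iops(2_370_000, 3_000_000), 2))  # Huawei: 0.79
print(round(cost_per_iops(1_500_000, 2_400_000), 2))  # NetApp: 0.62
```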
If we set a bar of $250,000 for all-flash array cost, then the picture changes. The best-ranked systems are achieving more than 500,000 SPC-1 IOPS – levels that were unheard of in the days of the disk array when 250,000 IOPS was a hero number.
An HP StoreServ 8450, costing $126,558.24, was rated at 545,164.29 IOPS in March 2016. A Fujitsu Eternus AF650 S2 all-flash array scored 620,153 SPC-1 IOPS in January this year.
A NetApp E-Series EF570 achieved 500,022 IOPS in September 2017. The chart above ranks classic storage array systems like these on the SPC-1 benchmark by their raw IOPS numbers.
What is also remarkable is the improvement in price, with no loss of raw performance, as flash technology has matured. Flash started out as two-dimensional, or planar, NAND with 1bit/cell SLC technology. Capacity doubled with 2bits/cell MLC technology, albeit with cell access speed and endurance not as good as SLC flash.
Speed and endurance worsened again as cell capacity increased with 3bits/cell (TLC) flash. Each capacity increase lowered the cost of flash. Advances in controller technology, flash management software and simple over-provisioning countered the performance disadvantages of MLC over SLC flash and TLC over MLC, so that solid-state drive-level performance generally increased.
In the past three or four years three-dimensional flash has arrived, with flash chips gaining multiple layers to increase capacity further. Drives first used 32-layer chips; 64-layer parts are now being introduced, with 96 layers in prospect. The drive-level capacity increases have been dramatic: the 8TB and even 15TB drives now available offer a great advance on earlier sub-1TB drives.
We can see the effect of this capacity improvement reflected in the SPC-1 price/performance data. Here is a chart ranking many storage arrays, including the hero-number systems, by the dollar cost of their SPC-1 IOPS.
The improvement has been huge. In February 2012, almost six years ago, an Eternus DX80 S2 produced 34,995.02 SPC-1 IOPS at a cost of $2.25/IOPS. The numbers were good for that time.
Today, an all-flash Eternus, the AF250 S2, leads the price/performance ranking with a $0.10/SPC-1 IOPS cost, more than 22 times better.
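That improvement factor is simply the ratio of the two quoted $/IOPS figures:

```python
# Price/performance improvement quoted above: 2012 DX80 S2 vs today's AF250 S2
old_cost = 2.25   # $/SPC-1 IOPS, Eternus DX80 S2 (February 2012)
new_cost = 0.10   # $/SPC-1 IOPS, Eternus AF250 S2 (today)
print(round(old_cost / new_cost, 1))  # 22.5 -- "more than 22 times better"
```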
The greater affordability of flash means that SSD-based arrays are preferred over HDD arrays for the primary data storage used in mission-critical business operations.
Enterprise use is soaring, as fast 15,000rpm disk drives are replaced by SSDs with higher capacity.
Seagate’s Enterprise Performance 2.5-inch disk drive, for example, holds 300GB, 600GB or 900GB of data and has a 215-315MBps transfer rate.
A modern SSD is almost laughably faster. A Samsung PM1633a, also in a 2.5-inch form factor, holds up to 15.36TB of data and has sequential read/write bandwidth of up to 1.3/1.35GBps through its 12Gbit/s SAS interface. Its latency is in the region of 100 microseconds, compared with the disk drive’s millisecond rating.
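Putting the quoted figures into ratio form makes the gap concrete; a rough back-of-envelope comparison (latencies are the approximate ratings mentioned above):

```python
# Back-of-envelope ratios from the figures quoted above
hdd_latency_ms, ssd_latency_ms = 1.0, 0.1        # ~1ms HDD vs ~100 microsecond SSD
hdd_capacity_tb, ssd_capacity_tb = 0.9, 15.36    # top Seagate model vs PM1633a

print(round(hdd_latency_ms / ssd_latency_ms))    # ~10x lower access latency
print(round(ssd_capacity_tb / hdd_capacity_tb))  # ~17x the per-drive capacity
```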
Even with small-capacity SSDs, the use of deduplication – removing redundant repeated strings of data bits – can increase effective capacity by five times or more. An example can be seen in Fujitsu’s 5U Eternus AF650, whose up to 192 SSDs provide 2,949TB of raw capacity, or 14,745TB effective, assuming a 5:1 data reduction ratio.
Another example is the Eternus AF250, which has up to 48 SSDs in its 2U enclosure, giving 737TB raw/3,686TB effective capacity.
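The effective-capacity figures are just raw capacity multiplied by the assumed reduction ratio; a minimal sketch (the small discrepancy on the AF250 comes from the quoted raw figure being rounded):

```python
def effective_tb(raw_tb: float, reduction_ratio: float = 5.0) -> float:
    """Effective capacity after data reduction at the given ratio (e.g. 5:1)."""
    return raw_tb * reduction_ratio

print(effective_tb(2_949))  # AF650: 14745.0 TB effective
print(effective_tb(737))    # AF250: 3685.0 TB, vs the 3,686TB quoted
```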
This translates to smaller storage arrays. The number of rack units needed to store data can be reduced three, four or more times by moving to all-flash arrays.
Deduplication would slow data access on disk-drive arrays, so if the 2U AF250 were replaced by disk drives of the same effective capacity it would need 767 Seagate 600GB Enterprise Performance drives.
As well as taking up more space, a disk drive array needs more power than a flash array because the drives have to be spun and the read/write heads moved, which also means the disk drive array needs more cooling.
Assuming a storage array is not replaced for five years, it will probably be expanded during that period. Every extra drive array drawer will take up more space than the equivalent flash array drawer and will need more power and cooling.
The use of enterprise SSDs is expected to soar. In IDC’s MarketScape Worldwide All-Flash Array 2017 Vendor Assessment report, Eric Burgener, the firm’s research director for storage, states: “All-flash arrays are dominating primary storage spend in the enterprise, driving over 80 per cent of that revenue in 2017.”
Today’s all-flash arrays offer an unbeatable mix of raw performance, capacity and lifetime operational costs for primary storage. Where disk drive arrays score is in bulk capacity storage, the secondary or nearline storage use cases where data access performance is not as important.
All-flash arrays are less effective in that area – for now. Who knows what might happen in a few years’ time with 96-layer 3D NAND and beyond, twinned with QLC (4bits/cell) technology?
This article was supported by: Fujitsu