Hybrid storage arrays versus all-flash: will disks hold their own?

How to match your storage to your needs

In the early days of smartphones, some had hard disks in them – tiny devices storing a gigabyte or two on a single one-inch (or smaller) disk platter.

This was mainly because flash was expensive and untrusted, whereas people knew where they were with hard disks. We all know how it turned out, though. Flash memory grew cheaper, plus it was more compact and had no moving parts, and before long the miniature hard disk had become an evolutionary dead-end.

Many predict that the same thing is about to happen in the data centre, as all-flash storage arrays shoulder aside the “spinning rust” that has ruled the roost for so long.

They think the process might take longer than it did for phones, given the huge installed base, but that the economics of performance and power consumption make flash's ascendancy inevitable, for tier-one storage at least.

“For applications that need predictable performance, the industry will move to flash,” says Jay Prassl, marketing vice president at flash array specialist SolidFire.

“Firstly as the cost comes down and makes it more affordable, and secondly as legacy technology starts to amortise, for example if you're already looking to spin down your EMC systems.”

However, other data centre users are more cautious – or perhaps more realistic. For many, a tier-zero of flash is ideal, but all-flash solid state arrays are a step too far. To them, not only will disk remain the medium of choice for near-line data, but its performance characteristics will also make it a better choice than flash for many tier-one applications.

Built to last

So if you are looking at what kind of arrays to buy next, how do you choose between all-flash and hybrid? Can we do without spinning disks yet?

“In a word, no, unless you have a specific use case that only requires performance and not medium- to long-term data retention,” says Richard Blanford, managing director of IT integrator Fordway, which has implemented different array technologies for customers including all-flash systems.

“For most organisations, the best solution is a hybrid array or an intelligent file system overlaid over separate components. As with most things the Pareto principle applies. A good ratio for most organisations today will be something like 20 per cent flash capacity for active data and 80 per cent disk for longer-term data retention, backup and archiving.”
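That 20/80 rule of thumb is easy to sanity-check on the back of an envelope. The sketch below compares the media cost of an all-flash array with a 20/80 hybrid split; the per-gigabyte prices are illustrative assumptions for the sake of the comparison, not vendor quotes.

```python
# Illustrative cost comparison of an all-flash array versus a 20/80
# hybrid split, using hypothetical per-gigabyte prices.

def array_cost(total_gb, flash_fraction, flash_per_gb, disk_per_gb):
    """Return the media cost of an array with the given flash/disk split."""
    flash_gb = total_gb * flash_fraction
    disk_gb = total_gb - flash_gb
    return flash_gb * flash_per_gb + disk_gb * disk_per_gb

TOTAL_GB = 100_000      # 100 TB usable capacity
FLASH_PER_GB = 0.50     # assumed $/GB for enterprise flash
DISK_PER_GB = 0.05      # assumed $/GB for nearline disk

all_flash = array_cost(TOTAL_GB, 1.0, FLASH_PER_GB, DISK_PER_GB)
hybrid = array_cost(TOTAL_GB, 0.2, FLASH_PER_GB, DISK_PER_GB)

print(f"all-flash:    ${all_flash:,.0f}")  # $50,000
print(f"20/80 hybrid: ${hybrid:,.0f}")     # $14,000
```

The exact prices will date quickly, but the shape of the result is the point: as long as disk stays several times cheaper per gigabyte, pushing the inactive 80 per cent onto it dominates the bill.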

Similarly, storage developer X-IO Technologies recently introduced an all-flash array which received top marks for its price-performance from standards body the Storage Performance Council. Yet its strategy and comms veep Gavin McLaughlin agrees that disk is far from dead.

“We are seeing a lot of people using a sledgehammer to crack a nut and going all-flash, but hard disk can actually be better for some workloads, for example media streaming,” he says.

“Flash has been marketed as a solution, as has de-duplication, but it's not appropriate in all cases. For instance, we are finding in projects where the discussion is hybrid versus flash that the issue is latency. You can get the same IOPS from hybrid.

“I get frustrated with people saying 'Flash is the future'. Hang on, I haven't told you my workload yet! I would recommend different products for different applications, for example VDI [virtual desktop infrastructure] is a classic use for hybrid, where you have real I/O requirements.

“If it's Microsoft Exchange or media streaming, hard disks are great because they're stable and they have the capacity. But for a real-time or trading application, say, flash is the way to go.”

Hard decisions

It makes a lot of sense to retain hard disk, then – after all, it is a mature technology that continues to evolve – for a range of applications. That could be anything that does not need the performance of flash or involves cold data, for example backups, snapshots, near-line archives, or writing tiny appends to a logfile, a job to which flash would normally be woefully unsuited.

However, flash capacities continue to grow and costs continue to fall. Also, every operating system that matters now knows how to mitigate flash's peculiarities, such as the way the individual cells wear out if they are repeatedly written to, and the phenomenon of write amplification which arises from the way flash must be erased in entire blocks, not cell by cell.
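Write amplification falls out of that geometry: pages can be programmed individually, but erasure happens only a block at a time. A deliberately simplistic worst-case model makes the mechanism visible (the page and block sizes are assumptions, and real controllers soften the blow with garbage collection and over-provisioning):

```python
# Toy worst-case model of write amplification. Flash is programmed in
# pages but erased in whole blocks, so updating one page in place can
# force the controller to rewrite every page in that block.

PAGE_KB = 4            # assumed page size
PAGES_PER_BLOCK = 64   # assumed erase block: 64 * 4 KB = 256 KB

def worst_case_write_amplification():
    """Physical data written divided by host data written when a single
    page update forces a full block rewrite."""
    host_kb = PAGE_KB
    physical_kb = PAGE_KB * PAGES_PER_BLOCK
    return physical_kb / host_kb

print(worst_case_write_amplification())  # 64.0
```

Every unit of amplification is extra wear on cells with a finite write life, which is why small random writes are the pattern flash controllers work hardest to hide.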

So the hybrid advantage will undoubtedly shrink over time. In particular, hard disks face the problem that while capacity continues to grow, with multi-terabyte drives now readily available, each drive is still a single spindle with a single data path on and off the platters.

Even if we double the data rate by adding extra read/write heads, we will still have the issues of bandwidth, latency – the time we must wait for our desired sector to spin under the head – and coping with failures.
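That rotational latency follows directly from spindle speed: on average, the sector you want is half a revolution away when the request arrives. A quick sketch:

```python
# Average rotational latency: the desired sector is, on average, half a
# revolution away from the read/write head.

def avg_rotational_latency_ms(rpm):
    """Average wait for a sector to pass under the head, in milliseconds."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

for rpm in (7_200, 10_000, 15_000):
    print(f"{rpm} rpm: {avg_rotational_latency_ms(rpm):.2f} ms")
# 7,200 rpm gives about 4.17 ms; even 15,000 rpm only gets to 2.00 ms,
# versus tens of microseconds for a flash read.
```

No amount of clever caching changes this floor for a cache miss, which is why latency-sensitive workloads are the clearest case for flash.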

(Of course we will stripe or mirror our data, but if we do lose a drive, we now have far more to rebuild than we used to. There are clever proprietary technologies to speed the process but the data still has to be moved one way or another, and the chance of hitting another error before you finish the job gets ever larger as disk capacity increases.)
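The growing rebuild risk can be quantified with a simple independence model. Assuming the commonly quoted nearline specification of one unrecoverable read error (URE) per 10^15 bits read (an assumption for illustration; real drives and RAID error handling vary), the chance of hitting at least one URE while reading back the surviving data scales with capacity:

```python
# Back-of-envelope rebuild risk: probability of at least one
# unrecoverable read error (URE) while reading a drive's worth of data,
# assuming an error rate of one URE per 1e15 bits read and independent
# errors. Real drives and arrays behave better or worse than this.

URE_PER_BIT = 1e-15  # assumed spec sheet figure

def rebuild_failure_probability(terabytes_read):
    """Probability of hitting one or more UREs over the given read volume."""
    bits = terabytes_read * 8e12  # 1 TB = 8e12 bits
    return 1 - (1 - URE_PER_BIT) ** bits

for tb in (1, 4, 10):
    p = rebuild_failure_probability(tb)
    print(f"{tb} TB rebuild: {p:.1%} chance of a URE")
```

Under these assumptions the risk is well under one per cent for a 1 TB rebuild but grows steadily with drive size, which is the arithmetic behind the move to double-parity and erasure-coded layouts.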

There is also the power issue. Leaving aside specialist near-line arrays that spin down drives when they are not being accessed, hard disks consume a lot of power, much of which they then emit as heat. For example, HP claims that by using solid-state disks (SSDs) instead of spinning ones, its all-flash 3PAR StoreServ array uses 76 per cent less power to deliver five times better latency.

At this level of performance, flash also lets you deliver more bandwidth with fewer “drives”, because you are no longer limited by the speed that the head can read stuff off a spinning platter. (Before flash came along, this meant you might need absurd numbers of mostly empty hard disks in an array, each one there just to add a few more IOPS to the overall array performance.)
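The "absurd numbers of mostly empty hard disks" point is just division. Assuming, for illustration, around 200 random IOPS from a fast hard disk and tens of thousands from a single enterprise SSD (both figures are rough assumptions, not benchmark results):

```python
# How many drives are needed to aggregate a target number of random IOPS,
# when the media, not capacity, is the bottleneck.
import math

def drives_needed(target_iops, iops_per_drive):
    """Minimum number of drives to reach target_iops by aggregation."""
    return math.ceil(target_iops / iops_per_drive)

TARGET = 50_000                      # hypothetical array requirement
print(drives_needed(TARGET, 200))     # 250 hard disks, mostly empty
print(drives_needed(TARGET, 75_000))  # 1 SSD
```

Capacity per drive is irrelevant in this calculation, which is exactly why IOPS-bound arrays used to ship with far more spindles than the data required.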

Conversely, flash presents a challenge because not all SSDs are equal. There are several different flash technologies, and most are also available in different grades.

At the high end, a hyperconverged appliance might incorporate PCIe SSDs containing enterprise-grade single-level cell (SLC) flash, which is faster and has better endurance than the higher-capacity multi-level cell (MLC) flash used in consumer applications, but rather more expensive. Then again, there is also enterprise-grade MLC, designed for lower error rates than regular MLC, plus even higher-capacity but slower triple-level cell (TLC).

Not surprisingly, the current advice from SNIA luminaries is that not only should you look to hybrid arrays as the new mainstream, but you could also plan to use different SSD types for different needs, just as you probably do now with hard disks. For example, it might be PCIe SSD for write-intensive work because it has the best write life, and cheaper SATA SSD for read-mostly data.

