
Hybrid storage arrays versus all-flash: will disks hold their own?

How to match your storage to your needs

No more tiers

“You also need to ask about longevity,” says McLaughlin. “Some people are fine with consumer-grade flash and swapping the drives out every two years to get the cost benefits.

“But a cloud service provider might want storage that just sits in the corner at a co-lo data centre and needs next to no maintenance. In that case it's often hard disk, surprisingly enough, or else enterprise-grade flash.

“Remember that the media type is a tool, not necessarily a solution. People say they need a feature, such as RAID 6. We say 'Why? If we could solve that business requirement another way, do you still need RAID 6?'”

Prassl also concedes that hybrid is better in some cases, although he argues that this has more to do with the size of the organisation and its IT capabilities. He suggests that even when SSAs (solid-state arrays) are islands in a larger disk-based sea, their performance will outweigh the additional management overhead.

"The small and mid-size market won't settle for two types of storage”

“I really believe that tiered storage systems are strong in the mid-market, such as Nimble, and that's where all-flash will struggle because the small and mid-size market won't settle for two types of storage,” he says.

“But in the large environment, tiered systems don't make sense. It is more efficient and less of a management challenge to have two separate storage pools, individually optimised for capacity and performance.”

In addition, where you set the boundaries of hybrid storage can depend on the technology. Many of today's hybrid arrays were originally disk-based products which now have a tier of flash, but we also have systems that are optimised for flash and now have tiers of spinning disk to allow them to target a wider spread of applications. Needless to say, their performance characteristics can vary considerably.

One further consideration is that your flash tier does not have to go into the storage system. If you have the server skills to do it, you can also implement host-based flash using PCIe SSDs. This can be better than flash in an array for high-performance applications, especially if you can fit the whole application into server flash, but it does make the data protection aspect rather more complex to organise.

“Things are definitely changing,” says Blanford. “Today when we sell new consolidated storage we use flash for active data read/write, and as a result we’re seeing 10K [RPM] and 15K disks start to disappear.

“A few years ago we provided a public sector organisation with a storage solution comprising lots of 15K drives and automated tiering. They’re currently upgrading their storage, and in the upgrade the 15K disks are being replaced with flash as the primary storage, with SATA disk for secondary storage.

“So disk is now taking the role that used to be filled by tape for secondary storage and long-term dynamic archive. Tape still has a role for long-term static data archiving, but we’re seeing use of 7K and 5K SATA disks for passive data storage, with a push to increase their capacity and reduce power utilisation. In effect, with flash we’ve either inserted another tier of storage or replaced 15K disks.

“Cost is also an important factor. The cost per TB of a 4TB or 6TB disk drive is significantly less than the cost of flash, even with de-duplication. This will not continue indefinitely, though, as flash costs are falling and capacities rising, plus there are limits to the data density you can achieve on disk platters. As they get closer to their mechanical limits this will also drive the move to flash.”
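To put that cost argument in rough numbers, here is a back-of-the-envelope sketch. The prices and reduction ratios are assumptions for illustration only, not vendor figures; the point is the shape of the sum, which is cost per usable terabyte once data reduction is factored in.

# Back-of-the-envelope comparison: cost per usable TB.
# All prices and ratios here are assumptions, not vendor figures.

def effective_cost_per_tb(raw_cost_per_tb, data_reduction_ratio=1.0):
    """Cost per logical (usable) TB once de-dupe/compression is applied."""
    return raw_cost_per_tb / data_reduction_ratio

disk_cost = 30.0    # assumed $/raw TB for a 4TB or 6TB nearline drive
flash_cost = 300.0  # assumed $/raw TB for enterprise flash

for reduction in (1.0, 3.0, 5.0):
    print(f"flash at {reduction:.0f}:1 -> "
          f"${effective_cost_per_tb(flash_cost, reduction):.0f}/TB usable, "
          f"vs disk at ${disk_cost:.0f}/TB")

# Break-even: the reduction ratio flash needs to match disk on $/usable TB.
print(f"break-even reduction ratio: {flash_cost / disk_cost:.1f}:1")

On those assumed prices flash needs roughly a 10:1 reduction ratio to reach parity with nearline disk on cost per usable terabyte, which is why the data-reduction technologies discussed below carry so much weight.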

Incredible shrinking data

A caveat here is that the flash crew has also leveraged other technologies to bring the cost down towards hard-drive levels. The big ones are de-duplication and compression – flash is so fast, with such low latency, that it becomes feasible to data-reduce primary storage on the fly.

As an example, HP claims that this can reduce both footprint and power consumption by 80 per cent, potentially replacing four racks of high-end storage with a quarter-rack array.

However, while this should work well for many applications, there may be issues in regulated industries, where the regulators still refuse to accept de-duplicated and rehydrated data as a genuine copy of the original. And more generally, the compression ratio is not a given, with some data types much less compressible than others.
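To see why, a minimal sketch of the arithmetic helps. The dataset sizes and per-dataset ratios below are made up for illustration; the point is that a mixed workload's overall reduction is a size-weighted figure, not a single headline number.

# Overall reduction for a mixed workload = total logical TB / total physical TB.
# Dataset sizes and per-dataset ratios are invented for illustration.
datasets = {
    "databases":   (40, 3.0),   # logical TB, reduction ratio: compress well
    "vm images":   (30, 5.0),   # de-duplicate well
    "media files": (30, 1.1),   # already compressed, reduce poorly
}

logical = sum(size for size, _ in datasets.values())
physical = sum(size / ratio for size, ratio in datasets.values())

print(f"overall reduction: {logical / physical:.2f}:1 "
      f"({logical} TB logical held on {physical:.1f} TB of flash)")

On those invented figures the blended result lands nearer 2:1 than the 5:1 the best-behaved dataset achieves, so headline reduction claims deserve a hard look at the actual data mix.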

“Different datasets respond differently to different data reduction technologies,” says Vaughn Stewart, the chief technical evangelist at Pure Storage.

“De-dupe is great for files and pictures but not databases, for instance, while compression works well on databases but not on images or operating system binaries. That means that the more technologies a platform has and the more granular they are, the more it can provide.

“The question then becomes, if you can get to price equality, would you rather have disk or flash?

"I think that in two years time we will see all-flash in the tier-two capacity storage market too: it's a tenth of the power and a tenth of the footprint, so that's more storage per rack or floor tile.” ®
