How much disruptive innovation does your flash storage rig really need?

Random IO? Or just plain random?

Our technology world is fascinated by disruptive innovation. Every tech startup says its new technology is disruptive and therefore it is bound to succeed.

So it is with all-flash arrays, which can answer data requests in microseconds instead of the milliseconds needed by disk drive arrays.

Startups such as Pure Storage, SolidFire and Violin say they have best-of-breed products in the networked storage array category because they are all-flash with software designed from the ground up to control their arrays.

They provide flash speed at roughly the cost of the fastest-performing disks, the 15,000rpm drives. These are many times slower than flash because they must move their read/write heads across the platter surface to the right track and then wait for the right sector to rotate under the head.
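
To put rough numbers on that gap, here is a back-of-envelope sketch in Python; the seek time and flash latency figures are illustrative assumptions, not vendor specs:

# Back-of-envelope random-read latency: 15,000rpm disk vs flash.
RPM = 15000
rotation_ms = 60000.0 / RPM          # 4ms per full revolution
avg_rotational_ms = rotation_ms / 2  # on average, wait half a turn
avg_seek_ms = 3.5                    # assumed average seek time
disk_ms = avg_seek_ms + avg_rotational_ms

flash_ms = 0.1                       # ~100 microseconds, assumed

print("disk:  %.1f ms per random read" % disk_ms)   # ~5.5ms
print("flash: %.1f ms per random read" % flash_ms)
print("ratio: ~%.0fx" % (disk_ms / flash_ms))       # ~55x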

If you can't beat them, join them, say the disk-drive array vendors, who have all put SSDs in disk-drive slots to create faster-reacting storage.

Dell, EMC, Fujitsu, HDS, HP, IBM, NetApp and others have all done this, with some such as EMC and NetApp introducing flash caches as well to speed data on its way.

Not so fast (literally), say the all-flash array startups. The mainstream vendors' arrays with flash storage inside still use disk IO-based control software: legacy stacks that assume data is stored in sectors on tracks on the platters of spinning disk drives.

"Our software," they will say, "has been designed from the get-go to use flash and be aware that it wears out with repeated writing, unlike disk. It minimises the number of writes by coalescing them and deduplication to get rid of redundant data."

The mainstream disk-drive arrays can't do deduplication at all, or can't do it as well, because their disk drives are too slow for all the hash-table look-ups the mapping requires.
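
A toy sketch shows the general shape of those look-ups, assuming SHA-256 fingerprints and 4KB blocks (again, not any vendor's implementation). A duplicate block costs only a table probe; new content costs a physical write. At microseconds per probe flash makes this affordable; at milliseconds per probe disk does not:

import hashlib

# Inline block deduplication: fingerprints index physical blocks,
# so duplicate data costs a look-up instead of a flash write.
BLOCK = 4096
fingerprints = {}   # sha256 hex digest -> physical block address
blocks = []         # the "flash": one entry per unique block stored

def write_block(data):
    fp = hashlib.sha256(data).hexdigest()
    if fp in fingerprints:           # duplicate: look-up only, no write
        return fingerprints[fp]
    blocks.append(data)              # new content: one physical write
    fingerprints[fp] = len(blocks) - 1
    return fingerprints[fp]

# Two identical logical writes consume one physical block.
a = write_block(b"\xAA" * BLOCK)
b = write_block(b"\xAA" * BLOCK)
assert a == b and len(blocks) == 1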

Their disk-based software stacks are less efficient at reducing the number of writes, and extra steps have to be inserted in the lower-layer code to make the flash storage look like disk to the upper- and mid-level controller software.

Lean and mean

This makes the IO processing slower. "Our software is leaner and more efficient," say the vendors.

That this is true is shown by suppliers such as EMC and IBM buying all-flash array startups of their own: XtremIO for EMC and Texas Memory Systems (TMS) for IBM.

NetApp is developing its own all-flash array, called FlashRay, but Dell, HDS and HP have chosen not to go this route. They rely instead on all-flash versions of their existing Compellent (Dell) and StoreServ (HP) arrays, saying "our software is good enough to drive the flash hardware effectively and efficiently".

HDS has a flash acceleration sub-system it has developed for its VSP and HUS VM arrays and is saying pretty much the same thing regarding its array controller software. But these three suppliers say something else as well: that their array controller software has a full set of data and array management features that the all-flash array startups don't have.

For example, their arrays can replicate data between them as a way of protecting against an array failure. They can take snapshots of data and store them as another way of protecting against data loss.

They have highly reliable software, strengthened by years of development, which enterprise customers can rely on to store their data safely. Their arrays have controller and other features to ensure there is no single point of failure.

The management facilities of their arrays are mature, well understood by customers, and integrated with overall IT management frameworks and virtual server environments.

This level of data protection and management maturity and integration is too valuable to be simply discarded because there is a new hot box on the street.

Certainly the new all-flash arrays are disruptive, but so were hovercraft and Segway scooters, and neither turned out to have any lasting relevance. Innovation on its own is not sufficient to disrupt a market. Simply being new is not enough.
