How digitalisation will change your storage culture
The rise and rise of flash
How close to reality is the all-flash enterprise data centre?
Depending on infrastructure heritage, appetite and available capital, the answer is likely to be: "Closer than you think".
Another question is whether running an all-flash infrastructure is the right choice for your organisation.
Until relatively recently, an all-flash data centre may not have been viewed as a viable option. But today there are many new considerations that make the case for buying all-flash arrays compelling for large, medium and even smaller enterprises.
Dawn of flash
At the dawn of the flash storage era, which for enterprise users was really no more than a few years ago, some people talked in analogies of running racing cars in bicycle lanes.
Flash technology, it was thought, would be restricted to workloads such as high-frequency trading, where the key competitive advantage was the so-called "race to zero" transaction completion time, measured in microseconds. Some expressed the view that hybrid deployments of five per cent flash and 95 per cent disk would be restricted to specialist capital-market transaction engines within the data centres of investment banks and trading houses.
"Who needs half a million IOPS?" was not an uncommon question among storage buyers.
There appeared to be many factors stacked against all flash adoption.
Firstly, as NAND technologies entered the supply chain, the price differential per gigabyte between SSD and disk storage was vast. In 2010 it was reported that NAND flash memory was due to return to the $1 per gigabyte level, considered then to be the price threshold at which it would become competitive with HDDs.
Today the forecast is that NAND flash could drop below $100 per TB within two years and reach price equivalence with disk by 2023.
When flash appeared, data volumes were surging. The average annual rate of data growth rose to 50–60 per cent, a rate that continues today.
Even in a storage world of growing capacities and shrinking costs, it appeared that volume growth would always outpace the fall in the cost per gigabyte of flash, making it uneconomic for mass deployment as primary storage.
It was said that this would prohibit widespread adoption and prolong the classic three-tier storage architecture of high-speed SAS disk for performance, SATA/SAS for volume and tape for archive for decades to come.
Instead, the move from 100 per cent disk (7,200, 10,000 and 15,000 rpm SATA and SAS drives) to hybrid infrastructure (initially configured at five per cent flash and 95 per cent disk, then 15 per cent flash, then 30 per cent flash) to the deployment of all-flash arrays (AFAs) has happened far quicker than anyone forecast.
It turned out that the answer to the questions of which workloads need hundreds of thousands of IOPS, and which firms can afford to use flash for high-capacity workloads, was "more applications and more companies than you might think".
There are several contributing technological factors and one fundamental economic change driving flash uptake.
- The price gap closed more rapidly than predicted
- Demand for greater performance started to rise sharply
- Reliability, operating efficiency, manageability and capacity issues were addressed
The economic driving force, meanwhile, is digitalisation.
Digitalisation is based on a perfect storm of mobile ubiquity, app-based economics and the continuous release cycle of responsive, web-scale applications that have totally changed consumer expectations. End-user and business-customer expectations are not far behind.
Consumer digital experiences have shaped performance expectations and enterprise applications are catching up.
According to industry analyst Gartner, digitalisation is: "The use of digital technologies to change a business model and provide new revenue and value-producing opportunities; it is the process of moving to a digital business."
There’s a lot of hype about digitalisation, but what does it actually mean in practice? Take a bricks-and-mortar fashion retailer: it may use a traditional storage stack for its supply chain management, but it is more likely to use a flash storage infrastructure for its customer-facing web applications. Online shoppers don’t dwell!
A bank must embrace a web payments platform, otherwise it will struggle to retain customers.
Or there’s your local pizza chain, which is likely to be signed up to one of the big online ordering platforms, which in turn will use flash in the cloud. Convenience is everything for customers, and performance is everything, especially at peak load times on Friday and Saturday nights.
In relatively few years, the number of applications requiring flash array features and function has escalated beyond all estimates.
And the expansion of flash is changing our storage culture.
But it’s not an all-flash world out there - it’s mixed, and what some users have found is that mixed environments can lead to significant management complexities, additional cost overheads and resource issues. Where an enterprise wishes to initially invest in a hybrid solution, a common management interface can reduce complexity.
In raw capacity terms, flash appeared to be at a disadvantage as hard disk capacity boomed.
However, applying technologies such as deduplication, compression and thin provisioning can increase effective SSD capacity by a factor of five. Reaching hundreds of terabytes of usable capacity on a single flash array is not uncommon.
Compression and deduplication are increasingly used to optimise system resources, reducing total cost of ownership by improving the way growing volumes of data are stored. Data size can be cut by combining the two techniques, with users targeting the specific volumes that reduce best.
The payoff is that you can run multiple applications on fewer SSDs and tune the all-flash array to manage different data types according to their requirements, matching the performance of workloads spread across dozens of smaller hard drives.
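The capacity arithmetic above can be sketched in a few lines. This is a back-of-envelope illustration only: the 5:1 combined reduction ratio matches the factor-of-five claim above, but the 90 TB raw figure is an assumed example, not a vendor specification.

```python
# Illustrative sketch: effective (usable) capacity of a flash array
# after data reduction. Raw capacity and reduction ratio are assumptions.

def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Usable capacity after combined deduplication and compression."""
    return raw_tb * reduction_ratio

raw_tb = 90.0   # assumed raw SSD capacity of one array, in TB
ratio = 5.0     # assumed combined dedupe + compression ratio (5:1)

print(effective_capacity_tb(raw_tb, ratio))  # 450.0 TB usable
```

Real reduction ratios vary widely by data type, which is why targeting the volumes that deduplicate and compress well matters.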
A planet-sized problem
As has been said, the motor car was not the result of developing a better horse. Flash is not simply a better disk; if it were, it could not deliver ultra-fast response times up to 500 times faster than those of hard disks.
But this is not simply about raw performance. Flash, being solid state, has no moving parts, so it is more reliable and less prone to physical faults than disk. It is also easier from the perspective of set-up, systems management and performance tuning.
Amid all the talk of performance and reliability, however, the environmental factor must not be overlooked.
It takes raw power to keep disks spinning; power costs money and energy supplies can be insecure. So the answer to efficient performance is not ever-faster, higher-capacity spinning disks drawing ever-greater amounts of power. It is moving to an alternative that either uses less power or consumes it more efficiently.
With energy costs rising, power-hungry disk machines therefore hit the data centre on the dual fronts of power and cooling.
Taking into account the physical footprint, raw power and cooling costs, running all-flash equipment can cost as much as 85 per cent less than running high-speed disk.
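The power-and-cooling side of that comparison can be sketched with simple arithmetic. Every figure below is an illustrative assumption (shelf wattages, a PUE of 1.8 to approximate cooling overhead, and a $0.15/kWh tariff), not a measured result; the point is the method, not the numbers.

```python
# Illustrative sketch: annual power + cooling cost of a disk shelf vs a
# flash shelf. Wattages, PUE and electricity price are assumptions.

HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(watts: float, pue: float, price_per_kwh: float) -> float:
    """Yearly cost of powering a device, with cooling folded in via PUE."""
    kwh = watts / 1000 * HOURS_PER_YEAR * pue
    return kwh * price_per_kwh

disk_cost = annual_energy_cost(watts=1200, pue=1.8, price_per_kwh=0.15)
flash_cost = annual_energy_cost(watts=250, pue=1.8, price_per_kwh=0.15)

saving = 1 - flash_cost / disk_cost
print(f"flash saves {saving:.0%} on power and cooling")  # prints "flash saves 79% on power and cooling"
```

Note that this captures only energy; the 85 per cent figure above also includes floor space and other operating costs.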
Finally, according to a report from Wells Fargo Securities, SSD flash will account for nearly 20 per cent of total enterprise storage capacity shipments within three years.
Zettabytes of data are being stored every year, and with annual storage capacity growth expected to continue or even accelerate, the amount of flash technology deployed could double and double again by 2021.
The all-flash data centre is coming.