All-flash arrays helped you escape the legacy storage world. Now it's time to kick it up a gear

A flashy fix for the latency intolerant

Sponsored Applications and databases are increasingly latency intolerant. For a while it seemed the latency problem had been solved by mainstream adoption of the all-flash array (AFA) as a primary storage platform, but scale and new demands have presented fresh challenges and shown that the initial flash fix was only temporary.

That's a problem as we enter the zettabyte era and organisations suck up data at a staggering rate, along with new data types and data from new sources. At the apex of this are IoT and AI, pushing higher volumes of data in different formats into the organisation, while machine learning training models prove to have an insatiable appetite for more.

But a number of factors are putting the brakes on how the data feeding such systems can be managed, stored and analysed. Among them are the mixed nature of workloads and massively distributed IT infrastructure that spans on-prem environments and multiple clouds.

It's critical that the brakes come off, otherwise the benefits of new applications like AI will not be realised. The multi-petabyte data sets needed to train AI and machine learning models cannot tolerate the latency imposed by hardware limitations. As Facebook noted, machine learning "will result in … an increased network bandwidth for data access as well. So, significant local/nearby storage is required to allow offline bulk data transfers from distant regions to avoid stalling the training pipelines waiting for additional example data."

The door to flash is therefore open, but it's imperative that the platform delivers the speed and performance expected, otherwise it risks becoming part of the problem.

The race to innovate in AFA is clearly on, though not everyone is up to speed. According to a report from analyst firm ESG: "The ability to consolidate mixed workloads and functions onto a single all-flash storage system has proven to provide significant TCO benefits if an organisation's performance, reliability, and operational requirements can be met. While many storage vendors offer all-flash solutions, the design decisions and trade-offs made by these vendors can result in very different system capabilities and ultimately trade-offs in benefits to an organisation."

Not all AFAs are the same

One vendor in the race is Huawei, which has invested heavily in research and development from the operating system and SSDs to algorithms, to deliver advances in performance at scale.

The result is the Huawei OceanStor Dorado V3. The system is built on a Non-Volatile Memory Express (NVMe) architecture that reduces storage access latency and ensures high availability. It supports direct communication between the CPU and NVMe SSDs, eliminating the need for SCSI-to-SAS protocol conversion and shortening the data transmission path to cut end-to-end latency. The system also uses FlashLink technology to synchronise the data layout between SSDs and controllers, driving latency down further.

Duplication and garbage data can become serious issues when data is ingested at great volume, problems that lower system performance and drive up costs. Inline deduplication and compression technologies, however, can release the storage capacity occupied by redundant data. Huawei addresses these challenges through its SmartDedupe (intelligent inline deduplication), SmartCompression (intelligent inline compression) and SmartThin (intelligent thin provisioning) features.
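Purely as an illustration of what inline data reduction does, and not a description of how Huawei's SmartDedupe or SmartCompression are actually implemented, the following Python sketch deduplicates fixed-size blocks by content hash and compresses only the unique blocks before they would be written to flash:

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # hypothetical 4 KiB block size; real arrays pick their own granularity


def ingest(data: bytes, store: dict) -> int:
    """Split data into blocks, deduplicate by content hash, compress unique blocks.

    Returns the number of compressed bytes that would actually hit flash.
    """
    written = 0
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in store:          # unseen block: compress and keep it
            store[fingerprint] = zlib.compress(block)
            written += len(store[fingerprint])
        # duplicate blocks cost only a metadata reference, no extra capacity
    return written


# Toy run with deliberately redundant input
store = {}
payload = (b"customer-record-" * 256) * 40    # 160 KiB of repeating data
stored = ingest(payload, store)
print(f"logical bytes: {len(payload)}, stored bytes: {stored}, "
      f"reduction ratio: {len(payload) / stored:.0f}:1")
```

A production array performs this inline, in the data path, which is why the two engines can be toggled per workload without a separate post-process step.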

These help the OceanStor Dorado V3 deliver a data reduction ratio of up to 5:1, which helps lower power consumption, improve cooling efficiency and, as a result, cut end-to-end OPEX by 75 per cent. Inline deduplication and compression can also be enabled and disabled separately to better suit specific application requirements.
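To put that 5:1 figure into concrete terms, here is a quick back-of-the-envelope calculation (my arithmetic, not a vendor sizing exercise) showing why data reduction feeds directly into power, cooling and OPEX:

```python
raw_tb = 100            # hypothetical usable flash capacity purchased, in TB
reduction_ratio = 5     # the quoted 5:1 data reduction ratio

effective_tb = raw_tb * reduction_ratio
print(f"{raw_tb} TB of flash presents about {effective_tb} TB of effective capacity")
print(f"that is roughly {1 / reduction_ratio:.0%} of the drives, power and cooling "
      f"needed to hold the same logical data without reduction")
```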

At Mobile World Congress 2019 Huawei went further with the launch of the OceanStor Dorado3000 V3, an all-flash storage system that delivers enterprise-class reliability. It allows organisations in finance, manufacturing and the carrier sector running databases, Virtual Desktop Infrastructure (VDI), Virtual Server Infrastructure (VSI) or SAP HANA to make the transition to all-flash.

Data acceleration

Many all-flash storage products are based on traditional storage systems, so they are unable to take full advantage of the capabilities of SSDs. Huawei, however, has developed its own SSDs, each with a flash translation layer (FTL) module, and has offloaded the FTL to hardware to improve performance.
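For readers unfamiliar with what a flash translation layer does, the sketch below is a deliberately simplified Python model of the core idea: mapping logical block addresses to physical flash pages and redirecting every overwrite to a fresh page. It is conceptual only and says nothing about how Huawei's hardware-offloaded FTL is built:

```python
class SimpleFTL:
    """Minimal flash translation layer model: logical block -> physical page.

    Flash pages cannot be overwritten in place, so every write goes to a
    fresh page and the old page is marked stale for later garbage collection.
    """

    def __init__(self, num_pages: int):
        self.free_pages = list(range(num_pages))  # pool of writable pages
        self.mapping = {}                         # logical block -> physical page
        self.stale = set()                        # pages awaiting garbage collection

    def write(self, lba: int) -> int:
        page = self.free_pages.pop(0)             # always write to a fresh page
        if lba in self.mapping:
            self.stale.add(self.mapping[lba])     # the old copy becomes garbage
        self.mapping[lba] = page
        return page

    def read(self, lba: int) -> int:
        return self.mapping[lba]                  # translate logical to physical


ftl = SimpleFTL(num_pages=8)
ftl.write(lba=0)
ftl.write(lba=0)                                  # overwrite: redirected, old page stale
print(ftl.read(0), sorted(ftl.stale))             # -> 1 [0]
```

Running this mapping and garbage-collection bookkeeping in dedicated hardware, rather than on the host or controller CPU, is the performance argument behind offloading the FTL.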

When it comes to data acceleration, Huawei brings three types of intelligent chip into play: an intelligent multi-protocol interface chip, an intelligent SSD controller chip, and an intelligent BMC management chip.

The intelligent multi-protocol interface chip provides 32 Gbit/s FC and 100GE front-end protocols. It also offloads protocol parsing, a task previously handled by general-purpose CPUs, which accelerates front-end access by 20 per cent.

The intelligent SSD controller chip runs the core Flash Translation Layer algorithm, accelerating data access within SSDs to deliver 80 μs read latency. Redirect-on-write (ROW) technology, used in the flash-oriented operating system, maintains performance after snapshots are enabled.
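Redirect-on-write differs from traditional copy-on-write in that taking a snapshot freezes the current block map rather than copying data, and subsequent writes simply land in new locations. The toy Python model below illustrates that general idea; it is not Huawei's implementation:

```python
class RowVolume:
    """Toy redirect-on-write volume: a snapshot freezes the block map,
    and new writes are redirected to fresh data instead of copying old data."""

    def __init__(self):
        self.blocks = {}        # live map: logical block -> data version
        self.snapshots = []     # each snapshot is a frozen copy of the map
        self._version = 0

    def write(self, lba: int):
        self._version += 1
        # Redirect: only the live map moves to the new data; snapshots keep
        # referencing the old version, so nothing is copied on the write path.
        self.blocks[lba] = f"data-v{self._version}"

    def snapshot(self) -> int:
        # A real array shares metadata rather than copying it; copying the
        # dict here just keeps the toy model easy to follow.
        self.snapshots.append(dict(self.blocks))
        return len(self.snapshots) - 1

    def read(self, lba, snap=None):
        source = self.snapshots[snap] if snap is not None else self.blocks
        return source.get(lba)


vol = RowVolume()
vol.write(7)
snap_id = vol.snapshot()
vol.write(7)                                      # post-snapshot write is redirected
print(vol.read(7), vol.read(7, snap=snap_id))     # -> data-v2 data-v1
```

Because no data is read back and rewritten when a snapshotted block is updated, write performance stays flat regardless of how many snapshots exist.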

The intelligent BMC management chip provides comprehensive fault management, with fault-location accuracy of up to 93 per cent, helping shorten fault-recovery times from two hours to 10 minutes.

On the software side, the OceanStor Dorado V3 uses Huawei's FlashLink intelligent algorithms to drive the intelligent chips, among other components, and to adjust the data layout between SSDs and controllers.

All told, the OceanStor Dorado V3 package can deliver latency as low as 0.3 ms through its combination of intelligent chips, NVMe architecture, and FlashLink algorithms that optimise how SSDs and controllers work together.

Testing for the future

The corporate storage environment does not stand still: new data types, greater interactivity, higher throughput and deadline-driven data flows demand a storage infrastructure that is predictable, scalable and powerful. Data-centre operators, meanwhile, are already deploying networking infrastructure capable of 40Gbps and 100Gbps while looking to Ethernet speeds of 200Gbps and 400Gbps over the next two to three years.

All this makes testing new systems a challenge. There is also no standard set of test conditions, so vendors may publish impressive performance figures without making the test parameters clear, which makes it difficult to assess actual performance accurately.

In a five-year TCO analysis, however, ESG highlighted the economic benefits of the Huawei OceanStor Dorado V3 against hybrid and first-generation all-flash storage systems from major vendors. Running OLTP and email workloads, the system serviced about 105,000 sustained IOPS at an average response time of just 0.3 ms.
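Those two figures can be cross-checked with Little's law, which ties throughput, response time and the average number of I/Os in flight together. The calculation below is my back-of-the-envelope check, not part of ESG's published methodology:

```python
iops = 105_000          # sustained IOPS reported in the ESG analysis
latency_s = 0.0003      # 0.3 ms average response time

# Little's law: average concurrency = arrival rate x time in system
outstanding_io = iops * latency_s
print(f"about {outstanding_io:.0f} I/Os in flight on average")   # roughly 32
```

A queue depth in the low thirties is entirely plausible for mixed OLTP and email workloads, so the throughput and latency numbers hang together.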

ESG Lab employed VMware vCenter and the Huawei OceanStor DeviceManager interface to manage and monitor the deployed applications with the results verified using output files and logs.

The system also comes with Huawei's Flash Ever program, which includes the company's Effective Capacity Guarantee Service, Ever New Device Service and Intelligent O&M Service.

Flash grows up

When flash storage technology was first deployed at an enterprise level, it was used to provide high-performance, super-fast IOPS in hybrid environments.

Today, however, flash is becoming the standard storage architecture for high-performance applications, which means the demand for flash performance is growing too. With its advances in SSDs and software, Huawei goes a long way towards meeting the challenges that persist in spite of flash.

Sponsored By Huawei
