
You're praying your biz won't be preyed upon? Have you heard of our lord and savior NVMe?

And the holy spirit, NVMe-oF?

Comment The amount of data being collected and held in systems is – yes, we know – increasing, as organisations generate and store data for real-time or post-real-time analysis.

One of the drivers behind this is digital transformation. When dealing with their banks, supermarkets, and airlines, people want to experience the same slick and quick interactions they enjoy with Amazon, Netflix, and Uber. If your IT system is stuck in the hard-drive era, and unable to chew through data at the same rate as that web-tier trio then, with apologies to Jack Swigert and Jim Lovell, Houston – you may have a problem.

These internet giants obviously run more virtual machines and access data faster than smaller enterprises saddled with legacy IT. That means, according to Forrester vice president and principal analyst Brian Hopkins, those using legacy tech risk becoming prey for faster-moving rivals.

So how do you stop getting eaten? One way is to transform your data center’s infrastructure, and an increasingly popular means of doing so is by using NVMe flash drives and the NVMe over Fabrics (NVMe-oF) protocol.

Enter Non-Volatile Memory Express (NVMe)

According to IDC's storage research vice president Eric Burgener, it won't be long before NVMe-based all-flash boxes take over and cannibalize the SAS-based all-flash array market. The reason? Growth in workloads that demand the performance offered by NVMe – think customer interactions, applications such as AI inference and advanced analytics, and devops, where teams depend on fast iterations. Each of these demands high bandwidth and low latency.

As for NVMe-oF, Burgener reckons a transition will occur during the next three to four years. NVMe-oF takes the performance and latency gains provided by NVMe and rolls them out over network fabrics such as Ethernet, Fibre Channel, and InfiniBand.

With NVMe-oF, you can reduce latency and increase throughput all the way from the software stack through to the storage array via the data fabric. It “makes sense for enterprises to understand what this technology can do for them so that they can integrate it into their own environments most cost-effectively," according to Burgener.

Secret sauce

The big win with the NVMe-oF protocol is that it delivers more IO operations in less time, so you can run more applications on bare metal or on virtual machines. Translated into business terms, that means more applications and faster services on the same or – hopefully – a reduced server-storage footprint. Of course, external storage can be connected by the same Fibre Channel or Ethernet cables as before, though upgrading the switching to take advantage of NVMe-oF will deliver vastly more IO operations to your servers, because everything runs more efficiently.

If you're keeping an eye on costs, you should see a fiscal return: increased capability with, at minimum, no expansion of the server-storage estate and, ideally, consolidation that reduces hardware and software licensing costs along with the associated costs of space, cooling, and power.

Operating-system blocker

At the root of this is the way server operating systems and storage networking technologies have processed storage requests. Historically, this has been slow, and has perhaps hobbled your application servers' performance. That matters because NAND flash drives let you feed data to a server's processors and system RAM with access latency many times lower than disk: a SAS-interface SSD can complete an access in around 30,000 nanoseconds, for example, compared with roughly one million nanoseconds for a disk access – 33.3 times faster.
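For a sense of scale, here's that arithmetic as a quick Python sketch, using only the figures quoted above (illustrative round numbers, not benchmarks):

# Latency figures quoted above, in nanoseconds (illustrative, not benchmarks)
disk_access_ns = 1_000_000   # roughly 1 ms for a spinning-disk access
sas_ssd_access_ns = 30_000   # roughly 30 microseconds for a SAS-attached SSD

speedup = disk_access_ns / sas_ssd_access_ns
print(f"SAS SSD vs disk: ~{speedup:.1f}x faster per access")  # ~33.3x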

However, the virtualized, multi-socket, multi-core servers in a modern data center demand more – much more: more data pumped into memory, faster. Rather than SAS drives, which have a single command queue and reach the PCIe bus through a SAS adapter, NVMe SSDs connect directly to the PCIe bus. NVMe is also multi-queued, supporting up to 64K separate queues, each with up to 64K commands.
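To see why that queue model matters, here's a rough Python comparison of how many commands each interface can keep in flight. The SAS queue depth is an assumption (a typical per-device figure of 256 is used here); the NVMe numbers are the nominal maximums mentioned above:

# How many commands each interface can keep in flight (nominal maximums).
# The SAS figure is an assumed typical queue depth; real values vary by HBA and drive.
sas_queues, sas_queue_depth = 1, 256                  # assumption for illustration
nvme_queues, nvme_queue_depth = 64 * 1024, 64 * 1024  # up to 64K queues x 64K commands

print(f"SAS:  {sas_queues * sas_queue_depth:,} commands in flight")    # 256
print(f"NVMe: {nvme_queues * nvme_queue_depth:,} commands in flight")  # 4,294,967,296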

Why does that matter? A bigger queuing system means computers can conduct more transactions simultaneously. We’re back to maximizing bang for buck: you have the same, or fewer servers, running a greater number of applications or serving more sessions to more users.

And so it is with NVMe. Each drive has an access latency of around 150μs, and a single drive can deal with multiple accesses at once; a drive tray with, say, 24 SSDs can deal with many, many more. An NVMe-oF flash array can therefore support many more IO operations per second than a disk array with the same number of drives – it is staggeringly more efficient at storing and delivering data than a spinning-disk array.
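As a very rough illustration of what that parallelism buys, the Python sketch below estimates a theoretical ceiling from the figures above, assuming – purely for illustration – 32 requests in flight per NVMe drive and ignoring controller, fabric, and software overheads:

# Crude theoretical ceilings only; assumes 32 requests in flight per NVMe drive
# (an assumption for illustration) and ignores controller, fabric, and software overheads.
nvme_latency_s = 150e-6   # ~150 microseconds per access, as above
disk_latency_s = 1e-3     # ~1 ms per access, as above
in_flight = 32            # assumed concurrent requests per NVMe drive
drives = 24

print(f"24 x NVMe SSD tray ceiling: ~{drives * in_flight / nvme_latency_s:,.0f} IOPS")  # ~5,120,000
print(f"24 x disk tray ceiling:     ~{drives / disk_latency_s:,.0f} IOPS")              # ~24,000 (disks largely serialize)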

NVMe-oF

So, you have NVMe drives housed in an external storage array or storage server, and these systems can share capacity and responsiveness with servers. But your arrays, at least where block data access is concerned, are typically connected using Fibre Channel or iSCSI over Ethernet, which are much slower than the PCIe bus – and that's a problem.

That's because you have a complex IO stack: an IO request issued by an application passes through the host operating system's storage stack, then the Fibre Channel or iSCSI drivers, the network link, the array controller, the internal array network, and finally the actual drives.

That takes time. Now, though, at least some of those steps can be avoided: NVMe-oF extends the NVMe protocol across the storage network fabric, tightly coupling solid-state storage to a server or array controller's processor and RAM via the PCIe bus. That matters for data-intensive applications, because stripping latency out of that multi-step process means applications are served faster.
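To make the shortened path concrete, here's a simple Python sketch of the two block-IO paths. The traditional stages follow the list above; the NVMe-oF path is a simplified assumption for illustration, not a definitive protocol diagram:

# Block-IO paths; stage names only, no timings implied.
traditional_path = [
    "application", "host OS storage stack", "Fibre Channel / iSCSI driver",
    "network link", "array controller", "internal array network", "drive",
]
# Simplified, assumed NVMe-oF path: NVMe commands stay end-to-end across the fabric,
# bypassing the traditional storage stack and protocol translation.
nvme_of_path = [
    "application", "NVMe-oF host driver", "network fabric",
    "array NVMe subsystem", "drive",
]
print(f"Traditional block IO: {len(traditional_path)} stages")
print(f"NVMe-oF:              {len(nvme_of_path)} stages")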

Engineers found a way to do this using remote direct memory access (RDMA) technology, which connects servers and storage running the NVMe-oF protocol across a storage network. With RDMA and NVMe-oF, storage requests no longer need to pass through the host operating system's traditional storage IO stack or other controller hardware to reach a drive over the network. The speed is remarkable: the fabric adds less than 10 microseconds of latency compared with a local, directly connected NVMe SSD.
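Using the figures already quoted in this piece – roughly 150μs for a local NVMe access, under 10μs of fabric overhead, and about 1ms for a disk access – the arithmetic looks like this:

# Figures quoted earlier in the article; real numbers depend on hardware and load.
local_nvme_us = 150       # ~150 microseconds for a local NVMe access
fabric_overhead_us = 10   # NVMe-oF over RDMA adds less than ~10 microseconds
disk_access_us = 1_000    # ~1 ms for a spinning-disk access

remote_nvme_us = local_nvme_us + fabric_overhead_us
print(f"Remote NVMe-oF access: ~{remote_nvme_us} microseconds")
print(f"Still ~{disk_access_us / remote_nvme_us:.1f}x faster than a disk access")  # ~6x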

What this means for you is that a 24-drive flash array connected to a set of application servers using NVMe-oF can satisfy many more IO requests than a SAS-connected array with the same 24 flash drives – and many, many more than a SAS-connected array of 24 hard disk drives.

An NVMe-oF storage system therefore means greater virtual-machine density in servers because it removes the data-delivery bottlenecks that would hold back the server’s processing cores. We’re back to talking about improved response times and the ability to run more applications: that means customer-facing and data-intensive applications able to scale and meet demand.

The future

What does this mean for your digital or big-data future? NVMe and NVMe-oF, at a fundamental level, mean a future-proofed storage layer, which will be the bedrock of your digitized business.

The combination of performance, capacity, and availability should mean faster throughput and lower latency for a new generation of applications. The pair offer not just raw performance and reduced latency, but also greater flexibility. Let's say you begin pooling storage virtually: with NVMe and NVMe-oF, you get the throughput to make all those SAN and NAS systems appear, act, and serve as one single system. That is of particular benefit if you're considering a move to a hyperconverged and software-defined infrastructure: Gartner reckons a fifth of shared accelerated storage products will be based on NVMe by 2021.

There are other benefits, too: a consolidated SAN or NAS estate working harder, which will translate into reduced software licensing costs and savings in hardware, power, space, and cooling.

Together, NVMe and NVMe-oF promise to disrupt the data center. With more products expected from vendors as the market grows, IT leaders should now start planning the workloads to move and how to architect for a smaller, higher-capacity, and lower-cost data center. ®
