Data storage quicker than a flash, large enough to make servers think 'Got DRAM'... Intel makes this possible

Find out where your workloads fit in tomorrow's memory hierarchies

Sponsored Flash memory has given a welcome boost to enterprise storage over the past decade or so, allowing applications to overcome the limitations of spinning disk storage systems. This has enabled common enterprise workloads such as databases, business intelligence, and online transaction processing to get faster access to data and deliver increased productivity.

But there are signs that this may not be enough for much longer, as the demands of applications – such as big data analytics – continue to grow, and new and emerging workloads place further demands on storage infrastructure. Can new technology approaches, such as storage class memory, provide the answer?

Workloads are now starting to incorporate techniques such as AI and machine learning. These may not be widely used in enterprise applications at the moment, but their use is projected to grow, especially in combination with other techniques such as data analytics.

In a report [PDF] from Deloitte Insights in 2018, 82 per cent of early adopters of enterprise AI said they had already seen a return on their investments. Nearly 60 per cent said they were seeking to adopt technologies such as machine learning and natural language processing by embedding them in enterprise software systems, such as CRM or ERP installations.

Uptake varies sector by sector. Healthcare professionals, for example, are looking at using machine-learning techniques to offer patients a speedier and more reliable diagnosis of health conditions by analyzing huge volumes of medical imaging scans. A June 2019 report by Broadridge Financial Solutions, meanwhile, found growing interest in applying AI to data mining, post-trade processing, market analytics, and trading systems.

The problem with some of these emerging workloads is that they can be much more data-intensive than traditional applications. Analytics tends to rely on constantly feeding a stream of data to the processor, while AI workloads can have a rather more unpredictable mix of random and sequential reads and writes of various sizes, which traditional storage does not handle well.

Flash has proven to be the answer so far, but there is still an enormous gulf between the speed of flash-based storage and the speed of the computer’s memory (DRAM) where the processing takes place.

This means there is a gap in the market for any new technology that can bridge the divide by delivering higher performance than flash storage, but at a lower cost than DRAM. This last factor is important, since the lower the cost, the more capacity enterprises will be able to fit into servers in order to give a boost to applications.

Memory hierarchy

It is a fact of life that the faster the memory, the more costly it typically is. This has led to a memory hierarchy, with a small amount of the fastest and most expensive memory at the top and successively larger layers of slower, cheaper memory sitting underneath it.

DRAM sits at the apex as the fastest memory (excluding that on the processor itself), with access latency of somewhere in the region of 10ns to 100ns, so you want the data you are processing to go here. But DRAM is also volatile, losing its contents in the absence of power. This means that data has to be stored elsewhere and transferred into DRAM for processing. Until recently, this meant using hard drives, which are about a million times slower than DRAM.

With the development of high-density flash making flash storage more economical, today’s computer systems have the benefit of solid state drives (SSDs) that can serve in place of hard drives, delivering a boost in performance.

Flash is still more expensive than disk storage, so most enterprises keep frequently accessed data on flash, while the bulk of their data is held on a larger disk layer beneath it in the memory hierarchy. Below this sits an archive layer that may be tape or even cloud-based storage.

The latest innovation in flash storage is the NVMe interface, designed specifically for high-speed access to solid state drives. However, DRAM is still about a thousand times faster than the speediest flash drives, even those accessed using NVMe.

Mind the memory gap

Storage class memory (SCM) has been put forward as one answer. There are actually a number of technologies classed as SCM, because their characteristics place them somewhere between DRAM and flash in the memory hierarchy. In other words, they are typically slower than DRAM but less costly, while also being faster but more expensive than flash. SCM ranges from exotic-sounding technologies, such as Nanotube RAM and Resistive RAM, to the old perennial Phase-Change Memory (PCM). Few of these have made much progress out of the lab and into the commercial marketplace.

Of those that have, MRAM (in its Spin Transfer Torque, or STT-RAM, form) has potential: the technology is about as fast as DRAM and is persistent, meaning it does not lose its contents when powered down. However, it currently appears costly to manufacture in large capacities, so it has chiefly been used in niche applications, such as non-volatile caches in storage arrays.

Then there are non-volatile DIMMs, which typically combine DRAM with flash chips that preserve the memory's contents if power is lost. None of these approaches has so far gained broad adoption.

So far, the SCM technology that appears to have the strongest likelihood of success is 3D XPoint®, jointly developed by Intel® and Micron Technology. This has been marketed by Intel® in a line of Optane™ DC SSDs since 2017, and is now also available as Intel® Optane™ DC Persistent Memory modules that slot into standard DIMM sockets inside servers, alongside DRAM modules.

Intel® Optane™ SSDs are already faster than NAND flash SSDs, and Optane™ DIMMs are faster still. The technology remains somewhat slower than typical DDR4 DRAM, but it is close enough that it can be treated as a slower tier of main memory.

One of the reasons that Optane™ DIMM technology is well placed to succeed is that Intel® has built support for the SCM technology into its Second Generation Xeon® Scalable processors, meaning there will eventually be many servers in the field capable of using the technology.

Support for Optane™ DIMMs in the processor's memory controller enables them to be operated in two different modes: Memory Mode, which is transparent to software and therefore requires no changes to application code, and App Direct Mode, for applications that have been explicitly developed to make use of the two different kinds of memory in the system.

Memory Mode treats the DRAM in the server as a read and write cache for the Intel® Optane™ memory. The idea is that Intel® Optane™ is cheaper than DRAM, so large-capacity Optane™ DIMMs (currently available with up to 512GB per DIMM) can be paired with DRAM DIMMs to increase the effective memory capacity of a server and enable the handling of larger datasets in memory.
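
Because Memory Mode is transparent, the operating system simply reports the Optane™ capacity as ordinary system RAM, with the DRAM cache hidden from view. A minimal sketch in C, assuming a Linux server; the capacities in the comments are purely illustrative:

    /* Memory Mode sketch: existing code sees the enlarged capacity
       without modification. Linux-specific (_SC_PHYS_PAGES). */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long pages = sysconf(_SC_PHYS_PAGES); /* total physical pages */
        long psize = sysconf(_SC_PAGE_SIZE);  /* bytes per page */

        /* On a server with, say, 192GB of DRAM in front of 1.5TB of
           Optane DIMMs, this reports roughly 1.5TB: the DRAM acts as
           a cache, not as additional capacity. */
        printf("Visible RAM: %.1f GiB\n",
               (double)pages * psize / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }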

This ability to handle larger datasets in memory, and access them faster than having to fetch everything from disk, will prove crucial to a number of industries. In financial services, for example, there is a growing need to analyse more and more market data to spot trends and to make trades faster than rivals can.

App Direct Mode is for software that is aware it has access to a mix of persistent memory and DRAM: it can keep latency-sensitive data in DRAM, while large data sets or information that requires persistence can be placed in the larger but slower Optane™ memory. Alternatively, App Direct Mode allows the Optane™ memory to be accessed through a filesystem API and treated as if it were a small, very fast SSD.
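
Intel® supports this programming model through its open source Persistent Memory Development Kit (PMDK). A minimal App Direct sketch using PMDK's libpmem library, built with cc demo.c -lpmem; the path /mnt/pmem/log is an assumption, standing in for any file on a DAX-mounted filesystem backed by Optane™ DIMMs:

    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Map a 4KB file (creating it if needed) as directly
           addressable memory; /mnt/pmem/log is a hypothetical path. */
        char *buf = pmem_map_file("/mnt/pmem/log", 4096, PMEM_FILE_CREATE,
                                  0666, &mapped_len, &is_pmem);
        if (buf == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Ordinary CPU stores, then a flush to make them durable:
           no write() system calls or block I/O involved. */
        strcpy(buf, "hello, persistent memory");
        if (is_pmem)
            pmem_persist(buf, mapped_len);  /* cache-flush path */
        else
            pmem_msync(buf, mapped_len);    /* fallback to msync() */

        pmem_unmap(buf, mapped_len);
        return 0;
    }

The point of the design is that persistence is reached with memory instructions rather than through the storage stack, which is where App Direct Mode's latency advantage over even NVMe SSDs comes from.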

Developer support

Another reason why Optane™ DIMMs have a greater chance of success is that leading enterprise software developers are integrating support for the technology into various platforms and applications. SAP has added support for Optane™ DC Persistent Memory into its SAP HANA in-memory database, which enables dramatically faster data load times during start-up because the data is still in memory from when the system was powered off.

Oracle has demonstrated its TimesTen in-memory database using Optane™ App Direct Mode, and incorporated support into its Exadata engineered database systems.

VMware has also added support for Optane™ DC Persistent Memory into its vSphere virtualisation platform, supporting Memory Mode for increased memory capacity in workloads as well as allowing customers to deploy applications that make use of App Direct Mode.

It is early days yet for storage class memory, though the technology appears to have great potential not just for boosting applications that require low latency, such as machine-learning inference, but also for delivering a greater memory space for in-memory processing of large data sets.

However, the success or failure of all SCM technologies depends as much upon economics as anything else: if it can be manufactured at a sufficiently attractive cost per gigabyte compared with DRAM, it is likely to be adopted for applications where the extra performance counts, and may even become a standard feature of server systems at some point in the future.

Sponsored by Intel®.
