
Getting to grips with SSD performance

Lies, damned lies and SSD performance claims

Deep dive Ever been annoyed that solid-state disk (SSD) performance can drop off precipitously once the wretched thing has gone through a few writes, erases and re-writes? That's because the fresh-out-of-the-box (FOB) performance may bear no relation to the steady-state performance – none at all. This deep dive explains what's going on and how this tetchy problem can be fixed.

El Reg has teamed up with the Storage Networking Industry Association (SNIA) for a series of deep dive articles. Each month, the SNIA delivers a comprehensive introduction to basic storage networking concepts. This month it looks at SSD performance testing and how we can get a standardised way of comparing different vendors' products.

We say get on to your SSD vendor, once you have read this, and push it to use this testing methodology.

The write stuff

SSDs are becoming increasingly popular in client and enterprise mass-storage applications due to their fast performance, low power consumption, rugged endurance, increased reliability, small physical profile and efficient cost per I/O. However, there is a clear need for SSD performance standards due to idiosyncrasies of NAND flash-based SSD behaviours and market confusion surrounding SSD performance comparisons with hard disk drives.

This article will briefly examine the unique and complex performance characteristics of NAND flash-based SSDs, the best-practice requirements for SSD performance test methodologies and the conditions relevant to client and enterprise workloads, and will give an overview of the SNIA enterprise and client SSD performance test specifications.

NAND flash-based SSD performance characteristics

NAND flash-based SSD performance is highly dependent on two factors: write history of the SSD, and the hardware and software environment in which the SSD's performance is measured.

Write history

Due in large part to the "virtual mapping" employed by NAND flash SSDs (Logical Block Address (LBA) to Physical Block Address (PBA) mapping), SSD performance is heavily dependent on the following (a toy sketch of this indirection appears after the list):

  1. the amount and type of data written to the device;
  2. the amount of fragmentation of the LBA and PBA look-up tables;
  3. the amount of "read, modify, write" activity;
  4. the amount of NAND "over-provisioning" relative to user capacity;
  5. the amount of NAND reserved for controller use; and
  6. the complexity and efficiency of the NAND flash translation layer and controller algorithms (for ECC, garbage collection, compression, and more).
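
To see why this look-up table makes performance history-dependent, here is a toy Python sketch of the LBA-to-PBA indirection. It is an illustration only, not any vendor's flash translation layer: the class, its sizes and its one-line garbage collector are all invented for the example.

    class ToyFTL:
        """Toy flash translation layer: maps logical block addresses to
        physical pages, and cannot rewrite a page in place."""

        PAGES_PER_BLOCK = 4  # arbitrary toy value

        def __init__(self, num_blocks):
            self.lba_to_pba = {}                      # logical -> physical page
            self.free_pages = list(range(num_blocks * self.PAGES_PER_BLOCK))
            self.stale = set()                        # pages holding superseded data

        def write(self, lba):
            if lba in self.lba_to_pba:                # overwrite: the old page
                self.stale.add(self.lba_to_pba[lba])  # becomes stale and is unusable
            if not self.free_pages:                   # until garbage collection
                self._garbage_collect()
            self.lba_to_pba[lba] = self.free_pages.pop()

        def _garbage_collect(self):
            # A real drive must copy still-live pages and erase whole blocks
            # here; that extra "read, modify, write" work is why a well-used
            # drive writes more slowly than a fresh one.
            self.free_pages.extend(self.stale)
            self.stale.clear()

    ftl = ToyFTL(num_blocks=256)
    for i in range(2000):          # hammer a small hot range of LBAs
        ftl.write(i % 512)         # forces overwrites and GC cycles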

In simple terms, a fresh SSD with ample pre-erased blocks will exhibit relatively high random write performance, which will settle over time as more data is written to the SSD. Thus, both "preconditioning" of the SSD (used as an initial step in testing SSDs; see SNIA PTS below) and the amount and type of use (or "workload") will affect the overall performance of the SSD measured. Independent of the hardware and software environment (see below), it is critical to understand both the write history of the SSD and the type of workload used when discussing or comparing SSD performance.
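
As a concrete example of what preconditioning can involve, here is a minimal Python sketch that drives the widely used fio tool through two steps: a sequential fill of the whole device, then the test stimulus itself run for a fixed period. The device path, loop count and runtime are placeholder assumptions rather than PTS-mandated values, and running this against a real device would destroy its contents.

    # Sketch of a PTS-style preconditioning pass driven via fio.
    # WARNING: destructive to the target device. Assumes fio is installed
    # and the device path below is the drive under test.
    import subprocess

    DEV = "/dev/nvme0n1"  # hypothetical device under test

    def fio(*args):
        subprocess.run(["fio", "--direct=1", "--ioengine=libaio",
                        "--filename=" + DEV, *args], check=True)

    # Step 1: sequential fill of the user capacity (twice, an illustrative
    # choice) to erase any fresh-out-of-box advantage.
    fio("--name=wipc", "--rw=write", "--bs=128k", "--iodepth=32",
        "--loops=2", "--size=100%")

    # Step 2: apply the test stimulus (here, random 4K writes) long enough
    # for performance to settle toward steady state.
    fio("--name=wdpc", "--rw=randwrite", "--bs=4k", "--iodepth=32",
        "--time_based", "--runtime=600")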

Hardware and software environment (HSE)

In addition to the write history and workload of the SSD, any performance measurement must consider the Hardware and Software Environment (HSE) in which the SSD operates. This can affect performance in several ways:

  • it may create a bottleneck (if insufficient demand intensity is generated to feed the SSD – eg, not enough outstanding I/Os to reach peak IOPS; see the sweep sketch after this list);
  • it must be normalised if one is trying to compare performance (one must use the same HSE to test two different SSDs; both test hardware and test software);
  • the demand intensity may be optimal for one type of stimulus and not another (ie, the optimal HSE outstanding I/O – in queue depth and thread count – may be different for different access patterns, thus favouring one type of test over another); and
  • the given HSE may not be representative of the reader's HSE (ie, the SSD tested may not show the same performance measurements when used by the reader in their own HSE).
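
One practical way to check the first point is to sweep the outstanding I/O and watch whether IOPS stops climbing. The Python sketch below does this with fio's JSON output; the device path and the sweep values are illustrative assumptions, and random reads are used so the sweep itself does not disturb the drive's write history.

    # Sweep outstanding I/O (queue depth x threads) to confirm the host can
    # generate enough demand intensity to saturate the drive. Assumes fio.
    import json
    import subprocess

    DEV = "/dev/nvme0n1"  # hypothetical device under test

    def rand_read_iops(queue_depth, jobs):
        out = subprocess.run(
            ["fio", "--name=qdsweep", "--filename=" + DEV, "--direct=1",
             "--ioengine=libaio", "--rw=randread", "--bs=4k",
             "--iodepth=" + str(queue_depth), "--numjobs=" + str(jobs),
             "--group_reporting", "--time_based", "--runtime=60",
             "--output-format=json"],
            check=True, capture_output=True, text=True)
        return json.loads(out.stdout)["jobs"][0]["read"]["iops"]

    for qd in (1, 2, 4, 8, 16, 32):
        print(f"QD={qd:2d}  IOPS={rand_read_iops(qd, jobs=4):,.0f}")

    # If IOPS is still climbing at the top of the sweep, the HSE rather than
    # the SSD is the bottleneck, and reported figures understate the device.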

SSD performance comparisons

To address these HSE issues, basic SSD test methodology must ensure adequate demand intensity, normalise the HSE by use of a Reference Test Platform (RTP), identify and set the test conditions to match the intended workload type, and disclose the HSE used when reporting SSD performance.

For device-level comparison, use of an RTP and a standardised test specification (such as the SNIA PTS) allows the test operator to reasonably compare performance between different SSDs at the device level (ie, performance where the influence of the HSE, OS and applications is minimised). To compare performance at the file system or application level, use of identical HSE and applications is necessary to ensure comparable performance measurement (such as use of SPC benchmarks to compare typical enterprise OLTP, database or email applications at the file system level).

Best practices for SSD performance test – performance test specification

Given the high dependency of SSD performance on write history and HSE, the SNIA Solid State Storage Initiative (SSSI) and SSS Technical Working Group (TWG) have developed and released a seminal Solid State Storage Performance Test Specification (PTS) for both Enterprise (E) and Client (C) applications.

Key to both PTS-E and PTS-C is the use of an RTP, prescribed "Pre-Conditioning" (PC) methodologies, workload/use case test settings, measurement during "Steady State" and standardised test metrics and reporting formats. Use of the PTS allows the test sponsor to ensure that the idiosyncrasies of SSD performance are accounted for when making SSD performance measurements.
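
The "Steady State" measurement is worth illustrating. The sketch below is one plausible detector in the spirit of the PTS: take a five-round window of a tracking variable (say, 4K random write IOPS per round) and require both a bounded excursion around the average and a near-flat linear trend. The 20 per cent and 10 per cent thresholds are a paraphrase of commonly quoted PTS criteria; the specification itself is the authority on the exact definition.

    # Steady-state check over a rolling window of per-round results.
    # Thresholds and window size paraphrase the PTS and are not normative.

    def is_steady_state(rounds, window=5, max_excursion=0.20, max_slope=0.10):
        if len(rounds) < window:
            return False
        w = rounds[-window:]
        avg = sum(w) / window
        # Criterion 1: the window's min/max stay close to its average.
        if (max(w) - min(w)) > max_excursion * avg:
            return False
        # Criterion 2: the least-squares slope across the window is nearly flat.
        xs = range(window)
        x_mean = (window - 1) / 2
        slope = (sum((x - x_mean) * (y - avg) for x, y in zip(xs, w))
                 / sum((x - x_mean) ** 2 for x in xs))
        return abs(slope) * (window - 1) <= max_slope * avg

    # A typical trace: a high FOB burst, then settling.
    iops_per_round = [91000, 62000, 48000, 41000, 39500, 39000, 39200, 38800]
    print(is_steady_state(iops_per_round))  # True: the drive has settled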

Workload conditions

To obtain relevant performance measurements, workload conditions are defined as either enterprise or client. As defined in the PTS, enterprise refers to "Servers in data centres, storage arrays, and enterprise wide / multiple user environments that employ direct attached storage, storage attached networks and tiered storage architectures", while client refers to "a single user desktop or laptop system used in home or office."

The PTS sets the test conditions and parameters (such as the preconditioning range, the test active range and the amount and type of segmentation in the test data stimulus) in an attempt to more closely reflect the workload characteristics of enterprise or client use cases. By adjusting these parameters, the tests attempt to account for characteristics such as small block journalling, mixed workloads, latency spikes, data hot zones, locality of reference and active migrating data footprint.

SNIA publishes performance test specification

The Solid State Storage Performance Test Specifications look like this, basically:

  • PTS-E. PTS-E rev 1.0 sets forth four standard performance tests: a write saturation test (continuous RND 4K write IOPS without preconditioning) and three steady state tests (test measurements taken after preconditioning): IOPS (mixed block sizes and R/W mixes for RND and SEQ stimuli; a sketch of the sort of test matrix involved follows this list), throughput (mixed block sizes and R/W mixes, measured in MB/sec) and latency (average and maximum latency, measured in msec).
  • PTS-C. PTS-C rev 1.0 has three steady state tests: IOPS, throughput and latency, with conditions adjusted to reflect client use (limited PC range, limited test active range, and defined test stimulus size and segmentation).
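
To make the IOPS test concrete, here is a small sketch of the sort of block size and read/write mix matrix that each test round iterates over. The specific values below are representative of published descriptions of the PTS rather than a verbatim copy of the specification.

    # Illustrative PTS-style IOPS test matrix; values are representative,
    # not normative. Each (block size, R/W mix) cell is measured once per
    # round, and rounds repeat until the steady-state check passes.
    from itertools import product

    BLOCK_SIZES = ["0.5K", "4K", "8K", "16K", "32K", "64K", "128K", "1024K"]
    RW_MIXES = ["100/0", "95/5", "65/35", "50/50", "35/65", "5/95", "0/100"]

    for bs, mix in product(BLOCK_SIZES, RW_MIXES):
        print(f"round cell: bs={bs:>5}  read/write={mix}")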

Standardised reporting

One of the key benefits of the SNIA PTS is the standardised methodology, metrics, tests and reporting format. Use of the standardised SNIA PTS reporting format ensures that published SSD performance measurements fully disclose the HSE used for testing and the relevant test conditions and settings (preconditioning, test range, access pattern, outstanding I/O), and that the results of the standard tests are reported in a uniform format for easy audit and comparison.
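
As an illustration of what such a disclosure might capture, the sketch below assembles a report structure covering the HSE, test conditions and results discussed above. Every field name and value is a placeholder invented for the example; the normative report headings are those in the PTS itself.

    # Placeholder report structure; field names and values are illustrative
    # only and do not reproduce the normative PTS report format.
    import json

    report = {
        "device": {"model": "ExampleSSD-800",            # hypothetical drive
                   "firmware": "1.2.3",
                   "user_capacity_gb": 800},
        "hse": {"platform": "reference test platform",   # disclose hardware...
                "os": "example Linux build",             # ...and software used
                "tool": "fio"},
        "conditions": {"preconditioning": "2x sequential fill, then workload",
                       "active_range": "100%",
                       "outstanding_io": {"queue_depth": 32, "threads": 4}},
        "results": {"rnd_4k_write_iops_steady_state": 38800,   # placeholder
                    "latency_ms": {"average": 0.8, "maximum": 12.5}},
    }
    print(json.dumps(report, indent=2))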

Conclusion

SSDs are increasingly being designed into the mass storage ecosystem, in both enterprise and client environments. Understanding and planning for SSD performance requires the creation, implementation and use of performance standards to ensure that relevant, reliable and repeatable comparison measurements are available to the end user, integrator and designer. As the performance of NAND flash SSDs continues to increase, adherence to commonly accepted industry performance standards will allow for the widespread adoption of SSDs in today's and tomorrow's computing environments.

The recent publication of the SNIA PTS is a significant and important step in this effort. To read more about SNIA’s Solid State Storage technical work, activities and upcoming programmes, visit: www.snia-europe.org/en/technology-topics/solid-state-storage/index.cfm.

Bootnote

This article was written by Eden Kim, the Chair of the SNIA’s Solid State Storage Technical Working Group, and an SSSI Governing Board member. He works for Calypso Systems.

For more information on this topic, visit: www.snia.org and www.snia-europe.org.

About the SNIA

The Storage Networking Industry Association (SNIA) is a not-for-profit global organisation, made up of some 400 member companies spanning virtually the entire storage industry. SNIA's mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organisations in the management of information. To this end, the SNIA is uniquely committed to delivering standards, education, and services that will propel open storage networking solutions into the broader market.

About SNIA Europe

SNIA Europe educates the market on the evolution and application of storage infrastructure solutions for the data centre through education, knowledge exchange and industry thought leadership. As a Regional Affiliate of SNIA Worldwide, we represent storage product and solutions manufacturers and the channel community across EMEA.
