Go forth and deduplicate

Will it benefit my data centre?

Deep dive El Reg has teamed up with the Storage Networking Industry Association (SNIA) for a series of deep dive articles. Each month, the SNIA will deliver a comprehensive introduction to basic storage networking concepts. This month the SNIA examines data deduplication.

This article, derived from existing SNIA material, describes the different places where deduplication can be done; explores the differences between compression, single-instance files, and deduplication; and looks at the different ways sub-file level deduplication can be carried out. It also explains what kind of data is well-suited to deduplication, and what is not.

Introduction

Data deduplication has become a very popular topic and commercial offering in the storage industry because of its potential for very large reductions in acquisition and running costs, as well as the increases in efficiency that it offers. With the explosive growth of data, nearly half of all data-centre managers rate data growth as one of their top three challenges. According to a recent Gartner survey, data deduplication offers an easy route for relieving pressure on storage budgets and coping with additional growth.

While seen as primarily a capacity-optimisation technology, deduplication also brings performance benefits – with less data stored, there is less data to move.

Deduplication technologies are offered at various points in the data life cycle, from source deduplication, through deduplication of data in transit, to deduplication of data at rest at the storage destination. The technologies are also being applied at all storage tiers: backup, archive, and primary storage.

Deduplication explained

Regardless of what method is used, deduplication (often shortened to "dedupe") is the process of recognising identical data at various levels of granularity, and replacing it with pointers to shared copies in order to save both storage space and the bandwidth required to move this data.

The deduplication process involves tracking all the duplicate data that has been eliminated, and identifying and storing only the data that is new and unique. The end user should be completely unaware that the data may have been deduplicated and reconstituted many times in its life.

There are different ways of deduplicating data. Single Instance Storage (SIS) is a form of deduplication at the file or object level; duplicate copies are replaced by one instance with pointers to the original file or object.
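As a rough sketch of how single-instance storage might be modelled (a toy illustration in Python, not any vendor's implementation), each unique file is stored once under a content hash and every duplicate simply becomes a pointer to that single copy:

```python
import hashlib

class SingleInstanceStore:
    """Toy file/object-level dedupe: one stored copy per unique content."""

    def __init__(self):
        self.instances = {}   # content hash -> the single stored copy
        self.catalog = {}     # file name -> content hash (the "pointer")

    def put(self, name: str, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        # Only genuinely new content consumes space; duplicates do not.
        self.instances.setdefault(digest, data)
        self.catalog[name] = digest

    def get(self, name: str) -> bytes:
        # Rehydration: follow the pointer back to the shared copy.
        return self.instances[self.catalog[name]]
```

Storing the same attachment under a hundred different names, for instance, would consume the space of one copy plus a hundred small catalogue entries.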

Sub-file data deduplication operates at a more granular level than the file or object. Two flavours of this technology are commonly found: fixed-block deduplication, where data is broken into fixed-length sections or blocks, and variable-length segments, where data is deduplicated based on a sliding window.
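The difference between the two flavours can be sketched in a few lines of Python (again purely illustrative; real products use proper rolling hashes such as Rabin fingerprints rather than the toy fingerprint below):

```python
import hashlib

def fixed_chunks(data: bytes, size: int = 4096):
    """Fixed-block dedupe: cut the stream into equal-sized blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data: bytes, window: int = 48, mask: int = 0x3F):
    """Variable-length segments: a sliding window over the content decides
    where each segment ends, so inserting a few bytes early in a file does
    not shift every later boundary the way fixed blocks would."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        fingerprint = sum(data[i - window:i])   # toy stand-in for a rolling hash
        if fingerprint & mask == mask:          # boundary condition met
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def deduplicate(chunks):
    """Keep one copy per distinct chunk, keyed by content hash."""
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}
```

The point of the sliding window is that segment boundaries follow the content rather than fixed offsets, which is why variable-length deduplication usually copes better with data that has been edited or shifted.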

Compression is the encoding of data to reduce its size; it can also be applied to data once it is deduplicated to further reduce storage consumption. Deduplication and compression are different and complementary – for example, data may deduplicate well but compress poorly.
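A small sketch of the combination (using Python's standard zlib purely as an example compressor): duplicates are eliminated first, and only the surviving unique chunks are compressed.

```python
import hashlib
import zlib

def dedupe_then_compress(chunks):
    """Deduplicate first, then compress only the unique chunks that remain.
    The two savings are largely independent: data can dedupe well yet
    compress poorly, and vice versa."""
    stored = {}
    for chunk in chunks:
        key = hashlib.sha256(chunk).hexdigest()
        if key not in stored:
            stored[key] = zlib.compress(chunk)
    logical = sum(len(c) for c in chunks)             # what the user wrote
    physical = sum(len(c) for c in stored.values())   # what actually lands on disk
    return stored, logical, physical
```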

In addition, deduplicating data can be performed as an in-line process; i.e., as the data is being written to the target, or post-processed once the data has been written and is at rest on disk.
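The distinction can be caricatured as follows (a sketch only; real systems also have to worry about hash collisions, metadata and garbage collection): in-line deduplication filters duplicates on the write path, while post-processing lands the data first and tidies up afterwards.

```python
import hashlib

def write_inline(index, name: str, chunk: bytes):
    """In-line dedupe: the chunk is hashed and checked as it is written,
    so a duplicate never consumes space on the target."""
    key = hashlib.sha256(chunk).hexdigest()
    index["store"].setdefault(key, chunk)   # first copy is kept, later ones are not
    index["refs"][name] = key               # every write still gets a pointer

def dedupe_post_process(landed):
    """Post-process dedupe: data was already written in full; a later pass
    scans it at rest and collapses duplicates into shared copies."""
    index = {"store": {}, "refs": {}}
    for name, chunk in landed.items():
        write_inline(index, name, chunk)    # same logic, applied after the fact
    return index
```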

An example of deduplication

As a simplified example of deduplication, let's say we have two objects or files made up of blocks. These are depicted in the diagram below. The objects or files can be variable or window-based segments, fixed blocks, or collections of files – the same principle applies. Each object in this example contains blocks identified here by letters of the alphabet.

Sub-file level data deduplication (SNIA)

The first object is made up of blocks ABCZDYEF, the second of blocks ABDGHJECF; therefore the common blocks are ABCDEF. The original data would have taken eight plus nine blocks, for a total of 17 blocks. The deduplicated data requires just two unique blocks from the first object (Z and Y), three from the second (G, H and J), and six shared blocks, plus some overhead for the pointers and other data needed to rehydrate it, for a total of 11 blocks.

If we add a third file, say a modification of the first file edited to XBCZDYEF, then only one new block (X) is required. Twelve blocks, plus pointers, are sufficient to store all the information needed for these three different objects. Compression can shrink the deduplicated data further; depending on the type of data, a further reduction of up to 50 per cent of the original size can typically be achieved. The original 17 blocks in this example would then come down to six or so.
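The arithmetic in this example can be checked with a few lines of Python, with each letter standing in for one block and pointer overhead ignored:

```python
obj1 = list("ABCZDYEF")    # first object: 8 blocks
obj2 = list("ABDGHJECF")   # second object: 9 blocks
obj3 = list("XBCZDYEF")    # third object: the first after an edit (A -> X)

original = len(obj1) + len(obj2)                       # 17 blocks stored in full
deduped = len(set(obj1) | set(obj2))                   # 11 unique blocks
with_third = len(set(obj1) | set(obj2) | set(obj3))    # 12 unique blocks

print(original, deduped, with_third)                   # -> 17 11 12
```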
