Go forth and deduplicate

Will it benefit my data centre?

Deep dive El Reg has teamed up with the Storage Networking Industry Association (SNIA) for a series of deep dive articles. Each month, the SNIA will deliver a comprehensive introduction to basic storage networking concepts. This month the SNIA examines data deduplication.

This article, derived from existing SNIA material, describes the different places where deduplication can be done; explores the differences between compression, single-instance files, and deduplication; and looks at the different ways sub-file level deduplication can be carried out. It also explains what kind of data is well-suited to deduplication, and what is not.

Introduction

Data deduplication has become a very popular topic and commercial offering in the storage industry because of its potential for very large reductions in acquisition and running costs, as well as the efficiency gains it offers. Data growth is explosive: according to a recent Gartner survey, nearly half of all data-centre managers rate it as one of their top three challenges. Data deduplication offers an easy route to relieving the pressure on storage budgets and coping with that growth.

While seen as primarily a capacity-optimisation technology, deduplication also brings performance benefits – with less data stored, there is less data to move.

Deduplication technologies are offered at various points in the data life cycle: at the source, on data in transit, and on data at rest at the storage destination. They are also being applied at all storage tiers: backup, archive, and primary storage.

Deduplication explained

Regardless of what method is used, deduplication (often shortened to "dedupe") is the process of recognizing identical data at various levels of granularity, and replacing it with pointers to shared copies in order to save both storage space and the bandwidth required to move this data.
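
To make the mechanics concrete, here is a minimal Python sketch of a hash-indexed block store. The DedupStore name and the 4KB block size are our illustrative choices, not any vendor's design: identical blocks are stored once, and each write returns the pointers needed to rehydrate the original data.

    import hashlib

    class DedupStore:
        """Toy deduplicating block store: identical blocks are kept once."""

        def __init__(self):
            self.blocks = {}  # content hash -> block bytes, stored once

        def write(self, data, block_size=4096):
            """Split data into fixed-size blocks and return the pointers
            (content hashes) needed to rehydrate it later."""
            pointers = []
            for i in range(0, len(data), block_size):
                block = data[i:i + block_size]
                digest = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(digest, block)  # only new blocks cost space
                pointers.append(digest)
            return pointers

        def read(self, pointers):
            """Reconstitute the data; the reader never knows it was deduplicated."""
            return b"".join(self.blocks[p] for p in pointers)

    store = DedupStore()
    ptrs = store.write(b"A" * 8192 + b"B" * 4096)  # three blocks written, two unique
    assert store.read(ptrs) == b"A" * 8192 + b"B" * 4096
    assert len(store.blocks) == 2

Real products add reference counting for deletes, persistent indexes and collision handling, but the pointer-replacement principle is the same.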

The deduplication process includes tracking and identifying all the eliminated duplicate data, and identifying and storing only data that is new and unique. The end user of the data should be completely unaware that the data may have been deduplicated and reconstituted many times in its life.

There are different ways of deduplicating data. Single Instance Storage (SIS) is a form of deduplication at the file or object level; duplicate copies are replaced by one instance with pointers to the original file or object.
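
In the same hashing style, SIS can be sketched as follows (the helper names are hypothetical): the whole file is the unit of deduplication, so only byte-identical files share storage.

    import hashlib

    instances = {}   # file-content hash -> the single stored instance
    catalogue = {}   # file name -> pointer to its instance

    def store_file(name, contents):
        digest = hashlib.sha256(contents).hexdigest()
        instances.setdefault(digest, contents)  # one copy per unique file
        catalogue[name] = digest                # duplicates become pointers

    store_file("report-v1.doc", b"quarterly figures ...")
    store_file("copy-of-report.doc", b"quarterly figures ...")  # no new space used

The weakness of SIS is that a single changed byte makes the whole file unique again, which is what motivates the sub-file techniques described next.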

Sub-file data deduplication operates at a more granular level than the file or object. Two flavours of this technology are commonly found: fixed-block deduplication, where data is broken into fixed-length sections or blocks, and variable-length deduplication, where data is broken into segments based on a sliding window.
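
The difference between the two flavours can be sketched as below. The toy rolling hash and cut-point mask are our simplifications; production systems typically use Rabin fingerprints or similar.

    def fixed_blocks(data, size=8):
        """Fixed-block: cut at the same offsets regardless of content."""
        return [data[i:i + size] for i in range(0, len(data), size)]

    def variable_chunks(data, window=4, mask=0x0F):
        """Variable-length: slide a window over the data (bytes) and cut
        wherever a rolling hash of the window matches a pattern, so
        boundaries follow the content rather than fixed offsets."""
        chunks, start = [], 0
        for i in range(window, len(data) + 1):
            h = sum(data[i - window:i]) & 0xFFFF  # toy rolling hash
            if h & mask == 0:                     # content-defined cut point
                chunks.append(data[start:i])
                start = i
        chunks.append(data[start:])
        return [c for c in chunks if c]

The practical consequence: insert one byte near the front of a file and every subsequent fixed block shifts and fails to match, whereas content-defined boundaries realign within a chunk or two, preserving most of the deduplication.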

Compression is the encoding of data to reduce its size; it can also be applied to data once it is deduplicated to further reduce storage consumption. Deduplication and compression are different and complementary – for example, data may deduplicate well but compress poorly.
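
A sketch of the dedupe-then-compress pipeline, using Python's standard zlib for the compression step:

    import hashlib, zlib

    def dedupe_and_compress(blocks):
        """Deduplicate first, then compress each surviving unique block;
        the two savings are independent of each other."""
        unique = {}
        for block in blocks:
            digest = hashlib.sha256(block).hexdigest()
            if digest not in unique:
                unique[digest] = zlib.compress(block)
        return unique

    blocks = [b"hello world " * 100, b"hello world " * 100, b"incompressible??"]
    stored = dedupe_and_compress(blocks)  # two unique blocks, each compressed

Repeated blocks (virtual machine images, backup sets) deduplicate well even when each block is incompressible; conversely, a single large text file may compress well while containing no duplicate blocks at all.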

In addition, deduplicating data can be performed as an in-line process; i.e., as the data is being written to the target, or post-processed once the data has been written and is at rest on disk.
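
In terms of the DedupStore sketch above, the two modes might look like this (again illustrative, not any product's API):

    def write_inline(store, data):
        """In-line: data is deduplicated in the write path, so duplicate
        blocks never land on the target at all."""
        return store.write(data)

    def post_process(raw_disk, store):
        """Post-process: data is first written verbatim; a background task
        later folds it into the dedup store and reclaims the raw space."""
        pointers = {name: store.write(data) for name, data in raw_disk.items()}
        raw_disk.clear()  # free the undeduplicated copies
        return pointers

In-line deduplication saves the most space and bandwidth up front but puts hashing on the write path; post-process keeps writes fast at the cost of temporarily storing the duplicates.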

An example of deduplication

As a simplified example of deduplication, let's say we have two objects or files made up of blocks. These are depicted in the diagram below. The objects or files can be variable or window-based segments, fixed blocks, or collections of files – the same principle applies. Each object in this example contains blocks identified here by letters of the alphabet.

Diagram: Sub-file level data deduplication (SNIA)

The first object is made up of blocks ABCZDYEF, the second of blocks ABDGHJECF; the common blocks are therefore ABCDEF. The original data would have taken eight plus nine blocks, 17 in total. The deduplicated data requires just two blocks (Z and Y) and three blocks (G, H and J) for those unique to each object, plus the six common blocks and some overhead for pointers and other data to help rehydrate, for a total of 11 blocks.

If we add a third file, say a modification of the first file after an edit to XBCZDYEF, then only one new block (X) is required. Twelve blocks plus pointers are sufficient to store all the information needed for these three different objects. Compression can further reduce the deduplicated data; depending on the type of data, a further reduction of up to 50 per cent is typical. The original 17 blocks in this example would then shrink to six or so.
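
The arithmetic of the example is easy to verify in a few lines, with each letter standing for one block:

    objects = ["ABCZDYEF", "ABDGHJECF", "XBCZDYEF"]

    total = sum(len(o) for o in objects)  # 25 blocks written in all
    unique = set().union(*objects)        # letters seen anywhere
    print(total, len(unique))             # 25 12: twelve unique blocks stored
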
