
Go forth and deduplicate

Will it benefit my data centre?

Deep dive El Reg has teamed up with the Storage Networking Industry Association (SNIA) for a series of deep dive articles. Each month, the SNIA will deliver a comprehensive introduction to basic storage networking concepts. This month the SNIA examines data deduplication.

This article, derived from existing SNIA material, describes the different places where deduplication can be done; explores the differences between compression, single-instance files, and deduplication; and looks at the different ways sub-file level deduplication can be carried out. It also explains what kind of data is well-suited to deduplication, and what is not.

Introduction

Data deduplication has become a very popular topic and commercial offering in the storage industry because of its potential for very large reductions in acquisition and running costs, as well as the increases in efficiency that it offers. With the explosive growth of data, nearly half of all data-centre managers rate data growth as one of their top three challenges. According to a recent Gartner survey, data deduplication offers an easy route for relieving pressures on storage budgets and coping with additional growth.

While seen as primarily a capacity-optimisation technology, deduplication also brings performance benefits – with less data stored, there is less data to move.

Deduplication technologies are offered at various points in the data life cycle, from deduplication at the source, through deduplication of data in transit, to deduplication of data at rest at the storage destination. The technologies are also being applied at all storage tiers: backup, archive, and primary storage.

Deduplication explained

Regardless of what method is used, deduplication (often shortened to "dedupe") is the process of recognizing identical data at various levels of granularity, and replacing it with pointers to shared copies in order to save both storage space and the bandwidth required to move this data.

The deduplication process includes tracking and identifying all the eliminated duplicate data, and identifying and storing only data that is new and unique. The end user of the data should be completely unaware that the data may have been deduplicated and reconstituted many times in its life.

There are different ways of deduplicating data. Single Instance Storage (SIS) is a form of deduplication at the file or object level; duplicate copies are replaced by one instance with pointers to the original file or object.

Sub-file data deduplication operates at a more granular level than the file or object. Two flavours of this technology are commonly found: fixed-block deduplication, where data is broken into fixed-length sections or blocks, and variable-length segmentation, where data is deduplicated based on a sliding window.
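
To make the mechanics concrete, here is a minimal Python sketch of fixed-block deduplication. It is an illustration only, not drawn from the SNIA material; the 4KB block size and SHA-256 fingerprints are assumptions chosen for the example.

import hashlib

BLOCK_SIZE = 4096  # assumed fixed block length for this sketch

def dedupe_fixed_blocks(data: bytes):
    """Split data into fixed-length blocks and keep one copy of each unique block.

    Returns the shared block store (fingerprint -> block) and the ordered list
    of fingerprints (the 'pointers') needed to rebuild the original stream.
    """
    store = {}
    pointers = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store each unique block only once
        pointers.append(digest)          # every occurrence becomes a pointer
    return store, pointers

def rehydrate(store, pointers):
    """Reassemble the original data from the pointers and the shared block store."""
    return b"".join(store[d] for d in pointers)

Variable-length segmentation differs only in how the block boundaries are chosen: a content-defined sliding window sets the cut points, so an insertion near the start of a file does not shift every block that follows it.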

Compression is the encoding of data to reduce its size; it can also be applied to data once it is deduplicated to further reduce storage consumption. Deduplication and compression are different and complementary – for example, data may deduplicate well but compress poorly.
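
As a small illustration of how the two combine (again a sketch built on the example above, not SNIA material), the unique blocks left over after deduplication can simply be compressed before they are written out:

import zlib

def compress_store(store):
    """Compress each unique block after deduplication for further savings."""
    return {digest: zlib.compress(block) for digest, block in store.items()}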

In addition, deduplication can be performed as an in-line process, i.e. as the data is being written to the target, or as a post-process once the data has been written and is at rest on disk.

An example of deduplication

As a simplified example of deduplication, let's say we have two objects or files made up of blocks. These are depicted in the diagram below. The objects or files can be variable or window-based segments, fixed blocks, or collections of files – the same principle applies. Each object in this example contains blocks identified here by letters of the alphabet.

Sub-file level data deduplication (SNIA diagram)

The first object is made up of blocks ABCZDYEF, the second of blocks ABDGHJECF; therefore the common blocks are ABCDEF. The original data would have taken eight plus nine blocks, for a total of 17 blocks. The deduplicated data requires just two blocks (Z and Y) plus three blocks (G, H and J) for the unique blocks in each object, and six for common blocks, plus some overhead for pointers and other data to help rehydrate, for a total of 11 blocks.

If we add a third file, say a modification of the first file after an edit to XBCZDYEF, then only one new block (X) is required: twelve blocks plus pointers are sufficient to store all the information needed for these three different objects. Compression can further reduce the deduplicated data; depending on the type of data, a further reduction of up to 50 per cent is typical. The original 17 blocks in this example would then be reduced to six or so blocks.
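
As a sanity check on the arithmetic, the block counts above can be reproduced in a few lines of Python (a toy illustration that treats each letter as one block):

object1 = list("ABCZDYEF")     # 8 blocks
object2 = list("ABDGHJECF")    # 9 blocks
object3 = list("XBCZDYEF")     # the first object after the edit

original_total = len(object1) + len(object2)       # 17 blocks before deduplication
unique_after_two = set(object1) | set(object2)     # each distinct block stored once
unique_after_three = unique_after_two | set(object3)

print(original_total)              # 17
print(len(unique_after_two))       # 11, plus pointer overhead
print(len(unique_after_three))     # 12: the edit adds only the new block X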
