Go forth and deduplicate

Will it benefit my data centre?

Deep dive El Reg has teamed up with the Storage Networking Industry Association (SNIA) for a series of deep dive articles. Each month, the SNIA will deliver a comprehensive introduction to basic storage networking concepts. This month the SNIA examines data deduplication.

This article, derived from existing SNIA material, describes the different places where deduplication can be done; explores the differences between compression, single-instance files, and deduplication; and looks at the different ways sub-file level deduplication can be carried out. It also explains what kind of data is well-suited to deduplication, and what is not.

Introduction

Data deduplication has become a very popular topic and commercial offering in the storage industry because of its potential for very large reductions in acquisition and running costs, as well as the increases in efficiency that it offers. With the explosive growth of data, nearly half of all data-centre managers rate that growth as one of their top three challenges. According to a recent Gartner survey, data deduplication offers an easy route for relieving pressures on storage budgets and coping with additional growth.

While seen as primarily a capacity-optimisation technology, deduplication also brings performance benefits – with less data stored, there is less data to move.

Deduplication technologies are offered at various points in the data life cycle: deduplication at the source, deduplication of data in transit, and deduplication of data at rest at the storage destination. The technologies are also being applied at all storage tiers: backup, archive, and primary storage.

Deduplication explained

Regardless of what method is used, deduplication (often shortened to "dedupe") is the process of recognizing identical data at various levels of granularity, and replacing it with pointers to shared copies in order to save both storage space and the bandwidth required to move this data.

The deduplication process includes tracking and identifying all the eliminated duplicate data, and identifying and storing only data that is new and unique. The end user of the data should be completely unaware that the data may have been deduplicated and reconstituted many times in its life.

There are different ways of deduplicating data. Single Instance Storage (SIS) is a form of deduplication at the file or object level; duplicate copies are replaced by one instance with pointers to the original file or object.
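
To make the idea concrete, here is a minimal sketch of single-instance storage (not SNIA's code, just an illustration using Python dictionaries and SHA-256 content hashes; the class and method names are invented for this example). Each unique file body is stored once, and duplicate copies become pointers referring to it.

    import hashlib

    class SingleInstanceStore:
        """Toy single-instance store: identical file contents are kept only once."""

        def __init__(self):
            self.blobs = {}   # content hash -> file bytes (stored once)
            self.files = {}   # file name -> content hash (the "pointer")

        def put(self, name, data):
            digest = hashlib.sha256(data).hexdigest()
            if digest not in self.blobs:         # first copy: store the bytes
                self.blobs[digest] = data
            self.files[name] = digest            # every copy: store only a pointer

        def get(self, name):
            return self.blobs[self.files[name]]  # rehydrate via the pointer

    store = SingleInstanceStore()
    store.put("report_v1.doc", b"quarterly results ...")
    store.put("copy_of_report.doc", b"quarterly results ...")   # an exact duplicate
    print(len(store.blobs))   # 1 -> only one instance of the content is stored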

Sub-file data deduplication operates at a more granular level than the file or object. Two flavours of this technology are commonly found: fixed-block deduplication, where data is broken into fixed-length sections or blocks, and variable-length segments, where data is deduplicated based on a sliding window.
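
The two sub-file approaches differ mainly in how block boundaries are chosen, as the simplified sketch below suggests: fixed-block chunking cuts every N bytes, while a crude content-defined chunker slides a window over the data and cuts wherever the window's hash matches a pattern. The window size and mask here are arbitrary illustration values; real products use more robust rolling hashes such as Rabin fingerprints.

    import hashlib

    def fixed_chunks(data, size=4):
        """Fixed-block deduplication unit: cut the stream every 'size' bytes."""
        return [data[i:i + size] for i in range(0, len(data), size)]

    def variable_chunks(data, window=3, mask=0x0F):
        """Crude content-defined chunking: boundaries depend on the data itself,
        so an insertion near the start does not shift every later boundary."""
        chunks, start = [], 0
        for i in range(window, len(data) + 1):
            h = int.from_bytes(hashlib.md5(data[i - window:i]).digest()[:2], "big")
            if h & mask == 0:                   # content-derived cut point
                chunks.append(data[start:i])
                start = i
        if start < len(data):
            chunks.append(data[start:])
        return chunks

    data = b"ABCZDYEF" * 3
    print(fixed_chunks(data))
    print(variable_chunks(data))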

Compression is the encoding of data to reduce its size; it can also be applied to data once it is deduplicated to further reduce storage consumption. Deduplication and compression are different and complementary – for example, data may deduplicate well but compress poorly.
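
As a rough illustration of how the two combine (a toy sketch, with zlib standing in for whatever compressor a product might actually use), the snippet below deduplicates a list of blocks first and then compresses only the unique copies it has to keep:

    import hashlib, zlib

    def dedupe_then_compress(blocks):
        """Keep each unique block once, then compress the unique copies."""
        unique, refs = {}, []
        for block in blocks:
            digest = hashlib.sha256(block).hexdigest()
            unique.setdefault(digest, block)
            refs.append(digest)                 # one pointer per original block
        stored = sum(len(zlib.compress(b)) for b in unique.values())
        return stored, refs

    blocks = [b"A" * 64, b"A" * 64, b"payload-1" * 8, b"payload-1" * 8]
    stored, refs = dedupe_then_compress(blocks)
    print("original bytes:", sum(len(b) for b in blocks))       # 272
    print("stored bytes after dedupe + compression:", stored)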

In addition, deduplicating data can be performed as an in-line process; i.e., as the data is being written to the target, or post-processed once the data has been written and is at rest on disk.
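
The difference is one of timing rather than mechanism. In this simplified sketch (the function names and the in-memory "disk" are invented for illustration), the in-line path hashes each block before it lands on the target, while the post-process path lets blocks land first and deduplicates them in a later sweep:

    import hashlib

    pool = {}           # hash -> block: the deduplicated store on the target
    landing_area = []   # blocks written to disk before any deduplication

    def inline_write(block):
        """In-line: the duplicate is caught on the write path, before it is stored."""
        pool.setdefault(hashlib.sha256(block).hexdigest(), block)

    def post_process():
        """Post-process: data is already at rest; a later sweep deduplicates it."""
        while landing_area:
            inline_write(landing_area.pop())

    inline_write(b"block-1")
    inline_write(b"block-1")                  # duplicate never reaches the pool twice
    landing_area.extend([b"block-2", b"block-2"])
    post_process()
    print(len(pool))                          # 2 unique blocks retained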

An example of deduplication

As a simplified example of deduplication, let's say we have two objects or files made up of blocks, as depicted in the diagram below. The units could be variable, window-based segments, fixed blocks, or collections of files – the same principle applies. Each object in this example contains blocks identified here by letters of the alphabet.

Sub-file level data deduplication (SNIA diagram)

The first object is made up of blocks ABCZDYEF, the second of blocks ABDGHJECF; therefore the common blocks are ABCDEF. The original data would have taken eight plus nine blocks, for a total of 17 blocks. The deduplicated data requires just two blocks (Z and Y) plus three blocks (G, H and J) for the unique blocks in each object, and six for common blocks, plus some overhead for pointers and other data to help rehydrate, for a total of 11 blocks.

If we add a third file, say a modification of the first file after an edit to XBCZDYEF, then only one new block (X) is required. Twelve blocks and pointers are sufficient to store all the information needed for these three different objects. Compression can further reduce the deduplicated data; depending on the type of data, a further reduction of up to 50 per cent is typical. The original 17 blocks in this example would then be reduced to six or so blocks.
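
The arithmetic above is easy to check with a few lines of Python; the letters stand in for block contents, and the pointer overhead mentioned earlier is ignored:

    obj1 = list("ABCZDYEF")        # 8 blocks
    obj2 = list("ABDGHJECF")       # 9 blocks

    original = len(obj1) + len(obj2)        # 17 blocks before deduplication
    unique = set(obj1) | set(obj2)          # A B C D E F G H J Y Z
    print(original, len(unique))            # 17 11

    # A third object, the first file after an edit, adds only one new block (X).
    obj3 = list("XBCZDYEF")
    unique |= set(obj3)
    print(len(unique))                      # 12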
