
Deduplication: a power-hungry way to streamline storage

Data wants to stay single


Windows Server 8 is coming, and it is bringing storage enhancements with it.

Data deduplication in particular has caught my eye: it is something I have wanted on my Windows file servers for a long time.

The technology is nothing new: ZFS has had deduplication for a while now, and it is also available (experimentally) with Linux’s Btrfs.

Also worth considering is Opendedup, which brings deduplication to both Windows and Linux via its SDFS file system.

The quick and dirty on deduplication is that it is an umbrella term for a set of technologies that allow you to store only one copy of a given piece of data on your hard drive, thus saving space and potentially speeding file writes. Essentially, it is single instance storage.

Deduplication can be done at the file level, the block level or the byte level. File and block level are the most common.
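To make that concrete, here is a minimal sketch of block-level single-instance storage in Python: each block is stored once, keyed by its SHA-256 hash, and files become lists of hash references. The 4KB block size and the in-memory store are my own illustrative assumptions, not how any shipping filer actually does it.

    # Minimal sketch of block-level single-instance storage.
    # Block size and the in-memory dict are illustrative assumptions.
    import hashlib

    BLOCK_SIZE = 4096   # assumed fixed block size
    block_store = {}    # hash -> block bytes, each unique block stored once

    def write_file(path):
        """Store a file's blocks, returning a list of hash references."""
        refs = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()
                # Only keep the block if we have not seen it before.
                block_store.setdefault(digest, block)
                refs.append(digest)
        return refs

    def read_file(refs):
        """Reassemble a file from its block references."""
        return b"".join(block_store[digest] for digest in refs)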

Need for speed

Deduplication can be done synchronously (as the writes happen) or asynchronously (as a scheduled job during quiet hours).
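The asynchronous flavour amounts to a scheduled crawl over existing data. Here is a rough sketch of what such a quiet-hours job might measure, again in Python, and again with an assumed 4KB block size and a made-up path.

    # Rough sketch of an asynchronous (post-process) deduplication scan:
    # walk existing files, hash their blocks, and report how much space
    # single-instancing would reclaim. Block size and path are assumptions.
    import hashlib
    import os

    BLOCK_SIZE = 4096

    def dedup_scan(root):
        seen = set()
        total = duplicate = 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    while True:
                        block = f.read(BLOCK_SIZE)
                        if not block:
                            break
                        total += len(block)
                        digest = hashlib.sha256(block).digest()
                        if digest in seen:
                            duplicate += len(block)  # would be stored once
                        else:
                            seen.add(digest)
        return total, duplicate

    # e.g. total, dup = dedup_scan("/srv/fileshare")  # hypothetical share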

Synchronous deduplication takes a lot of CPU power. So much power that high-end filer manufacturers are always clamouring for the fastest possible Xeons, and are pushing forward with research into making use of GPGPU technology.

It’s easy to imagine why. Try to compress 5GB of text files into a zip ball. Now, picture your hard drive as a half-petabyte zip ball that you are reading from and writing to at 10Gbit/s. Processing power is suddenly very important.
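Some back-of-envelope arithmetic makes the point. Assuming a 4KB block size (my assumption, purely for illustration), a filer taking writes at line rate has to hash and index roughly 305,000 blocks every second:

    # Why inline deduplication eats CPU: at 10Gbit/s every incoming block
    # must be hashed and looked up before it hits disk.
    line_rate_bits = 10 * 10**9            # 10Gbit/s
    bytes_per_second = line_rate_bits / 8  # 1.25GB/s
    block_size = 4096                      # assumed block size

    blocks_per_second = bytes_per_second / block_size
    print(f"{blocks_per_second:,.0f} blocks to hash and index per second")
    # roughly 305,000 hash computations and index lookups every second,
    # before any compression or metadata work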

Despite this, deduplication is a critical technology. Storage demand has consistently outpaced capacity growth. What’s more, while hard drive capacity has trebled, network I/O and disk speeds have not.

This has potentially disastrous implications for both Raid rebuild times and backups. Deduplication can reduce the amount of information that has to be rebuilt or backed up, helping to ensure both processes complete in timeframes compatible with business needs.

Risky business

This is assuming that you are backing up the deduplicated blocks instead of the full file set. There are arguments for and against both.

Backing up the deduplicated blocks means less backup media is required and less bandwidth has to be set aside to perform the backups. On the other hand, it can increase restore times dramatically, as the entire set of backup media is now hopelessly interdependent.

Most people won’t back up data as deduplicated blocks – it is just too risky. The loss of one piece of backup media can render data irretrievable on all other media. This means budgeting backup bandwidth for the fully undeduplicated data to run every night.

You also have to budget your storage I/O bandwidth for the undeduplicated data size, not the size as it is stored on disk. The amount of data on disk may change only by a few dozen gigabytes a day, but the total storage I/O off that system could be measured in dozens of terabytes.
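To put an illustrative number on it (the 20TB data set and the eight-hour window are my own assumptions, not anyone's real figures), budgeting for the undeduplicated size looks like this:

    # Illustrative arithmetic for budgeting backup bandwidth against the
    # undeduplicated data size rather than what sits on disk.
    data_tb = 20          # assumed undeduplicated data set
    window_hours = 8      # assumed nightly backup window

    bytes_total = data_tb * 10**12
    seconds = window_hours * 3600
    required = bytes_total / seconds  # bytes per second

    print(f"Sustained throughput needed: {required / 10**9:.2f} GB/s "
          f"({required * 8 / 10**9:.1f} Gbit/s)")
    # ~0.69 GB/s, or about 5.6Gbit/s, just for the nightly backup run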

Mind the gap

Deduplication is necessary, increasingly so as the gap between storage demand and availability grows. But it doesn’t help decrease the need for network bandwidth, and it imposes a hefty processing requirement.

My next filer looks like it’s going to have a pair of top-end Xeons and 10GbE. In fact it will need two 10GbE ports, as I need to allow for MPIO.

Factor in sizing the filer to deal with demand peaks, plus the ability to support snapshots, previous versions and other fun features, and the thought of planning my next storage refresh gives me a headache.

Difficult or no, time must be taken to do the research. The cost of storage and its attendant networking is such that few among us can afford to get it wrong. ®
