Original URL: https://www.theregister.com/2011/10/06/storage_deduplication/

Deduplication: a power-hungry way to streamline storage

Data wants to stay single

By Trevor Pott and Iain Thomson

Posted in OSes, 6th October 2011 13:00 GMT

Windows Server 8 is coming, and it is bringing storage enhancements with it.

Data deduplication in particular has caught my eye: it is something I have wanted on my Windows file servers for a long time.

This technology is nothing new: ZFS has had deduplication for a while now, and it is (experimentally) available in Linux’s Btrfs as well.

Also worth considering is Opendedup, which brings deduplication to both Windows and Linux via SDFS.

The quick and dirty on deduplication is that it is an umbrella term for a set of technologies that allow you to store only one copy of a given piece of data on your hard drive, thus saving space and potentially speeding file writes. Essentially, it is single instance storage.

Deduplication can be done at the file level, the block level or the byte level. File and block level are the most common.
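To make that concrete, here is a minimal sketch of block-level deduplication in Python. It has nothing to do with how Windows Server 8 or ZFS actually implement it, and the 4KB block size, SHA-256 hash and class names are my own illustrative choices: carve data into fixed-size blocks, hash each one, and only keep a block the first time its hash turns up.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size, purely for illustration


class DedupStore:
    """Toy single-instance store: each unique block is kept once,
    and files are just lists of block hashes pointing at it."""

    def __init__(self):
        self.blocks = {}   # hash -> block bytes (stored once)
        self.files = {}    # filename -> list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Only store the block if we have not seen this hash before
            self.blocks.setdefault(digest, block)
            hashes.append(digest)
        self.files[name] = hashes

    def read(self, name):
        return b"".join(self.blocks[h] for h in self.files[name])


store = DedupStore()
store.write("a.vhd", b"A" * 8192 + b"B" * 4096)
store.write("b.vhd", b"A" * 8192)   # shares its blocks with a.vhd
print(len(store.blocks))            # 2 unique blocks stored, not 5
```

File-level deduplication is the same trick applied to whole-file hashes; byte-level variants chase matches at even finer granularity, at a correspondingly higher cost.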

Need for speed

It can be done synchronously (as the writes happen) or asynchronously (as a scheduled job during quiet hours).

Synchronous deduplication takes a lot of CPU power: so much that high-end filer manufacturers are always clamouring for the fastest Xeons available, and are pushing forward with research into GPGPU technology.

It’s easy to imagine why. Try to compress 5GB of text files into a zip ball. Now, picture your hard drive as a half-petabyte zip ball that you are reading from and writing to at 10Gbit/s. Processing power is suddenly very important.
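Some back-of-envelope sums show the scale of the problem (the 4KB block size and roughly 40 bytes of index entry per block are my assumptions, not anyone’s product spec): at 10Gbit/s the filer has to hash and look up hundreds of thousands of blocks every second, against an index that is itself several terabytes in size and far too large to keep in RAM.

```python
# Back-of-envelope sums for inline dedup on a filer like the one above.
# Block size and index overhead are assumptions, purely for illustration.
line_rate  = 10e9 / 8        # 10Gbit/s expressed in bytes per second
block_size = 4 * 1024        # assumed 4KB deduplication blocks
store_size = 500 * 1e12      # roughly half a petabyte of data

blocks_per_second = line_rate / block_size
total_blocks      = store_size / block_size
index_bytes       = total_blocks * 40   # ~32-byte hash plus pointer per block

print(f"{blocks_per_second:,.0f} blocks to hash and look up every second")
print(f"{index_bytes / 1e12:.1f} TB of hash index to search against")
```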

Despite this, deduplication is a critical technology. Storage demand has consistently outpaced capacity growth. What’s more, while hard drive capacities have trebled, network I/O and disk speeds have not kept pace.

This has potentially disastrous implications for both Raid rebuild times and backups. Deduplication reduces the amount of data that has to be rebuilt or backed up, helping to ensure both processes complete in timeframes compatible with business needs.

Risky business

This is assuming that you are backing up the deduplicated blocks instead of the full file set. There are arguments for and against both.

Backing up the deduplicated blocks means less backup media is required and less bandwidth has to be set aside to perform the backups. On the other hand, it can increase restore times dramatically, as the entire set of backup media is now hopelessly interdependent.
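A toy example shows how quickly that interdependence bites (the tape layout and block names here are invented purely for illustration): once shared blocks are scattered across tapes, losing a single tape can render files unrestorable even though most of their blocks survive elsewhere.

```python
# Hypothetical illustration: which files survive if one backup tape is lost?
# Tape contents and block names are invented for the example.
tapes = {
    "tape1": {"blk_a", "blk_b"},
    "tape2": {"blk_c"},
    "tape3": {"blk_d", "blk_e"},
}
files = {
    "payroll.xlsx":  ["blk_a", "blk_c"],   # spans tape1 and tape2
    "contract.docx": ["blk_a", "blk_d"],   # spans tape1 and tape3
    "report.pdf":    ["blk_c", "blk_e"],   # spans tape2 and tape3
}

def restorable(filename, lost_tape):
    # A file can be restored only if every block it references
    # still exists on some surviving piece of media.
    surviving = set().union(*(blocks for tape, blocks in tapes.items()
                              if tape != lost_tape))
    return all(block in surviving for block in files[filename])

# Lose just tape1 and two of the three files become unrestorable.
for name in files:
    print(name, "OK" if restorable(name, "tape1") else "LOST")
```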

Most people won’t back up data as deduplicated blocks – it is just too risky. The loss of one piece of backup media can render data irretrievable on all other media. This means budgeting enough backup bandwidth for the full, undeduplicated data set to run every night.

You also have to budget your storage I/O bandwidth for the undeduplicated data size, not the size as it is stored on disk. The amount of data on disk may change only by a few dozen gigabytes a day, but the total storage I/O off that system could be measured in dozens of terabytes.
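Some purely illustrative numbers (mine, not measurements from any real system) show why the two budgets are so different: the dedup store might grow by a few dozen gigabytes a day, while the clients reading and writing through the filer still move the full, undeduplicated data over the wire.

```python
# Illustrative figures only: the point is the ratio, not the numbers.
logical_io_per_day = 20 * 1e12   # 20TB of reads/writes as clients see them
new_unique_bytes   = 50 * 1e9    # 50GB of genuinely new blocks hitting disk

print(f"I/O and network budget: {logical_io_per_day / 1e12:.0f} TB/day")
print(f"Growth of the dedup store: {new_unique_bytes / 1e9:.0f} GB/day")
print(f"Ratio: {logical_io_per_day / new_unique_bytes:.0f}x")
```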

Mind the gap

Deduplication is necessary, increasingly so as the gap between storage demand and availability grows. But it doesn’t help decrease the need for network bandwidth, and it imposes a hefty processing requirement.

My next filer looks like it’s going to have a pair of top-end Xeons and 10GbE. It will need two 10GbE ports, as I need to allow for MPIO.

Factor in sizing the filer to cope with demand peaks, the ability to support snapshots, previous versions and other fun features, and the thought of planning my next storage refresh gives me a headache.

Difficult or no, time must be taken to do the research. The cost of storage and its attendant networking is such that few among us can afford to get it wrong. ®