
Deduplication: a power-hungry way to streamline storage

Data wants to stay single


Windows Server 8 is coming, and it is bringing storage enhancements with it.

Data deduplication in particular has caught my eye: it is something I have wanted on my Windows file servers for a long time.

This technology is nothing new: ZFS has had deduplication for a while now, and it is (experimentally) available with Linux’s Btrfs as well.

Worth consideration too is Opendedup, bringing deduplication to both Windows and Linux via SDFS.

The quick and dirty on deduplication is that it is an umbrella term for a set of technologies that allow you to store only one copy of a given piece of data on your hard drive, thus saving space and potentially speeding file writes. Essentially, it is single instance storage.

Deduplication can be done at the file level, the block level or the byte level. File and block level are the most common.
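To make the block-level case concrete, here is a minimal sketch in Python: data is split into fixed-size blocks, each block is fingerprinted with a hash, and only blocks with an unseen fingerprint are physically stored. The 4KB block size and SHA-256 fingerprint are illustrative assumptions, not what any particular filer uses.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, purely for illustration


def dedup_store(data: bytes, store: dict) -> list:
    """Store data as deduplicated blocks; return the 'recipe' of hashes
    needed to reconstruct it later."""
    recipe = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # keep only one copy of each unique block
            store[digest] = block
        recipe.append(digest)
    return recipe


def dedup_read(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its block recipe."""
    return b"".join(store[digest] for digest in recipe)


if __name__ == "__main__":
    store = {}
    pattern = bytes(range(256)) * 16     # exactly one 4KB block's worth
    payload = pattern * 100              # 400KB made of identical blocks
    recipe = dedup_store(payload, store)
    assert dedup_read(recipe, store) == payload
    stored = sum(len(b) for b in store.values())
    print(f"logical size: {len(payload)} bytes, physically stored: {stored} bytes")
```

File-level deduplication works the same way in spirit, only the unit being fingerprinted is the whole file rather than a block.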

Need for speed

Deduplication can be done synchronously (as the writes happen) or asynchronously (as a scheduled job during quiet hours).

Synchronous deduplication takes a lot of CPU power. So much power that high-end filer manufacturers are always clamouring for the fastest possible Xeons, and are pushing forward with research into making use of GPGPU technology.

It’s easy to imagine why. Try to compress 5GB of text files into a zip ball. Now, picture your hard drive as a half-petabyte zip ball that you are reading from and writing to at 10Gbit/s. Processing power is suddenly very important.
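To get a feel for the numbers, here is a rough single-core benchmark sketch: it fingerprints 4KB blocks with SHA-256 and compares the resulting throughput with a 10Gbit/s ingest rate. The chunk size, block size and hash are assumptions for illustration; real filers use their own fingerprinting schemes and results will vary enormously by CPU.

```python
import hashlib
import os
import time

BLOCK_SIZE = 4096
CHUNK = os.urandom(64 * 1024 * 1024)   # 64MB of random data to fingerprint

start = time.perf_counter()
for offset in range(0, len(CHUNK), BLOCK_SIZE):
    hashlib.sha256(CHUNK[offset:offset + BLOCK_SIZE]).digest()
elapsed = time.perf_counter() - start

gbit_per_sec = (len(CHUNK) * 8) / elapsed / 1e9
print(f"single-core SHA-256 fingerprinting: {gbit_per_sec:.2f} Gbit/s")
print(f"cores needed to keep pace with 10Gbit/s: {10 / gbit_per_sec:.1f}")
```

Fingerprinting is only part of the job, too; looking each fingerprint up against the block index adds further load, which is why the hardware demands climb so quickly.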

Despite this, deduplication is a critical technology. Storage demand has consistently outpaced capacity growth. What’s more, while hard drive capacity has trebled, network I/O and disk speeds have not.

This has potentially disastrous implications for both Raid rebuild times and backups. Deduplication can reduce the amount of information that must be rebuilt or backed up, helping to ensure both processes complete in timeframes compatible with business needs.

Risky business

This is assuming that you are backing up the deduplicated blocks instead of the full file set. There are arguments for and against both.

Backing up the deduplicated blocks means less backup media is required and less bandwidth has to be set aside to perform the backups. On the other hand, it can increase restore times dramatically, as the entire set of backup media is now hopelessly interdependent.

Most people won’t back up data as deduplicated blocks – it is just too risky. The loss of one piece of backup media can render data irretrievable on all other media. This means budgeting enough backup bandwidth for a full, undeduplicated data set to run every night.

You also have to budget your storage I/O bandwidth for the undeduplicated data size, not the size as it is stored on disk. The amount of data on disk may change only by a few dozen gigabytes a day, but the total storage I/O off that system could be measured in dozens of terabytes.
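As a back-of-the-envelope illustration of that budgeting problem (the figures below are invented): backing up the full, undeduplicated data set every night means provisioning bandwidth for the logical size, not for the few dozen gigabytes of new blocks actually written to disk.

```python
logical_size_tb = 50        # undeduplicated data the nightly backup must cover
changed_on_disk_gb = 40     # new unique blocks actually written that day
backup_window_hours = 8     # overnight window available

bits_to_move = logical_size_tb * 1e12 * 8
required_gbit_s = bits_to_move / (backup_window_hours * 3600) / 1e9
print(f"backing up {logical_size_tb}TB in {backup_window_hours} hours needs "
      f"{required_gbit_s:.1f} Gbit/s of sustained bandwidth")
print(f"even though only {changed_on_disk_gb}GB of deduplicated blocks changed")
```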

Mind the gap

Deduplication is necessary, increasingly so as the gap between storage demand and availability grows. But it doesn’t help decrease the need for network bandwidth, and it imposes a hefty processing requirement.

My next filer looks like it’s going to have a pair of top-end Xeons and 10GbE. It will need two 10GbE ports, as I need to allow for MPIO.

Factor in sizing the filer to cope with demand peaks and to support snapshots, previous versions and other fun features, and the thought of planning my next storage refresh gives me a headache.

Difficult or no, time must be taken to do the research. The cost of storage and its attendant networking is such that few among us can afford to get it wrong. ®
