ZFS gets inline dedupe

Switch it on and off at the dataset level

Sun's Zettabyte File System (ZFS) now has built-in deduplication, making it probably the most space-efficient file system there is.

A Sun blog post discussing ZFS deduplication explains the scheme: chunks of data (byte ranges, blocks or whole files) are checksummed with a hash function, and any chunk whose checksum matches one already stored is not written again, but instead references the existing master chunk.

Sun says that backup data, virtual desktop images, and source-code repositories all have highly redundant data, and that deduplication can reduce disk usage to a fraction of the raw space needed.

File-level deduplication has the lowest processing overhead but is the least efficient method. Block-level dedupe requires more processing power, and is said to be good for virtual machine images. Byte-range dedupe uses the most processing power and is ideal for small pieces of data that may be replicated but are not block-aligned, such as e-mail attachments; Sun reckons this sort of deduplication is best done at the application level, since the application knows the structure of its own data.
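
As a rough illustration of the block-level idea (nothing to do with ZFS internals, just the same hashing approach applied by hand with standard Unix tools), you can chop a file into 128K chunks, checksum each one, and count repeats; any checksum that appears more than once marks a block that block-level dedupe would store only once. The file name diskimage.raw is purely an example:

split -b 131072 diskimage.raw chunk.

sha256sum chunk.* | awk '{print $1}' | sort | uniq -c | sort -rn | head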

ZFS provides block-level deduplication using SHA256 hashing, which maps naturally onto ZFS's existing 256-bit block checksums. The deduplication is done inline, with ZFS assuming it's running on a multi-threaded operating system and on a server with lots of processing power. A multi-core server, in other words.

To turn it on, you simply tell ZFS to dedupe a named storage pool (here called silo) and datasets within it:

zfs set dedup=on silo

zfs set dedup=on silo/mydataset

zfs set dedup=off silo/yourdataset

With datasets containing redundant data there's a disk-capacity benefit and a disk-write I/O benefit, as redundant blocks are never written to disk.
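
Once dedupe is running, the pool's dedupratio property reports how much is being saved, expressed as the ratio of data referenced to data actually stored (a value of 2.00x means the pool holds half the data it otherwise would):

zpool get dedupratio silo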

You can tell ZFS to do full byte-for-byte comparisons rather than relying on the hash alone if you want complete protection against hash collisions:

zfs set dedup=verify silo

You can go the other way and use a simpler, weaker hashing algorithm to reduce processing overhead, combining it with the verify function to guard against collisions while increasing overall dedupe speed:

zfs set dedup=fletcher4,verify silo
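
To confirm which setting each dataset ended up with, the usual property query works; the -r flag recurses through the pool's datasets:

zfs get -r dedup silo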

ZFS's deduplication scales with the size of the filesystem. Once the mapping tables grow too large to fit in memory, dedupe performance will drop off; this is a case where keeping them on solid state storage might be a good idea.
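
To gauge how big those tables have grown on a particular pool, the zdb debugging tool can print dedup-table statistics, including the in-core size of the entries (again using the silo pool from the examples above):

zdb -D silo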

The beauty of ZFS dedupe is that you don't need special storage arrays to deduplicate data. Ordinary arrays are quite acceptable, and because it applies at the dataset level you need only deduplicate the datasets that contain redundant data, leaving the rest alone.

As it is inline deduplication, throwing more processing cores and memory at it makes it go faster. We'll have to wait and see if GreenBytes switches to ZFS dedupe from the technology it's currently using. It will also be interesting to see how ZFS deduplication products compare performance-wise with specialised deduplication storage arrays. ®
