ZFS gets inline dedupe

Switch it on and off at the dataset level

Sun's Zettabyte File System (ZFS) now has built-in deduplication, making it probably the most space-efficient file system there is.

A Sun blog post discussing ZFS deduplication explains that chunks of data, whether byte ranges, blocks or whole files, are checksummed with a hash function; any chunk whose checksum matches one already stored is not written again, but instead references that master chunk.
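As a rough illustration of the principle, rather than anything ZFS itself does, a toy file-level duplicate finder can be built from standard GNU shell tools: hash every file, then group the files whose hashes match (the /data path is just an example):

sha256sum /data/* | sort | uniq -w64 --all-repeated=separate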

Sun says that backup data, virtual desktop images, and source-code repositories all have highly redundant data, and that deduplication can reduce disk usage to a fraction of the raw space needed.

File-level deduplication has the lowest processing overhead but is the least efficient method. Block-level dedupe requires more processing power, and is said to be good for virtual machine images. Byte-range dedupe uses the most processing power and is ideal for small pieces of replicated data that are not block-aligned, such as e-mail attachments. Sun reckons that kind of deduplication is best done at the application level, since the application knows the structure of its own data.

ZFS provides block-level deduplication using SHA256 hashing, which maps naturally onto ZFS's existing 256-bit block checksums. Deduplication is done inline, with ZFS assuming it is running on a multi-threaded operating system and on a server with plenty of processing power: a multi-core server, in other words.

To turn it on, you simply tell ZFS to dedupe a named storage pool, here called silo, and datasets within it:

zfs set dedup=on silo

zfs set dedup=on silo/mydataset

zfs set dedup=off silo/yourdataset
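The dedup property is inherited by child datasets, so the per-dataset lines above act as overrides; you can confirm what is actually in effect with zfs get, using the same example names:

zfs get dedup silo silo/mydataset silo/yourdataset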

With datasets containing redundant data there is both a disk-capacity benefit and a disk-write I/O benefit, since duplicate blocks are never written to disk.
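Once dedupe has been running for a while, the realised saving shows up as the pool's dedup ratio, which you can read back with zpool (again using the example pool name):

zpool get dedupratio silo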

You can tell ZFS to do a full byte-for-byte comparison rather than trusting the hash alone if you want complete protection against hash collisions:

zfs set dedup=verify silo

You can go the other way and use a simpler hashing algorithm to reduce processing overhead and combine it with the verify function to increase overall dedupe speed:

zfs set dedup=fletcher4,verify silo

ZFS's deduplication scales with the size of the filesystem, but once the deduplication tables grow too large to fit in memory, dedupe performance will drop - here solid state storage might be a good idea.
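If you want a feel for both the likely dedupe ratio and the size of those tables before switching anything on, zdb can simulate deduplication against an existing pool and print the table it would have built (example pool name as before):

zdb -S silo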

The beauty of ZFS dedupe is that you don't need a special storage array to deduplicate data. Ordinary arrays are quite acceptable, and because the setting applies per dataset you need only deduplicate the datasets that actually hold redundant data, leaving the others alone.

As it is inline deduplication, throwing more processing cores and memory at it makes it go faster. We'll have to wait and see if GreenBytes switches to ZFS dedupe from the technology it's currently using. It will also be interesting to see how ZFS deduplication products compare performance-wise with specialised deduplication storage arrays. ®
