
FalconStor/Sun wins speediest dedupe race

So who's suffering from ingestion?


Comment: The fastest deduplication on the planet is performed by an 8-node Sun cluster running FalconStor deduplication software, according to a vendor-neutral comparison.

Backup expert W Curtis Preston has compared the deduplication performance of different vendors' products. He uses suppliers' own performance numbers and disregards multi-node deduplication performance if each node has its own individual index.

Preston says that a file newly stored on one such node would not be deduplicated against the same file already stored on other nodes in the same set of systems, because each node is blind to what the others store.

A Data Domain array is an example of a set of deduplication systems that do not share a global index. Preston says: "NetApp, Quantum, EMC & Dell, (also) have only local dedupe... Diligent, Falconstor, and Sepaton all have multi-node/global deduplication."

Nodes in a 5-node Sepaton deduplication array, for example, share a global index and the nodes co-operate to increase the deduplication ratio. In this situation a multi-node deduplication setup acts as a single, global deduplication system.
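The local-versus-global distinction can be sketched with a toy fingerprint store. This is a minimal illustration only (the chunking, chunk size and function names are mine; real products chunk and hash data very differently): with per-node indexes a duplicate file is written in full on every node that receives it, while a shared index stores it once.

```python
import hashlib

def chunks(data, size=4):
    """Split data into fixed-size chunks (toy stand-in for real chunking)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def store(data, index):
    """Store data against a fingerprint index; return bytes actually written."""
    written = 0
    for c in chunks(data):
        fp = hashlib.sha256(c).hexdigest()
        if fp not in index:          # only new chunks consume capacity
            index[fp] = c
            written += len(c)
    return written

file = b"the same backup payload"

# Local indexes: each node is blind to the other's chunks,
# so the same file costs full capacity on both nodes.
node_a, node_b = {}, {}
local_total = store(file, node_a) + store(file, node_b)

# Global index: both ingest paths consult one shared fingerprint table,
# so the second copy writes nothing.
shared = {}
global_total = store(file, shared) + store(file, shared)

print(local_total, global_total)
```

The second `store` call against the shared index writes zero bytes, which is exactly the cross-node deduplication that per-node indexes miss.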

Preston compares the rated speeds for an 8-hour backup window, looking at the data ingest rate and the deduplication rate. As some vendors deduplicate inline, at data ingest time, and others deduplicate after data ingestion, known as post-process, these two numbers may well differ.

He compared deduplication speeds from EMC (Disk Library), Data Domain, FalconStor/Sun, IBM/Diligent, NetApp, Quantum/Dell and Sepaton/HP. (HP OEMs the Sepaton product.)

The FalconStor/Sun combo topped the ingest scores at 11,000MB/sec using an 8-node cluster and Fibre Channel drives. It was followed by Sepaton/HP at 3,000MB/sec and then EMC at 1,100MB/sec. Quantum/Dell ingested at 800MB/sec with deduplication deferred to post-process rather than run inline.

NetApp was the slowest, ingesting data at 600MB/sec; its configuration was a 2-node one, but each node deduplicated data on its own. Quantum/Dell would ingest at 500MB/sec if deduplication were run inline.

The fastest deduplication engine was the FalconStor/Sun one, rated at 3,200MB/sec. It was followed by Sepaton/HP at 1,500MB/sec, then IBM/Diligent at 900MB/sec and Data Domain at 750MB/sec, with EMC trailing at 400MB/sec. Preston couldn't find any NetApp deduplication speed numbers.

Preston also looked at the numbers for a 12-hour backup window. If vendors have an ingest rate that is more than twice their deduplication rate, they would need more than 24 hours to ingest and then deduplicate 12 hours' worth of data: post-process deduplication can run for all 24 hours of the day, but ingest only fills the 12-hour window. This means their effective ingest rate for a 12-hour backup run can only be twice their deduplication rate.
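That cap is simple arithmetic: deduplication gets the whole 24-hour day to chew through what a 12-hour window ingests. A quick sketch (the function name and the 24-hour-dedupe assumption are mine), using EMC's figures from the tables above:

```python
def effective_ingest_rate(ingest_mb_s, dedupe_mb_s, window_h=12, day_h=24):
    """Sustainable ingest rate for a post-process system.

    Data taken in during the backup window must be deduplicated within the
    full day, so ingest is capped at dedupe_rate * (day / window)."""
    cap = dedupe_mb_s * day_h / window_h
    return min(ingest_mb_s, cap)

# EMC above is rated at 1,100MB/sec ingest but only 400MB/sec dedupe:
# over a 12-hour window it can only sustain 2 x 400 = 800MB/sec.
print(effective_ingest_rate(1100, 400))  # 800.0
```

With an 8-hour window the cap triples the dedupe rate instead, which is why Preston treats the two window lengths separately.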

He also has a discussion of restore speeds for deduplicated data, known as inflation or rehydration. The sources for his numbers and the products used are listed on his blog.

This is the first comprehensive and vendor-neutral deduplication speed comparison, and is well worth a look. ®
