
Data Domain doing faster dedupe

Doubling up and up

Data Domain has increased the speed of its deduplication by between 50 and 100 per cent through a software update.

The speed boost ranges from 50 per cent on the DD510, 530, and 580 models, through 58 per cent on the DD565 and 90 per cent on the DD690g and DDX array, up to 100 per cent on the DD120.

The DD690g had a 1.4TB/hour throughput rating when it was introduced in May last year. Now it is 2.7TB/hour (with Symantec NetBackup OpenStorage and a 10GbitE connection).

It makes you wonder whether the code was that bad; clearly there was room for improvement. What Data Domain has done is tune its deduplication code, which uses a technology it calls Stream Informed Segment Layout (SISL), to execute more work in parallel and so make better use of the available cores on multi-core CPUs.
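Data Domain hasn't published SISL's internals, but the shape of the parallelism is easy to sketch. What follows is a minimal, hypothetical Python illustration, not Data Domain's code: segment fingerprinting is the CPU-bound step, so fanning it out across a pool of worker processes is what lets throughput scale with core count. The 8KB segment size and worker count are illustrative assumptions.

    # Hypothetical sketch only: SISL is proprietary, and nothing below is
    # Data Domain's code. It just shows dedupe fingerprinting fanned out
    # across CPU cores with a worker pool.
    import hashlib
    from concurrent.futures import ProcessPoolExecutor

    CHUNK_SIZE = 8 * 1024  # 8KB segments; an illustrative granularity

    def fingerprint(chunk):
        # The CPU-bound step: hash a segment to get its dedupe fingerprint.
        return hashlib.sha256(chunk).hexdigest()

    def dedupe(data, workers=4):
        # Split the stream into segments, fingerprint them in parallel,
        # and return (total segments, unique segments).
        chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
        seen = set()
        with ProcessPoolExecutor(max_workers=workers) as pool:
            seen.update(pool.map(fingerprint, chunks, chunksize=64))
        return len(chunks), len(seen)

    if __name__ == "__main__":
        stream = b"backup-block-" * 200_000  # highly redundant sample data
        total, unique = dedupe(stream)
        print(f"{total} segments, {unique} unique")

In a real appliance the fingerprint index and segment store do the heavy lifting; the point here is only that the hashing stage parallelises cleanly, which is why extra cores translate directly into throughput.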

Data Domain's platform operating software has revved from DD OS v4.5 to 4.6. Shane Jackson, senior director for product and channel marketing at Data Domain, contrasts competing vendors who, he says, rely on adding disk spindles to boost deduplication speed, with Data Domain's reliance on CPU speed. "Intel shows up with a faster processor more often than Seagate shows up with a faster drive," he says.

That's over-egging the pudding, as all dedupe vendors rely on both software and disks. But Data Domain does appear to have software algorithms as good as, if not better than, most of its rivals': certainly good enough for it to suggest its products can be used to deduplicate some primary storage applications.

The neat aspect of this is that Data Domain is widely expected to introduce new, Nehalem-boosted hardware later this year. With eight cores available there should be another doubling or near-doubling of performance compared with the current quad-core Xeons. That means a DD690-class product could ramp its performance up to 5.4TB/hour, meaning 90GB/min or 1.5GB/sec.
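For the unit-shufflers, that projection is just a straight conversion; a quick sketch checks it (using decimal units, as storage marketing does):

    # Back-of-the-envelope check on the projected doubling (decimal units assumed)
    tb_per_hour = 2.7 * 2                 # DD690g's 2.7TB/hour, doubled
    gb_per_min = tb_per_hour * 1000 / 60  # TB/hour -> GB/min
    gb_per_sec = gb_per_min / 60          # GB/min -> GB/sec
    print(f"{tb_per_hour:.1f} TB/hr = {gb_per_min:.0f} GB/min = {gb_per_sec:.1f} GB/sec")
    # -> 5.4 TB/hr = 90 GB/min = 1.5 GB/sec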

Sepaton and Diligent (the latter now owned by IBM) emphasise their deduping speed, and NetApp also pushes its ASIS dedupe into some primary data deduplication applications.

Two thoughts: first, it looks as if a deduping race is on. Second, it begins to look as if inline deduplication is quite viable for the majority of backup applications.
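The difference between the two approaches comes down to where the raw data lands first. Here's a toy Python sketch of the contrast; it is hypothetical and not any vendor's actual pipeline:

    # Hypothetical contrast between inline and post-process dedupe; not any
    # vendor's actual pipeline. The difference is peak disk use: inline
    # never lands duplicate data, post-process stages the whole raw stream.
    import hashlib

    def inline_dedupe(segments):
        # Fingerprint before writing: only unique segments ever hit disk.
        store = {}
        for seg in segments:
            store.setdefault(hashlib.sha256(seg).hexdigest(), seg)
        return store

    def post_process_dedupe(segments):
        # Land the raw stream first (the staging capacity offline dedupe
        # requires), then dedupe it afterwards.
        landing_zone = list(segments)  # full raw backup staged on disk
        store = {}
        for seg in landing_zone:
            store.setdefault(hashlib.sha256(seg).hexdigest(), seg)
        return store

    segs = [b"monday-full", b"tuesday-delta", b"monday-full", b"monday-full"]
    print(len(inline_dedupe(segs)), "unique segments out of", len(segs))

The inline approach only works if fingerprinting keeps pace with the incoming stream, which is exactly what the multi-core tuning above buys.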

Offline dedupe vendors say that, to keep backup speeds high, you really should land the backup data uninterrupted by any processing and dedupe it afterwards. At speeds of up to 750MB/sec now, and with 1.5GB/sec coming, Data Domain would say that most backup applications can be deduped inline, avoiding the need for a substantial chunk of disk capacity set aside to land the raw data. ®
