
Data Domain sticks neck out on deduping

Dopey notions from speed freaks?


Comment Can block-level access storage area network data be deduplicated? Data Domain thinks so, but nobody else does.

The company also reckons deduping data on solid state drives will solve the SSD price problem.

Deduplication is the removal of repeated data patterns to drastically shrink the space files occupy; it is used particularly with backed-up data, where versions of files are stored again and again in case they need to be recovered. By identifying duplicated and redundant patterns and replacing them with pointers to a single gold or master copy, the space needed by backups can be cut, sometimes to a tenth or less of the original backup data.
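As a rough illustration of the idea, here is a minimal sketch of hash-based deduplication; the chunk size, the in-memory dictionary standing in for an index, and the dedupe() helper are all invented for the example and are not Data Domain's engine.

# Minimal, illustrative sketch of hash-based deduplication.
# CHUNK_SIZE, chunk_store and dedupe() are assumptions made up for this example.
import hashlib

CHUNK_SIZE = 4096        # fixed-size chunks keep the example simple
chunk_store = {}         # fingerprint -> the single "gold" copy of each chunk

def dedupe(data: bytes) -> list[str]:
    """Split data into chunks, keep one copy of each unique chunk,
    and return the list of fingerprints (pointers) that replaces the data."""
    pointers = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in chunk_store:       # a new pattern: store the master copy
            chunk_store[fp] = chunk
        pointers.append(fp)             # a repeat: store only a pointer
    return pointers

# Two backups of a mostly unchanged file share nearly all their chunks,
# so the second backup costs almost no extra space.
backup1 = dedupe(b"A" * 8192 + b"B" * 4096)
backup2 = dedupe(b"A" * 8192 + b"C" * 4096)
print(len(backup1) + len(backup2), "chunks written,", len(chunk_store), "stored")
# -> 6 chunks written, 3 stored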

Such deduplication speeds backup time, and reduces the number of disks needed and the electricity required to spin them; it's good news all round, except that deduplication imposes a processing burden above and beyond what a normal drive array controller can cope with. That means that many deduplication products land incoming raw data on their disk drives first and deduplicate it after the files have come in, termed post-processing.

Data Domain relies on the most powerful Intel processors to drive its controllers, and is currently using 4-core Xeons in its top-end DD690 product. It processes the raw data as it comes in: in-line deduplication.
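To make the in-line versus post-process distinction concrete, here is a rough contrast reusing the dedupe() sketch above; the landing_area and pointer_index names are invented for illustration and imply nothing about any vendor's implementation.

# Post-process vs in-line deduplication, reusing dedupe() from the sketch above.
landing_area: list[bytes] = []   # raw staging space used by post-processing
pointer_index: list[str] = []    # what remains on disk once data is deduped

def post_process_ingest(data: bytes) -> None:
    landing_area.append(data)    # step 1: land raw data at full ingest speed

def post_process_pass() -> None:
    while landing_area:          # step 2: dedupe later, off the ingest path
        pointer_index.extend(dedupe(landing_area.pop()))

def inline_ingest(data: bytes) -> None:
    pointer_index.extend(dedupe(data))   # dedupe inside the write path itself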

A two-way funnel has confined deduplication to file data. It works like this: deduplication achieves its maximum benefit with highly redundant data, and that data is typically backup and archive file data, so deduplication has ended up as a file-level access process.

The general view is that deduplication can't be applied to transaction-level data, the stuff requiring the fastest reading and writing on storage arrays, because deduping and then rehydrating (the reverse of deduplication, reassembling the original data for reads) takes too much time and slows the work rate of the servers involved. The net effect is that SAN data, accessed at block level, where the transaction data sits on tier 1 drives, is not deduped.
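Continuing the sketch above (again purely illustrative, not any vendor's read path), rehydration is the step that adds this latency:

# Rehydration: rebuild the original bytes from the pointer list. Each read
# now costs an index lookup plus a chunk fetch, often from scattered
# locations - the latency penalty described above.
def rehydrate(pointers: list[str], chunk_store: dict[str, bytes]) -> bytes:
    return b"".join(chunk_store[fp] for fp in pointers)

assert rehydrate(backup1, chunk_store) == b"A" * 8192 + b"B" * 4096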

De-duping block-access SAN data

Enter Frank Slootman, the CEO of dedupe market leader Data Domain. He says that dedupe is a technology that should be in storage arrays, all storage arrays. EMC, a strong competitor, takes the view that all Data Domain is doing is selling a storage array feature, and that this is wrong: deduplication is not a stand-alone feature but something that needs integrating with your information infrastructure.

Slootman agrees. Any Data Domain product is a storage array that happens to include a great dedupe engine. As a storage array it needs access interfaces, and it has started out with file-level ones: NFS, CIFS and NDMP. It is adding OST, Symantec's OpenStorage Technology API. Slootman said: "We'll announce our alliance with OST protocol later this month. We have black-boxed it - it's no longer visible."

The benefit will be faster data transfer from Symantec software into Data Domain's products.

Another new access protocol would be a block-level one. "We've researched this and proved the ability to do it," Slootman told us. "I wouldn't exclude that you would see that from us but I'm not announcing it. Backup and archive are file-oriented and that's our market today. The front-end data is transaction-based and it's block-level access. It (deduplicating it) absolutely is possible."

The well-telegraphed new top-of-the-range product from Data Domain should arrive by mid-year, and will likely use 8-core Xeon controllers. Slootman said: "We refresh the top end of our line every year. You're going to get much bigger, much faster. The amount of throughput behind a single controller will be absolutely off the chart." He reckons that Data Domain in-line dedupe will write data faster than some post-process dedupers can land raw data on disk.

What about clustering Data Domain boxes, so that they could scale more and, in theory, offer protection against node failure? Slootman said: "The technology already exists. We've been working on it for the last two and a half years. It's much, much more complex than a single node product. It's coming out likely before the end of this (calendar) year (and will be) installed at customers' sites."

He says the fundamental problem is not how big systems get, it's how fast they get: "We have the fastest dedupe heads in the industry by far, in-line or post-process." This company is fixated on speed.


