Original URL: https://www.theregister.com/2009/02/09/data_domain_dedupe_sans/

Data Domain sticks neck out on deduping

Dopey notions from speed freaks?

By Chris Mellor

Posted in Channel, 9th February 2009 13:08 GMT

Comment Can block-access storage area network (SAN) data be deduplicated? Data Domain thinks so, but nobody else does.

The company also reckons deduping data on solid state drives will solve the SSD price problem. Deduplication is the removal of repeated data patterns in files to drastically shrink the space they occupy; it is used particularly with backed-up data, where versions of files are repeatedly stored in case they need to be recovered. By identifying duplicated and redundant patterns and replacing them with pointers to a gold or master copy of the pattern, the space needed by the backups can be cut, sometimes to a tenth or less of the original backup data.
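To make the pointer idea concrete, here is a minimal, purely illustrative sketch of chunk-level deduplication in Python, using fixed-size chunks and SHA-1 fingerprints; it is a hypothetical toy, not a description of Data Domain's own engine.

```python
import hashlib

CHUNK_SIZE = 4096        # illustrative fixed-size chunks; commercial engines differ

chunk_store = {}         # fingerprint -> the single "gold" copy of each chunk

def dedupe_write(data: bytes) -> list:
    """Split data into chunks, store each unique chunk once, return a recipe of pointers."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha1(chunk).hexdigest()
        if fingerprint not in chunk_store:   # unseen pattern: keep the gold copy
            chunk_store[fingerprint] = chunk
        recipe.append(fingerprint)           # repeated pattern: keep only a pointer
    return recipe

# Two nightly backups of a mostly unchanged file share almost all of their chunks,
# so the second backup costs little more than its list of pointers.
monday = dedupe_write(b"A" * 40_000 + b"report v1")
tuesday = dedupe_write(b"A" * 40_000 + b"report v2")
```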

Such deduplication shortens backup times, and reduces the number of disks needed and the electricity required to spin them; it's good news all round, except that deduplication imposes a processing burden above and beyond what a normal drive array controller can cope with. That means many deduplication products land incoming raw data on their disk drives first and deduplicate it after the files have come in, a technique termed post-processing.

Data Domain relies on the most powerful Intel processors to drive its controllers, and is currently using 4-core Xeons in its top-end DD690 product. It processes the raw data as it comes in: in-line deduplication.

A two-way funnel has confined deduplication to file data. It works like this: because deduplication achieves its maximum benefit with highly redundant data, and because such data is typically backup and archive file data, deduplication has remained a file-level access process.

The general view is that deduplication can't be applied to transaction-level data, the stuff requiring the fastest reading and writing on storage arrays, because deduping and rehydrating (the reverse of deduplication) take up too much time and slow the work rate of the servers involved. The net effect is that block-level SAN data, the transaction data stored on tier 1 drives, is not deduped.
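The rehydration objection is about the read path: every read of deduped data has to be reassembled from pointers, and each pointer lookup can mean a random disk seek. Schematically, continuing the hypothetical chunk-store sketch above (illustrative only, not any vendor's implementation):

```python
def rehydrate(recipe, chunk_store):
    # Rebuild the original bytes by following each pointer back to its gold chunk.
    # On a disk-based array every lookup can turn into a random seek, which is why
    # rehydration is held to be too slow for tier 1 transaction workloads.
    return b"".join(chunk_store[fingerprint] for fingerprint in recipe)
```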

De-duping block-access SAN data

Enter Frank Slootman, the CEO of dedupe market leader Data Domain. He says that dedupe is a technology that should be in storage arrays - all storage arrays. EMC, a strong competitor, takes the view that all Data Domain is doing is selling a storage array feature, and that this is the wrong approach: dedupe is not a stand-alone feature, but needs integrating with your information infrastructure.

Slootman agrees. Any Data Domain product is a storage array that happens to include a great dedupe engine. As a storage array it needs access interfaces, and it started out with file-level ones: NFS, CIFS, and NDMP. It's adding OST, Symantec's OpenStorage Technology API. Slootman said: "We'll announce our alliance with OST protocol later this month. We have black-boxed it - it's no longer visible."

The benefit will be faster data transfer from Symantec software into Data Domain's products.

Another new access protocol would be a block-level one. "We've researched this and proved the ability to do it," Slootman told us. "I wouldn't exclude that you would see that from us but I'm not announcing it. Backup and archive are file-oriented and that's our market today. The front-end data is transaction-based and it's block-level access. It (deduplicating it) absolutely is possible."

The well-telegraphed new top-of-the-range product from Data Domain should arrive by mid-year, and will likely use 8-core Xeon controllers. Slootman said: "We refresh the top end of our line every year. You're going to get much bigger, much faster. The amount of throughput behind a single controller will be absolutely off the chart." He reckons that Data Domain in-line dedupe will write data faster than some post-process dedupers can land raw data on disk.

What about clustering Data Domain boxes, so that they could scale more and, in theory, offer protection against node failure? Slootman said: "The technology already exists. We've been working on it for the last two and a half years. It's much, much more complex than a single node product. It's coming out likely before the end of this (calendar) year (and will be) installed at customers' sites."

He says the fundamental problem is not how big the systems get, it's how fast they get: "We have the fastest dedupe heads in the industry by far, in-line or post-process." This company is fixated on speed.

De-duping SSDs

Solid state storage represents a great big opportunity in Slootman's eyes. "We're very, very interested in that. It's not electro-mechanical and it's blazing fast, but it's got economic problems. Deduped disk is destroying the tape market. Deduped SSD will affect the Fibre Channel disk market, giving SSDs the economics for the mainstream market place. We think dedupe will become a huge enabler for SSDs."

He specifically linked fast transaction data deduping to virtualised servers and their virtual machine images, where there is a lot of data redundancy.

When might this happen, this combination of deduping and solid state storage in Data Domain arrays? By the end of 2010? "It's not out of the realm of possibility."

He'll have half an eye on NetApp, which is going to introduce SSD technology into its arrays, and which reckons the WAFL technology inside its ONTAP array software is already well suited to writing data to flash memory. NetApp also ships its ASIS dedupe technology with every ONTAP array it builds, and has started suggesting it's OK to use it for light transaction data. These are building blocks NetApp could use for a Data Domain catch-up effort.

Data Domain builds storage arrays, and if the company manages to position them as general storage arrays, not just for backup and archive use, then they will be judged and compared as storage arrays against competing products from Dell, EMC, HDS, HP, NetApp, Sun and others. So Slootman has to ensure Data Domain builds out its software environment to the point where its products can compete in the general storage array market. While he's doing that, though, the company is not going to stop pressing the gas pedal to the floor and accelerating its arrays' storage operations as fast as possible.

While talking the general storage array talk, Data Domain will also be walking the accelerated dedupe array walk, with its products positioned now as fast and affordable replacements for tape libraries and, potentially in the future, as fast and affordable deduped HDD- and SSD-based arrays for transaction data storage at block level or, we guess, file level. The company has to use the profits it's generating now to build products encapsulating a strategy that will enable this one-trick pony to broaden its offerings and widen its trick portfolio. Speed is the key - and that means an opportunity for its competitors.

If they can add similar 8-core Xeon controllers and rewrite their array software to match Data Domain's speed, then Slootman's edge withers away. It really is a race dependent on speed. Can Data Domain use its current speed advantage to earn the revenues needed to bulk out its offering, broaden its product range, and win a solid place in the market, before Slootman's competitors add the processing power they need to their deduping arrays and stop Data Domain in its tracks?

Gentlemen, rev your engines... ®