Data Domain sticks neck out on deduping

Dopey notions from speed freaks?

Comment Can block-level access storage area network (SAN) data be deduplicated? Data Domain thinks so, but nobody else does.

The company also reckons deduping data on solid state drives will solve the SSD price problem. Deduplication is the removal of repeated data patterns in files to drastically shrink the space they occupy; it is used particularly with backed-up data, where versions of files are repeatedly stored in case they need to be recovered. By identifying duplicated and redundant patterns and replacing them with pointers to a gold or master pattern, the space needed by the backups can be reduced, sometimes to a tenth or even less of the original backup data.
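
To make the pointer idea concrete, here is a minimal sketch of fixed-size chunk deduplication in Python. The chunk size, the SHA-256 hash and the in-memory chunk store are illustrative assumptions for the sketch, not a description of Data Domain's engine.

import hashlib

CHUNK_SIZE = 4096            # illustrative chunk size, not a product setting
chunk_store = {}             # hash -> chunk bytes: the "gold" or master copies

def dedupe(data: bytes) -> list[str]:
    """Split data into fixed-size chunks and return a list of pointers (hashes)."""
    pointers = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(key, chunk)   # each unique pattern is stored only once
        pointers.append(key)                 # a repeat costs a pointer, not a chunk
    return pointers

A backup run full of near-identical file versions ends up as mostly repeated pointers, which is where the ten-to-one or better space saving comes from.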

Such deduplication speeds backup time, and reduces the number of disks needed and the electricity required to spin them; it's good news all round, except that deduplication imposes a processing burden above and beyond what a normal drive array controller can cope with. That means many deduplication products land incoming raw data on their disk drives first and deduplicate it after the files have come in, a method termed post-processing.

Data Domain relies on the most powerful Intel processors to drive its controllers, and is currently using 4-core Xeons in its top-end DD690 product. It processes the raw data as it comes in - in-line deduplication.
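
The difference between the two approaches is simply where the dedupe work sits in the write path. A rough illustration, reusing the hypothetical dedupe() sketch above, with a plain list standing in for the array's drives:

def backup_post_process(data: bytes, disk: list) -> None:
    """Post-process: raw data lands on the drives first, dedupe runs afterwards."""
    disk.append(data)             # the full raw copy hits disk at ingest time
    # ... a later background pass would replace duplicate chunks with pointers

def backup_in_line(data: bytes, disk: list) -> None:
    """In-line: the controller dedupes in flight, before anything reaches disk."""
    disk.append(dedupe(data))     # only pointers, plus any new chunks, get written

In-line dedupe trades controller horsepower for disk writes, which is why Data Domain leans on the fastest Xeons it can buy.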

A two-way funnel has confined deduplication to file data. It works like this: because deduplication achieves its maximum benefit with highly redundant data, and because that data is typically backup and archive file data, deduplication has been a file-level access process.

The general view is that deduplication can't be applied to transaction-level data, the stuff requiring the fastest reading and writing on storage arrays, because deduping and rehydrating (rehydration being the reverse of deduplication, rebuilding the original data from the stored patterns) take up too much time and slow the work rate of the servers involved. The net effect is that block-access SAN data, where the transaction data is stored on tier 1 drives, is not deduped.
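
The read path shows why. Continuing the purely illustrative sketch above, every read of deduped data has to chase a pointer and fetch the shared chunk before the server sees a byte - the rehydration cost that tier 1 transaction workloads are said not to tolerate:

def rehydrate(pointers: list[str]) -> bytes:
    """Rebuild a whole file or volume by following every pointer in order."""
    return b"".join(chunk_store[p] for p in pointers)

def read_block(pointers: list[str], block_no: int) -> bytes:
    """A single random block read: an index lookup plus a chunk fetch on every I/O."""
    return chunk_store[pointers[block_no]]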

De-duping block-access SAN data

Enter Frank Slootman, the CEO of dedupe market leader Data Domain. He says that dedupe is a technology that should be in storage arrays - all storage arrays. EMC, a strong competitor, takes the view that all Data Domain is doing is selling a storage array feature, and that this is the wrong approach: deduplication is not a stand-alone feature, but needs integrating with your information infrastructure.

Slootman agrees: any Data Domain product is a storage array that happens to include a great dedupe engine. As a storage array it needs access interfaces, and it has started out with file-level ones: NFS, CIFS, and NDMP. It is adding OST, Symantec's OpenStorage technology API. Slootman said: "We'll announce our alliance with OST protocol later this month. We have black-boxed it - it's no longer visible."

The benefit will be faster data transfer from Symantec software into Data Domain's products.

Another new access protocol would be a block-level one. "We've researched this and proved the ability to do it," Slootman told us. "I wouldn't exclude that you would see that from us but I'm not announcing it. Backup and archive are file-oriented and that's our market today. The front-end data is transaction-based and it's block-level access. It (deduplicating it) absolutely is possible."

The well-telegraphed new top-of-the-range product from Data Domain should arrive by mid-year, and will likely use 8-core Xeon controllers. Slootman said: "We refresh the top end of our line every year. You're going to get much bigger, much faster. The amount of throughput behind a single controller will be absolutely off the chart." He reckons that Data Domain in-line dedupe will write data faster than some post-process dedupers can land raw data on disk.

What about clustering Data Domain boxes, so that they could scale more and, in theory, offer protection against node failure? Slootman said: "The technology already exists. We've been working on it for the last two and a half years. It's much, much more complex than a single node product. It's coming out likely before the end of this (calendar) year (and will be) installed at customers' sites."

He says the fundamental problem is not how big the system gets, it's how fast it gets: "We have the fastest dedupe heads in the industry by far, in-line or post-process." This company is fixated on speed.
