HDS CTO: Man, I could just throttle our array... er, in a good way
Fat NAS box does dedupe, when it's got a sec
Hitachi Data Systems has bunged primary deduplication into its network-attached storage (HNAS) kit and Hitachi Unified Storage (HUS) mid-range array. That's according to the company's chief technology officer, Hu Yoshida.
HNAS is the hardware-accelerated filer HDS obtained when it bought BlueArc; the system relies on programmable chips (FPGAs) to speed up its operation. The HUS array file controllers use the BlueArc hardware engine and software. Yoshida said the deduplication feature can:
- Be automated.
- Deduplicate data in place rather than just incoming bytes.
- Throttle back if the file-serving workload passes a threshold.
- Use a cryptographic hash algorithm to ensure data integrity.
- Dedupe the entire usable capacity of a filer - between 4PB and 8PB, depending on the HNAS model - 256TB at a time.
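The list above amounts to a background job that wakes when new data lands and yields to foreground I/O. A minimal sketch of that scheduling behaviour, with an assumed load threshold (HDS doesn't publish the real figure, and the callables here are hypothetical stand-ins):

```python
BUSY_THRESHOLD = 0.7  # assumed fraction of I/O capacity; not a published HDS figure

def run_dedupe_pass(get_load, pending_regions, dedupe_region):
    """Dedupe queued regions, throttling back whenever foreground load is high.

    get_load: callable returning current file-serving load (0.0 to 1.0)
    pending_regions: regions of stored data waiting to be scanned
    dedupe_region: callable that dedupes one region
    Returns the regions actually processed this pass.
    """
    done = []
    for region in pending_regions:
        if get_load() >= BUSY_THRESHOLD:
            break  # throttle: leave the rest for a quieter moment
        dedupe_region(region)
        done.append(region)
    return done
```

On the real kit the equivalent decision is made per dedupe engine; this just shows the throttle-and-resume shape.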
There is "an intelligent deduplication process that knows when new data is added and automatically starts up the deduplication engine(s) as long as the system is not busy", the CTO wrote on his company website. The system throttles back the dedupe if it's too busy responding to file read and write requests. The dedupe process looks at stored files and uses a database of hashes to identify chunks of data that are duplicated. They are then deleted and the reclaimed space made available for other data.
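The hash-database approach Yoshida describes can be sketched in a few lines of Python. This is not HDS's code, and the fixed in-memory layout is an assumption - on HNAS the index lives in the filer and the hashing runs in FPGA - but the logic is the same: hash each stored chunk, keep one copy per unique hash, reclaim the rest:

```python
import hashlib

def dedupe_in_place(chunks):
    """Scan already-stored chunks, keep one copy per unique hash, free duplicates.

    chunks: list of bytes objects standing in for data blocks on disk.
    Returns (surviving_chunks, hash_index, bytes_reclaimed).
    """
    hash_index = {}   # cryptographic hash -> position of the surviving copy
    surviving = []
    reclaimed = 0
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in hash_index:
            reclaimed += len(chunk)          # duplicate: its space is freed
        else:
            hash_index[digest] = len(surviving)
            surviving.append(chunk)
    return surviving, hash_index, reclaimed
```

Using a cryptographic hash (SHA-256 here) is what backs the data-integrity claim in the feature list: two chunks with the same digest can safely be treated as identical.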
The dedupe's hashing and chunking are accelerated in FPGA hardware rather than running as pure software on the general-purpose CPU. Yoshida blogged this week: "A base hashing/chunking engine licence is included free of charge. Three additional hashing/chunking engines can be licensed; the increase in dedupe performance is nearly four-fold [with four engines]."
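Yoshida's "engines" are FPGA blocks, but the scaling idea - more parallel hashers, near-linear throughput - can be mimicked in software. A toy version using a thread pool (CPython's hashlib releases the GIL on large buffers, so the hashing genuinely runs in parallel); the engine count mirrors the licensing tiers, everything else is an assumption:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_chunk(chunk: bytes) -> str:
    # One software "engine" hashing one chunk; HNAS does this step in FPGA hardware.
    return hashlib.sha256(chunk).hexdigest()

def hash_with_engines(chunks, engines=4):
    """Fan hashing work out across `engines` workers, preserving input order."""
    with ThreadPoolExecutor(max_workers=engines) as pool:
        return list(pool.map(hash_chunk, chunks))
```

The near-four-fold gain Yoshida quotes is what you'd expect when hashing is the bottleneck and four engines run independently; real-world scaling depends on how much of the pipeline the extra engines actually parallelise.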
We interpret this as meaning that HUS files and objects can be deduped, but not HUS blocks.
Yoshida said one HDS customer deduped 1.2 million files in 16 minutes, but didn't reveal the net capacity benefit. Dedupe efficiency is said to be "comparable to other dedupe algorithms" and "the efficiency of dedupe depends on the dataset and file system block sizes". Quite so. The dedupe feature was introduced in January, we're told. ®