
Hitachi keen to be cloud content king

Fluff as far as the eye can see

Comment Hitachi Data Systems is planning to provide a sophisticated content storage cloud infrastructure, with end-to-end deduplication and local file servers accessing a content core that provides both archive and data discovery functions.

According to information the Reg has received, HDS has a concept of a content core, composed of the Hitachi Content Platform (HCP) and the Hitachi Data Discovery Suite (HDDS). This resides in a primary data centre, and its content can be replicated to a second data centre for business continuity and disaster recovery purposes.

Users are concerned with reading and writing files, while the HCP and HDDS talk objects. The main data input product or edge device is the HDI, the Hitachi Data Ingestor, which talks NFS and CIFS files to its client systems but objects to the HCP. The HCP is, conceptually, servers running software acquired with Archivas, plus Hitachi VSP, USP or AMS back-end storage.
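To make the file-to-object idea concrete, here is a minimal sketch of what such a gateway does: file reads and writes from clients become object GETs and PUTs against the core. All class and method names here are ours for illustration, not Hitachi's, and a real ingestor would also handle locking, metadata and partial writes.

```python
import hashlib

class InMemoryObjectStore:
    """Stand-in for an HCP-like object store (illustrative only)."""
    def __init__(self):
        self._objects = {}

    def put(self, object_id, data, metadata=None):
        self._objects[object_id] = (data, metadata or {})

    def get(self, object_id):
        return self._objects[object_id][0]

class FileToObjectGateway:
    """Toy edge device: talks files to clients, objects to the core."""
    def __init__(self, store):
        self.store = store

    def _object_id(self, path):
        # Derive a stable object key from the file path.
        return hashlib.sha256(path.encode()).hexdigest()

    def write_file(self, path, data):
        # A client file write becomes an object PUT into the core.
        self.store.put(self._object_id(path), data, metadata={"path": path})

    def read_file(self, path):
        # A client file read becomes an object GET from the core.
        return self.store.get(self._object_id(path))

gateway = FileToObjectGateway(InMemoryObjectStore())
gateway.write_file("/reports/q1.doc", b"quarterly numbers")
print(gateway.read_file("/reports/q1.doc"))  # b'quarterly numbers'
```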

HDI

The HDI, on an x86 server base, has a cache to hold frequently accessed content, and migrates fresh content to the HCP while also providing CIFS/NFS access to that content. This year HDS is extending the HDI platforms to include its own Blade Symphony servers, Microsoft's Hyper-V, KVM and Xen, Amazon's EC2 and S3, and HDS partners' own cloud stores.
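The cache-at-the-edge, migrate-to-the-core pattern can be sketched like so. The write-through policy, LRU eviction and capacity figure are our assumptions for illustration, not details HDS has confirmed:

```python
from collections import OrderedDict

class EdgeCache:
    """Sketch of an HDI-style edge cache: hot content stays local,
    the authoritative copy lives in the core store."""
    def __init__(self, core_store, capacity=3):
        self.core = core_store        # stand-in for the HCP core
        self.cache = OrderedDict()    # LRU order, oldest entry first
        self.capacity = capacity

    def write(self, path, data):
        # Fresh content is migrated to the core immediately (write-through)
        # and kept locally while it is likely to be re-read.
        self.core[path] = data
        self._cache_put(path, data)

    def read(self, path):
        if path in self.cache:
            self.cache.move_to_end(path)  # mark as recently used
            return self.cache[path]
        data = self.core[path]            # cache miss: recall from the core
        self._cache_put(path, data)
        return data

    def _cache_put(self, path, data):
        self.cache[path] = data
        self.cache.move_to_end(path)
        if len(self.cache) > self.capacity:
            # Evict the coldest entry; safe, since the core holds a copy.
            self.cache.popitem(last=False)

edge = EdgeCache(core_store={})
edge.write("/a.txt", b"alpha")
print(edge.read("/a.txt"))  # served from the local cache
```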

HDI users will be able to do their own file restores, and HDI will grow from handling 100 million files per filesystem, with four filesystems per HDI system (400 million files in all), to supporting four billion files. There will be co-ordinated management of a set or "farm" of HDI systems, and read-only geographic dispersion.

In 2012 HDI will be developed again to support IPv6 and run on HDS' Unified Compute Platform, the Hitachi Compute Blade 2000 and Compute Blade 320, with logical partition (LPAR) technology. The HDI will have its tiering and caching tuned for HDS' ISV partners, and gain global edge-to-core deduplication.

We understand this is block-level deduplication, only file-level deduplication being available now, and speculate that it will be an implementation of Permabit's Albireo technology. There will also be both read and write geographic edge dispersion.
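The distinction matters for WAN traffic. File-level deduplication stores one copy per identical file; block-level deduplication splits files into chunks and stores one copy per identical chunk, so even files that merely overlap share storage, and an edge device need only ship blocks the core hasn't already seen. A generic fixed-size-block sketch of the idea (our illustration, not Permabit's Albireo):

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real products often use variable chunking

class DedupStore:
    """Block-level dedup sketch: each unique block is stored exactly once."""
    def __init__(self):
        self.blocks = {}  # block hash -> block bytes
        self.files = {}   # file name -> ordered list of block hashes

    def put(self, name, data):
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # duplicates cost nothing new
            recipe.append(digest)
        self.files[name] = recipe

    def get(self, name):
        # Reassemble the file from its block recipe.
        return b"".join(self.blocks[d] for d in self.files[name])

store = DedupStore()
payload = b"A" * 8192
store.put("copy1.bin", payload)
store.put("copy2.bin", payload)  # an identical file adds no new blocks
print(len(store.blocks))         # 1 - both files share one unique block
```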

Content core access devices

We understand that more edge access devices will be supported, all talking to the HCP+HDDS core.

Applications such as NetBackup and CommVault's Simpana can access the core systems. There will be a new form of the HDI, called STAR/HDI+STARbuck, used in both ordinary and remote offices. We have no idea what STAR and STARbuck are, though we speculate that STAR could be an acronym, given its capitalisation. STARbuck is probably a little STAR.

The Hitachi NAS system (HNAS), based on BlueArc technology, will also access the core as will a mysterious box called ShiChi. We understand that Shichi Hiroyasu is a prolific inventor and patent holder at Hitachi. That may be meaningful in the content core context or it may not.

We asked HDS about these things and were told by our spokesperson: "I can't comment on rumours and speculation on any unannounced products/technologies."

It is our understanding that a customer's private content cloud core, the HCP/HDDS combo, can link to a public cloud HCP/HDDS combo and so provide a hybrid content cloud.

Altogether this looks like a strong and coherent vision for content storage services in the cloud from HDS and Hitachi. The HDI product line looks to be growing into a range of products, with Hitachi not exposing native object storage facilities and access methods to end users, instead using a file abstraction layer.

There is probably going to be HCP development as well; we're just not hearing about it. ®
