HDS adds ROBO on-ramp to content platform

Multi-tenants can sub-let now too

Hitachi Data Systems has added remote office caching facilities to its multi-tenant Hitachi Content Platform (HCP) archive product, and now allows tenants to sub-let storage with individual archive policies for each sub-tenant.

HCP is HDS's archive platform, with up to 40 billion objects stored in an Ethernet cluster of ingest and retrieval nodes, offering up to 40PB of capacity. It is targeted at private and public cloud providers, with each tenant (user) having its own namespace and specific policies concerning retention, compression, replication, etc. It uses HDS's VSP, USP, and AMS storage arrays as its storage platform, with the HCP nodes providing the archive functionality layer on top of them.

Version 4.0 increases the granularity of the multi-tenant feature added to HCP last year in v3.0. A user or tenant of the HCP can now have multiple namespaces with each namespace providing storage for a component of the tenant's organisation, such as order-processing, manufacturing, sales, etc. This has been a much-requested feature from HDS's cloud and service provider customers, and is supported by extended chargeback and reporting facilities such as I/Os per namespace and total capacity consumed.
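As an illustration of the tenant/namespace model described above (the class and method names here are hypothetical, not HDS's actual API), per-namespace policies and chargeback counters might be structured like this:

```python
class Namespace:
    """One namespace within a tenant, with its own archive policies
    and usage counters for chargeback. Names are illustrative only."""
    def __init__(self, name, retention_days=0, compress=False, replicate=False):
        self.name = name
        self.retention_days = retention_days
        self.compress = compress
        self.replicate = replicate
        self.io_count = 0       # I/Os per namespace, for chargeback
        self.bytes_stored = 0   # total capacity consumed

    def put(self, obj_id, size):
        """Record an object write against this namespace's counters."""
        self.io_count += 1
        self.bytes_stored += size


class Tenant:
    """A v4.0-style tenant owning multiple namespaces, one per
    organisational component (order-processing, sales, etc.)."""
    def __init__(self, name):
        self.name = name
        self.namespaces = {}

    def add_namespace(self, ns):
        self.namespaces[ns.name] = ns

    def chargeback_report(self):
        """Per-namespace (I/O count, bytes stored) for billing."""
        return {n: (ns.io_count, ns.bytes_stored)
                for n, ns in self.namespaces.items()}
```

A service provider could then bill each department of a tenant separately, since I/Os and capacity are tallied per namespace rather than per tenant.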

HDS has also added Hitachi Data Ingest (HDI) nodes; devices with an NFS and CIFS interface that are intended for use in remote or branch offices and hold up to 4TB of data. These link up to the ingest nodes in a central HCP installation and are closely integrated with them.

According to Lynn Collier, Hitachi's EMEA software and solutions director, the HDI systems should not be regarded as NAS heads but as caches. Each HDI system can be a two-node cluster and, Collier said, thousands of users can be supported on HDI, with Active Directory and LDAP integration.

When an HDI node becomes full and more data is added, existing data is sent to the central HCP site, with a stub left behind so users can still "see" the data and access it if they wish; on access, the data is pushed back out to the HDI node.

An algorithm in the system keeps data in the HDI cache for as long as it remains active. There is also no need to back up an HDI system, Collier said, because of this automated transmission of data to central HCP.
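The stub-and-recall mechanism described above can be sketched as a toy cache model (this is an illustration of the general technique, assuming simple least-recently-used eviction; HDS has not disclosed its actual algorithm):

```python
class Stub:
    """Placeholder left in the cache after a file migrates centrally."""
    def __init__(self, name):
        self.name = name


class HDICache:
    """Toy model of stub-based tiering: when the local cache fills,
    the coldest files are migrated to the central store and replaced
    by stubs; reading a stub recalls the data. Illustrative only."""
    def __init__(self, capacity, central):
        self.capacity = capacity   # bytes available locally
        self.used = 0
        self.local = {}            # resident file data
        self.stubs = set()         # names stubbed out to central
        self.central = central     # stands in for the central HCP site
        self.order = []            # access order, coldest first

    def _touch(self, name):
        if name in self.order:
            self.order.remove(name)
        self.order.append(name)

    def _make_room(self, size):
        while self.used + size > self.capacity and self.order:
            victim = self.order.pop(0)      # coldest resident file
            data = self.local.pop(victim)
            self.central[victim] = data     # migrate to central HCP
            self.stubs.add(victim)          # leave a stub behind
            self.used -= len(data)

    def write(self, name, data):
        if name in self.local:              # overwrite: release old space
            self.used -= len(self.local[name])
        self._make_room(len(data))
        self.local[name] = data
        self.used += len(data)
        self.stubs.discard(name)
        self._touch(name)

    def read(self, name):
        if name in self.stubs:              # stub hit: recall from central
            self.write(name, self.central[name])
        self._touch(name)
        return self.local[name]
```

Because every migrated file already exists at the central site, the local node holds no unique data, which is the reasoning behind Collier's claim that HDI needs no separate backup.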

The data being sent to the central HCP installation is not deduplicated. That feature is being looked at by HDS and may appear on the HCP roadmap.

HDS has an ongoing initiative to integrate third-party search, e-Discovery, legal hold and compliance applications with HCP. We understand that there may be HDS appliances built, an HDI system plus third-party software, to provide simplified access to HCP-stored data by the applications.

The HCP search function can search in a federated way, as network-attached storage (NAS) devices with a network link to HCP can have their content searched by HCP. There is a relationship with CommVault, with an HCP API providing HCP access to Simpana facilities and Simpana access to HCP-stored data. There is also a relationship with Symantec and HCP can receive streaming data input from Enterprise Vault.

HDS sees the main competition for HCP as EMC's Centera and NetApp's SnapVault. It positions HCP as a highly scalable content storage system for private or public clouds, with Collier saying the HDI nodes provide a cloud on-ramp in remote offices or at public cloud access points.

The HCP v4.0 and HDI products are available immediately. ®
