HDS private cloud storage product is 'beautiful'

Killer HCP+HDI product

The second day of the Hitachi Data Systems (HDS) Bloggers event is finished and I'm writing this piece on the airplane flying back home to Italy.

HDS people joked about their unified/converged stack architecture while presenting their stack proposition, saying several times that their stack is uni-verged (unified and/or converged).

It will be a surprise to many of you but HDS has not one but two distinct blade server offerings: BladeSymphony 320 and 2000, targeting different markets. The 320 is for small environments while the 2000 is aimed at large datacentre customers.

BladeSymphony 2000

The BladeSymphony 2000 series features can be summarized in a few very interesting points:

  • Firmware-level virtualization (as close as you can get to hardware partitioning on x86)
  • Intel 5600 and 7500 CPU support
  • Up to four blades can be joined into a single SMP system with up to eight 8-core CPUs. Wow!
  • Almost linear scalability for expanded machines! (another wow!)
  • Very well balanced architecture with powerful I/O capabilities (it also has an external PCIe expander box to get more PCI slots)

The most noteworthy feature of this platform is its partitioning capability: you can partition the blades in hardware, and you don't need specific drivers for the major operating systems. Windows and Linux (RHEL and SuSE) are supported. This capability has a mainframe-like name: "LPAR" (as you probably know, HDS is still proud of its mainframe roots). Or you can use hypervisors like VMware or Hyper-V.

But I didn't see anything related to converged Ethernet, I/O virtualization capabilities or management tools.

BladeSymphony 320

BladeSymphony 320 is a compact, very dense (6 rack units) chassis with ten 2-way blade slots, without the virtualization features mentioned above; a cheaper and simpler product. As with many other vendors, you can choose among multiple blade options (e.g. a storage blade full of hard drives).

The system is well designed, with some cool features like hot-swap components and automated blade failover (the failover keeps the blades stateless, with the chassis controller tracking identities such as MAC addresses and WWNs).

The blade platform has centralized management (though we didn't have the chance to see it live) and built-in switches too, for both SAN and networking. The networking part is nothing to be excited about: SAN ports and Layer 3 network switches with uplink ports.

HDS has some preconfigured, pre-cabled, pre-installed and, most importantly, certified stacks with blades and midrange or high-end storage. I know nothing about services and support, but HDS has an overall good service department and I can imagine that it will be prepared when these systems hit the EU and US markets.

I'm sure that HDS can be a good player in the datacentre space with these blades but they need to work hard to improve the networking side to become a serious competitor to Cisco or, in some cases, even HP!

Private Cloud made easy

From my point of view, the greatest thing I saw in these two HDS Blogger days was the HCP (Hitachi Content Platform) coupled with HDI (Hitachi Data Ingestor). It's a killer product with which to build true and easily deployable private cloud storage.

The Ingestors are simple appliances to fit any pocket, ranging from a simple virtual machine to a full-featured, clustered system with local storage, acting as CIFS/NFS gateways to a central object repository: the HCP. The architectural design is so simple that it's genius!

If you already know vendors like Nasuni you can easily understand what I mean. It comes with a phenomenal advantage for the private cloud because it's a whole object-based architecture: the Ingestors manage files as objects, sync them to the central repository and act as a local cache, so it's virtually unlimited in space. You need to take care of the local cache size for performance reasons only.
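Conceptually, each Ingestor behaves like a size-bounded, write-through cache sitting in front of the central object store: every write is synced to the repository, the cache only holds the hot working set, and a cache miss simply refetches from the store. The sketch below is a minimal illustration of that idea; the class names (`ObjectStore`, `Ingestor`) and eviction policy are my assumptions for the example, not HDS code.

```python
from collections import OrderedDict

class ObjectStore:
    """Stand-in for the central repository (the HCP in this architecture)."""
    def __init__(self):
        self.objects = {}

    def put(self, key, data):
        self.objects[key] = data

    def get(self, key):
        return self.objects[key]

class Ingestor:
    """Illustrative gateway: bounded local LRU cache, write-through to the store.

    Capacity bounds only the cache, not total storage, which is why the
    namespace seen by clients is "virtually unlimited in space": cache size
    matters for performance, not for how much you can store.
    """
    def __init__(self, store, capacity=2):
        self.store = store
        self.capacity = capacity
        self.cache = OrderedDict()

    def write(self, key, data):
        self.store.put(key, data)          # sync the object to the repository
        self._cache_put(key, data)         # and keep it hot locally

    def read(self, key):
        if key in self.cache:              # fast path: local cache hit
            self.cache.move_to_end(key)
            return self.cache[key]
        data = self.store.get(key)         # miss: fetch from the repository
        self._cache_put(key, data)
        return data

    def _cache_put(self, key, data):
        self.cache[key] = data
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict the least recently used
```

Writing three objects through a two-slot cache evicts the oldest locally, yet it remains readable because the repository holds every object.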

The HCP maintains a copy of all the objects, has features like dedupe (only at the object level for now), and grants access to objects via HTTP through a REST API. Multi-tenancy and security are built in at the foundation layer of the product architecture, and replication options are very granular.
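To make the HTTP/REST access pattern concrete: a tenant-scoped object store typically addresses each object by a URL that encodes the tenant, a namespace, and the object key. The URL layout below is purely illustrative, assumed for this example rather than taken from HCP's documented API.

```python
from urllib.parse import quote

def object_url(host, tenant, namespace, key):
    """Build a per-tenant object URL, percent-encoding the object key.

    Hypothetical scheme: tenant as a subdomain (one way multi-tenancy can
    be surfaced over HTTP), namespace and key in the path. Not HCP's
    documented layout; shown only to illustrate REST-style object access.
    """
    return "https://{t}.{h}/rest/{ns}/{k}".format(
        t=tenant, h=host, ns=namespace, k=quote(key, safe="/"))
```

A client would then GET or PUT against such a URL with its credentials, one object per request.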

I repeat to myself that what I'm seeing is a "beautiful product", but still, some questions come to mind, and the first one is: "Why aren't they selling this product like hot cakes?" This is a killer application because it's a different kind of unified storage: not blocks+files but files+objects. Many vendors are talking about cloud without even a real cloudy product in their offering. On the contrary, HDS has a real one and this is the first time I have heard about it.

I'm probably not close enough to HDS to know all about their product line, but I'm probably not alone: many of the bloggers in the room today had never seen, or even heard of, HDI before. Indeed, the HDS cloud message is still not clear, and the risk is that, as has occurred in the past, they'll perform poorly in execution. HDS has a good vision, product, engineering and architecture, but it isn't communicating, or evangelizing, it to customers in the right way.

The event wrapped up with a great speech from David Merrill on storage economics. I strongly suggest you follow his blog because he is a mind opener when the discussion moves from TCA (total cost of acquisition) to TCO (total cost of ownership). That's all I have to say. It's been a good event, a good networking opportunity and a really good way to have first-hand information straight from the horse's mouth.

*Disclaimer: HDS invited me to this event and paid for travel and accommodation, but I'm not under any obligation to write any material about this event. ®

Enrico Signoretti is the CEO of Cinetica, a small consultancy firm in Italy, which offers services to medium/large companies in finance, manufacturing, and outsourcing. The company has partnerships with Oracle, Dell, VMware, Compellent and NetApp.
