HP whips out blades for future
Post-modular array plots afoot
Comment

HP's next-generation arrays will be based on a scale-out, virtualised storage architecture using bladed storage processors and a separate storage management software layer - oh, and industry-standard drives and components.
This is the message being put out by HP's new StorageWorks EMEA VP, Garry Veale, fresh from Copan and just 42 days into the post at HP.
This HP storage way is a three-layer deal: industry-standard drives and components as the base layer; storage processing blades presenting, organising and protecting their local base layer's capacity; and a storage management layer, close to the storage processor blades but separate from them, presenting and organising the system's storage facilities.
The architecture will take cues from the ExDS9100 scale-out filer storage products and LeftHand Networks' storage virtualisation capabilities. The idea is to replicate in the storage space what has happened with HP servers. There, Veale says, complex and often proprietary rack and tower servers are being replaced with a virtualised bladed server infrastructure that is more energy- and space-efficient, has greater flexibility and lowers server acquisition and running costs.
Storage functions such as data replication or deduplication could be added via software and possibly additional storage processors.
Such a bladed, scale-out, virtualised storage product could suit both small/medium business (SMB) and enterprise requirements, but not necessarily the high-end data centre requirement for bullet-proof data storage currently met by HP's XP monolithic arrays. These, Veale thinks, will, like mainframes, always be with us, because they offer a high-end level of storage service that modular or post-modular, scale-out arrays won't be able to match.
Veale also said that such a next-generation storage architecture could be used for cloud storage needs. There would still be a need, though, for dedicated niche products such as archival storage.
Let's go further
This is as far as Garry Veale would go. We can speculate about more detail, though, and might envisage storage processors running LeftHand Networks' storage virtualisation software or some derivative of it. These processors would be based on some variant of a multi-core Xeon chip. They would look after drive enclosures likely using SAS controllers front-ending some combination of solid state drives (SSDs), SAS performance drives and SATA capacity drives, possibly in 2.5-inch form factors for performance and 3.5-inch for capacity.
We should be thinking of a storage processor and drive array enclosure as a subsystem or node front-ended by storage management software that could run in a separate server blade. It's probable that such nodes will be organised into a cluster - although Veale did not use the cluster word - with load-balancing and protection against node failure built in.
New nodes could be added to the cluster and their capacity automatically used. It is likely that I/O performance, storage processor performance, and storage capacity in the separate SSD, SAS and SATA tiers could all be scaled independently or together.
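To make the scale-out idea concrete, here is a rough Python sketch of a cluster pooling the tiered capacity of its nodes. The class names, tier labels and capacities are our own illustration, not anything HP has disclosed:

```python
# Hypothetical sketch of a scale-out storage cluster in which each
# node contributes SSD, SAS and SATA capacity independently.
# All names and numbers are illustrative, not HP's design.

class StorageNode:
    def __init__(self, ssd_gb=0, sas_gb=0, sata_gb=0):
        # Capacity this node contributes to each of the three tiers, in GB.
        self.tiers = {"ssd": ssd_gb, "sas": sas_gb, "sata": sata_gb}

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        # New nodes join the cluster and their capacity is
        # automatically pooled - the scale-out property.
        self.nodes.append(node)

    def capacity(self, tier):
        # Total cluster capacity in a tier is the sum across nodes.
        return sum(n.tiers[tier] for n in self.nodes)

cluster = Cluster()
cluster.add_node(StorageNode(ssd_gb=200, sas_gb=2_000, sata_gb=10_000))
# A capacity-heavy node grows the SATA tier without touching SSD or SAS.
cluster.add_node(StorageNode(sata_gb=20_000))
print(cluster.capacity("ssd"))   # 200
print(cluster.capacity("sata"))  # 30000
```

The point of the sketch is the last two lines: adding a node that carries only SATA drives scales one tier while leaving the others alone, which is the independent-scaling property described above.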
Various modes of host server access should be supportable, meaning SCSI block via Fibre Channel, and FCoE and iSCSI via Ethernet. A file interface, supporting CIFS and NFS, could be added via a dedicated storage processor and software. Additional software and storage processor combinations could be used to automatically move data between storage tiers.
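The automatic tier-movement idea can be sketched as a simple placement policy. The thresholds and tier names below are our assumptions for illustration, not a description of any HP software:

```python
# Hypothetical tier-migration policy: hot data is promoted to SSD,
# cold data demoted to SATA, everything else sits on SAS.
# Thresholds are invented for illustration only.

SSD_THRESHOLD = 100   # accesses above which a block counts as "hot"
SATA_THRESHOLD = 5    # accesses below which a block counts as "cold"

def place_tier(access_count):
    """Pick a storage tier for a data block from its recent access count."""
    if access_count >= SSD_THRESHOLD:
        return "ssd"
    if access_count <= SATA_THRESHOLD:
        return "sata"
    return "sas"

# Example workload: access counts observed per data set.
blocks = {"db-index": 5_000, "mail-archive": 2, "home-dir": 40}
placement = {name: place_tier(count) for name, count in blocks.items()}
print(placement)  # {'db-index': 'ssd', 'mail-archive': 'sata', 'home-dir': 'sas'}
```

A real implementation would run such a policy continuously in the storage processor layer and move blocks between enclosures accordingly; the sketch only shows the decision step.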
Using the software/storage processor idea again, we could think of geo-clusters and a distributed storage infrastructure in a cloud.
All of these things could use the same base component set, although different implementations might well use different branding and be represented by HP as different products. This is HP's answer to the "Where do we go from here?" problem, with "here" being monolithic and modular arrays, virtualised SAN storage and clustered filers. It's what EMC is developing with common components shared between its Symmetrix, Clariion and Celerra arrays, and its Atmos product.