HP whips out blades for future

Post-modular array plots afoot

Comment HP's next-generation arrays will be based on a scale-out, virtualised storage architecture using bladed storage processors and a separate storage management software layer - oh, and industry-standard drives and components.

This is the message being put out by HP's new StorageWorks EMEA VP, Garry Veale, fresh from leaving Copan and just 42 days into the post at HP.

This HP storage way is a three-layer deal: industry-standard drives and components as the base layer; storage processing blades presenting, organising and protecting their local base layer's capacity; and a storage management layer, close to the storage processor blades but separate from them, presenting and organising the system's storage facilities.
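
To make that three-layer split a little more concrete, here is a rough Python sketch of our own - the class names (DriveEnclosure, StorageProcessorBlade, StorageManagementLayer) and the protection overhead figure are purely illustrative assumptions, not anything HP has described:

# Hypothetical model of the three layers; names and numbers are illustrative only.
from dataclasses import dataclass, field


@dataclass
class DriveEnclosure:
    """Base layer: industry-standard drives and components."""
    drives_tb: list  # raw capacity of each drive, in TB

    def raw_capacity_tb(self):
        return sum(self.drives_tb)


@dataclass
class StorageProcessorBlade:
    """Middle layer: presents, organises and protects its local enclosure's capacity."""
    enclosure: DriveEnclosure
    protection_overhead: float = 0.25  # assumed RAID/parity overhead

    def usable_capacity_tb(self):
        return self.enclosure.raw_capacity_tb() * (1 - self.protection_overhead)


@dataclass
class StorageManagementLayer:
    """Top layer: separate software presenting the whole system's storage."""
    blades: list = field(default_factory=list)

    def total_usable_tb(self):
        return sum(b.usable_capacity_tb() for b in self.blades)


if __name__ == "__main__":
    blade = StorageProcessorBlade(DriveEnclosure(drives_tb=[2] * 12))
    system = StorageManagementLayer(blades=[blade])
    print(f"System presents {system.total_usable_tb():.1f} TB usable")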

The architecture will take cues from the ExDS9100 scale-out filer storage products and LeftHand Networks' storage virtualisation capabilities. The idea is to replicate in the storage space what has happened with HP servers. There, Veale says, complex and often proprietary rack and tower servers are being replaced with a virtualised bladed server infrastructure that is more energy- and space-efficient, has greater flexibility and lowers server acquisition and running costs.

Storage functions such as data replication or deduplication could be added via software and possibly additional storage processors.

Such a bladed, scale-out, virtualised storage product could suit both small/medium business (SMB) and enterprise requirements but not necessarily the high-end data centre array requirements for bullet-proof data storage, currently met by HP's XP monolithic arrays. These, Veale thinks, like mainframes, will always be with us, because they offer a high-end level of storage service that modular or post-modular, scale-out arrays won't be able to match.

Veale also said that such a next-generation storage architecture could be used for cloud storage needs. There would still be a need, though, for dedicated niche storage products such as archival storage.

Let's go further

This is as far as Garry Veale would go. We can speculate about more detail, though, and we might envisage storage processors running LeftHand Networks storage virtualisation software or some derivative of it. These processors will be based on some variant of a multi-core Xeon chip. They will look after drive enclosures that will likely use SAS controllers front-ending some combination of solid state drives (SSDs), SAS performance and SATA capacity drives, possibly 2.5-inch form factors for performance and 3.5-inch for capacity.
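
As a purely speculative illustration of such a node's make-up - the tier mix, drive counts and sizes below are our guesses, not HP specifications - another short Python sketch:

# Illustrative only: a possible node layout matching the speculation above.
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    SSD = "ssd"    # 2.5-inch solid state, performance
    SAS = "sas"    # 2.5-inch SAS, performance
    SATA = "sata"  # 3.5-inch SATA, capacity


@dataclass
class Drive:
    tier: Tier
    capacity_gb: int
    form_factor_in: float


@dataclass
class StorageNode:
    """A multi-core Xeon storage processor plus its SAS-attached drive enclosure."""
    name: str
    cpu_cores: int
    drives: list

    def capacity_by_tier(self):
        totals = {t: 0 for t in Tier}
        for d in self.drives:
            totals[d.tier] += d.capacity_gb
        return totals


if __name__ == "__main__":
    node = StorageNode(
        name="node-01",
        cpu_cores=8,
        drives=[Drive(Tier.SSD, 200, 2.5)] * 2
             + [Drive(Tier.SAS, 450, 2.5)] * 6
             + [Drive(Tier.SATA, 2000, 3.5)] * 12,
    )
    print(node.capacity_by_tier())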

We should be thinking of a storage processor and drive array enclosure as a subsystem or node front-ended by storage management software that could run in a separate server blade. It's probable that such nodes will be organised into a cluster - although Veale did not use the cluster word - with load-balancing and protection against node failure built in.

New nodes can be added to the cluster and their capacity automatically used. It is likely that I/O performance, storage processor performance and storage capacity, in the separate SSD, SAS and SATA tiers, can all be scaled independently or together.
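
Again, purely as a sketch of the behaviour just described - the two-way replication, naive load balancing and per-tier bookkeeping below are our assumptions, not a known HP or LeftHand design:

# Hypothetical cluster behaviour: nodes join and are used automatically, and each
# block gets two copies on different nodes so one node can fail without data loss.
class Cluster:
    def __init__(self):
        self.nodes = {}        # name -> per-tier capacity (GB); tiers scale independently
        self.placements = []   # (primary_node, replica_node) for each block written

    def add_node(self, name, ssd_gb, sas_gb, sata_gb):
        # A new node is immediately part of the placement rotation.
        self.nodes[name] = {"ssd": ssd_gb, "sas": sas_gb, "sata": sata_gb}

    def place_block(self):
        # Naive load balancing: primary and replica go to the two least-loaded nodes.
        if len(self.nodes) < 2:
            raise RuntimeError("need at least two nodes to survive a node failure")
        load = {n: 0 for n in self.nodes}
        for p, r in self.placements:
            load[p] += 1
            load[r] += 1
        primary, replica = sorted(load, key=load.get)[:2]
        self.placements.append((primary, replica))
        return primary, replica

    def readable_after_failure(self, failed_node):
        # Every block keeps a copy on a node other than the failed one.
        return all(p != failed_node or r != failed_node for p, r in self.placements)


if __name__ == "__main__":
    c = Cluster()
    c.add_node("node-01", ssd_gb=400, sas_gb=2700, sata_gb=24000)
    c.add_node("node-02", ssd_gb=400, sas_gb=2700, sata_gb=24000)
    for _ in range(4):
        c.place_block()
    c.add_node("node-03", ssd_gb=0, sas_gb=0, sata_gb=48000)  # capacity-only node
    print(c.place_block())                      # the new node is used straight away
    print(c.readable_after_failure("node-01"))  # True: no block lost with one node down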

Various modes of host server access should be supportable, meaning SCSI block via Fibre Channel, and FCoE and iSCSI via Ethernet. A file interface, supporting CIFS and NFS, could be added via a dedicated storage processor and software. Additional software and storage processor combinations could be used to automatically move data between storage tiers.
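
To illustrate the two ideas in that paragraph - several access protocols fronting the same storage, and software demoting cold data down the tiers - one more hedged sketch, with the protocol lists and the idle-time policy invented for the purpose:

# Hypothetical sketch: block and file front-ends over shared storage, plus a
# simple policy that moves idle data from SSD to SAS to SATA.
import time
from dataclasses import dataclass, field

BLOCK_PROTOCOLS = {"fc", "fcoe", "iscsi"}   # SCSI block access paths
FILE_PROTOCOLS = {"cifs", "nfs"}            # file access, via a dedicated processor

TIERS = ["ssd", "sas", "sata"]              # fastest to slowest


@dataclass
class Extent:
    tier: str = "ssd"
    last_access: float = field(default_factory=time.time)


def access(extent, protocol):
    if protocol not in BLOCK_PROTOCOLS | FILE_PROTOCOLS:
        raise ValueError(f"unsupported protocol: {protocol}")
    extent.last_access = time.time()
    kind = "block" if protocol in BLOCK_PROTOCOLS else "file"
    return f"served {kind} I/O over {protocol} from {extent.tier}"


def demote_cold_extents(extents, idle_seconds):
    """Move extents that have gone cold one tier down (SSD -> SAS -> SATA)."""
    now = time.time()
    for e in extents:
        if now - e.last_access > idle_seconds and e.tier != TIERS[-1]:
            e.tier = TIERS[TIERS.index(e.tier) + 1]


if __name__ == "__main__":
    e = Extent()
    print(access(e, "iscsi"))
    e.last_access -= 3600              # pretend an hour has passed
    demote_cold_extents([e], idle_seconds=600)
    print(e.tier)                      # now "sas"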

Using the software/storage processor idea again, we could think of geo-clusters and a distributed storage infrastructure in a cloud.

All of these things could use the same base component set, although different implementations might well use different branding and be represented by HP as different products. This is HP's answer to the "Where do we go from here?" problem, with "here" being monolithic and modular arrays, virtualised SAN storage and clustered filers. It's what EMC is developing with common components shared between its Symmetrix, Clariion and Celerra arrays, and its Atmos product.
