
How to break out of the storage hardware zoo

In search of scalability


With enterprise storage now, by many estimates, costing significantly more to run than it does to buy, the need to cut those operational costs – or at least to slow their rise – is paramount.

The result is a growing desire to move away from the siloed and application-specific nature of much of today's storage market.

"Most storage vendors have completely different system architectures in different ranges," says Frank Reichart, Fujitsu's senior director of product marketing for storage.

"It has to do with their histories and the way they operate – often it derives from acquisitions and mergers. But the end result is the same: a hardware zoo."

Of course that can be a pain from the buyer's perspective. Only a few storage developers – Fujitsu and NetApp are probably the best known – manage to offer good end-to-end software and hardware compatibility across their product ranges.

Heavy lifting

If a vendor does not have broad upgradeability, then once your applications grow beyond the capabilities of its current storage platform, that could mean a fork-lift upgrade. And enterprise storage needs will inevitably change over time, especially when applications become successful and win more users.

That in turn means the unwelcome requirement to migrate your data and applications to an entirely new storage architecture.

It is also likely to cost you more to start with, claims Reichart. He says that pretty much everything except the controller board is standard across the Fujitsu range, including the management software.

“If you know you can't upgrade, you buy a much bigger storage subsystem than you actually need, just in case your applications grow more than expected,” he says.

Nothing succeeds like excess

Alex D'Anna, director of solutions consulting EMEA at storage performance specialist Virtual Instruments, agrees.

“Customers typically ask their vendor for advice on sizing. The vendor wants the customers to be happy, so they over-provision,” he says.

The hardware zoo is an even bigger pain if each of those platforms must also be independently managed, provisioned and monitored via its own idiosyncratic management interface.

As IDC Europe's storage analysts wrote in a white paper considering storage for the next decade: "Ease of use, including easy deployment, is paramount, as well as policy-driven automation capabilities and a reduction in the number of storage systems management points, either by fewer storage arrays or through multi-systems management.

"Checklist items include an easy to understand and productive unified management interface preferably for all block and file storage protocols, effective caching and automated tiering of data, group policy settings, as well as advanced monitoring and reporting capabilities."

Each of those different storage system architectures was developed for a reason, though. Enterprise storage needs to span a wide range of applications, from small and affordable through mid-range access and capacity to high-end capacity and performance, and each has different priorities and needs.

This can lead to all sorts of challenges for buyers, not least because the various product ranges developed to cater for those different needs are often incompatible with each other.

In addition, each product range is carefully packaged and positioned, not only to meet the needs of its target market but also to differentiate it from its siblings – and perhaps also to make it less likely that customers will trade down to save money.

The hardware zoo causes problems too when implementing disaster recovery. Most replication schemes require the same storage hardware at each end of the link, leading many companies to over-provision their disaster recovery sites at considerable expense.

That said, application-specific storage remains ideal for focused tasks where only the best niche technology will do. When you want absolutely the fastest video server or the deepest archive, point products are likely to be your first port of call.



