How to break out of the storage hardware zoo
In search of scalability
With enterprise storage now costing significantly more to run than to buy, according to many estimates, the need to cut those operational costs – or at least to slow their rise – is paramount.
The result is a growing desire to move away from the siloed and application-specific nature of much of today's storage market.
"Most storage vendors have completely different system architectures in different ranges," says Frank Reichart, Fujitsu's senior director of product marketing for storage.
"It has to do with their histories and the way they operate – often it derives from acquisitions and mergers. But the end result is the same: a hardware zoo."
Of course, the zoo can be a pain from the purchasing perspective too. Only a few storage developers – Fujitsu and NetApp are probably the best known – manage good end-to-end software and hardware compatibility across their product ranges.
If a vendor does not have broad upgradeability, then once your applications grow beyond the capabilities of its current storage platform, that could mean a fork-lift upgrade. And enterprise storage needs will inevitably change over time, especially when applications become successful and win more users.
That in turn means the unwelcome requirement to migrate your data and applications to an entirely new storage architecture.
A platform you cannot upgrade is also likely to cost you more up front, claims Reichart. He says that pretty much everything except the controller board is standard across the Fujitsu range, including the management software.
“If you know you can't upgrade, you buy a much bigger storage subsystem than you actually need, just in case your applications grow more than expected,” he says.
Nothing succeeds like excess
Alex D'Anna, director of solutions consulting EMEA at storage performance specialist Virtual Instruments, agrees.
“Customers typically ask their vendor for advice on sizing. The vendor wants the customers to be happy, so they over-provision,” he says.
The hardware zoo is an even bigger pain if each of those platforms must also be independently managed, provisioned and monitored via its own idiosyncratic management interface.
As IDC Europe's storage analysts wrote in a white paper considering storage for the next decade: "Ease of use, including easy deployment, is paramount, as well as policy-driven automation capabilities and a reduction in the number of storage systems management points, either by fewer storage arrays or through multi-systems management.
"Checklist items include an easy to understand and productive unified management interface preferably for all block and file storage protocols, effective caching and automated tiering of data, group policy settings, as well as advanced monitoring and reporting capabilities."
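The automated tiering on IDC's checklist boils down to policy: hot data goes to fast media, cold data to cheap capacity. A minimal illustrative sketch of such a policy – the thresholds and tier names here are invented, not taken from any product:

```python
# Toy illustration of policy-driven tiering: map a block's recent
# access rate to a storage tier. Thresholds and tier names are
# invented for illustration only.
def choose_tier(accesses_per_day: int) -> str:
    if accesses_per_day >= 1000:
        return "flash"      # hot data: low latency matters most
    if accesses_per_day >= 10:
        return "sas"        # warm data: balanced cost/performance
    return "nearline"       # cold data: cheap, high-capacity disk

print(choose_tier(5000))  # flash
print(choose_tier(3))     # nearline
```

Real arrays apply policies like this continuously and automatically, which is exactly the "policy-driven automation" the analysts are asking for.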
Each of those different storage system architectures was developed for a reason, though. Enterprise storage needs to span a wide range of applications, from small and affordable through mid-range access and capacity to high-end capacity and performance, and each has different priorities and needs.
This can lead to all sorts of challenges for buyers, not least because the various product ranges developed to cater for those different needs are often incompatible with each other.
In addition, each product range is carefully packaged and positioned, not only to meet the needs of its target market but also to differentiate it from its siblings – and perhaps also to make it less likely that customers will trade down to save money.
The zoo causes problems too when implementing disaster recovery. Most replication schemes require the same storage hardware at each end of the link, leading many companies to expensively over-provision their disaster recovery sites.
That said, application-specific storage remains ideal for focused tasks where only the best niche technology will do. When you want absolutely the fastest video server or the deepest archive, point products are likely to be your first port of call.
That might be fine for smaller niche companies, but increasingly it is not how the wider world of data storage operates. As storage volumes rise relentlessly so does the cost of managing that storage, and the cost of management is compounded every time you add a new species of storage subsystem.
All of this is therefore driving growing interest in the idea of fully scalable storage which has uniform management tools and even re-usable components, and which can handle all of an enterprise's storage requirements.
The aim is to build a fully shared and converged storage estate that a business can use – and more importantly manage – more efficiently and effectively.
"You need to have all the tools in your toolbox now," says Kevin Brown, CEO of Ethernet-based storage developer Coraid.
"In the past everything ran in different silos: file storage, video servers, backup servers, archiving and so on. The vendors wanted to sell you five or six arrays, all running on different wires with different software."
He says this is driving the development of enhanced storage subsystems, equipped with flash memory caching for higher performance but also flexible enough to provide the right storage characteristics for each job.
Sometimes that means high performance, sometimes it is low cost and high capacity, and at other times it is something in between – the key is that it is all from a single box with a single point of management.
Made to measure
“So you use best practices and allow the customer to tailor storage profiles for various applications,” Brown says.
“For example, synchronous applications such as financial services versus asynchronous applications such as video surveillance.”
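Brown's distinction maps onto how the two replication modes acknowledge writes: synchronous replication confirms a write only once the remote copy has it, while asynchronous replication confirms locally and ships the data later. A toy sketch of that trade-off – the "arrays" here are plain Python lists, and real replication adds networks, write ordering and failure handling:

```python
# Toy sketch of synchronous vs asynchronous replication semantics.
# Lists stand in for storage arrays; names are illustrative only.
def write_sync(local: list, remote: list, block: bytes) -> None:
    local.append(block)
    remote.append(block)   # remote copy lands before we acknowledge:
                           # zero data loss, but every write pays the
                           # round trip (hence financial services)

def write_async(local: list, pending: list, block: bytes) -> None:
    local.append(block)
    pending.append(block)  # shipped later: fast acknowledgement, but a
                           # crash can lose recent writes (tolerable
                           # for, say, surveillance footage)

def drain(pending: list, remote: list) -> None:
    # background replication of queued asynchronous writes
    remote.extend(pending)
    pending.clear()
```

Tailoring a storage profile per application is essentially choosing which of these guarantees each workload actually needs.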
He adds that even with converged storage hardware, storage virtualisation is pretty much essential.
"The compute world used to look siloed, so its cost of management was very high. Virtualisation has been great for compute, but has actually made things worse for storage," he says.
“It's an exponentially harder management problem and we have to have a much more virtualised storage environment to match it."
An IDC survey of IT decision makers, influencers and storage administrators found "significant interest in converged infrastructure, primarily driven by the possible savings on management time, but also as a way to reduce capital investment in IT”.
It adds: “[A]bout four in ten claimed interest in converged solutions if it helped in lowering IT costs by making management easier and more efficient. The importance of improving storage cost structure scores highly across the board, regardless of company size, while ease of management is even more critical for smaller companies as they usually don't have a dedicated storage specialist, but rather a small team of IT generalists.”
Converting that general interest in converged solutions into genuine sales and customer adoption is a different story, of course. For a start, storage developers wanting to reduce that number of storage management points face a big technical challenge.
Instead of developing specialist subsystems, they must try to converge as many usage cases as possible onto a single architecture, which means they risk being a Jack of all trades.
And for buyers, there needs to be visible financial benefit, warns Bob Plumridge, chairman of the storage industry association, SNIA Europe.
"Consolidating to fewer boxes is one way to go, especially as the differences between hardware platforms are probably narrower now than they have ever been," he says.
"But migrating data is expensive and it carries risks, so you need to see real cost benefits from it. You need a good business case."
In addition, there are alternative ways to reduce the management load, or at least the number of points of management. Third-party management tools can help you manage a broad storage estate, for example, although you may still need to switch to the native toolsets for advanced operations. Other potential solutions are virtualisation and outsourcing.
"The practical reality for most reasonably sized organisations is that they don't buy all their storage from one vendor, and probably won't do so for a good few years to come," says Plumridge.
"For some it's due to mergers and acquisitions, others want multiple suppliers to negotiate better deals. But yes, that can leave a management headache."
He adds that SNIA's standard for interoperable storage management, called SMI-S, "is still being enhanced and developed, and has been reasonably successful at standardising and dealing with the basic functions”.
He acknowledges, though, that standards tend to lag behind vendors' product development, so they often provide good coverage only at the lowest common denominator level.
"Another way to address the issue is to virtualise the storage, whether through a virtualisation engine – such as EMC's VPLEX, Hitachi's VSP or IBM's SVC – or something like DataCore, then you manage the virtual layer," he says.
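A virtualisation engine of the sort Plumridge mentions presents one logical volume namespace and maps it onto whatever back-end arrays sit underneath, so the administrator only ever manages the virtual layer. A much-simplified sketch of that mapping idea – the array and volume names are invented for illustration:

```python
# Much-simplified sketch of a storage virtualisation layer: one
# logical namespace mapped onto heterogeneous back-end arrays.
# Array and LUN names are invented for illustration.
class VirtualLayer:
    def __init__(self):
        self.mapping = {}   # virtual volume -> (backend array, backend LUN)

    def provision(self, vvol: str, backend: str, lun: str) -> None:
        self.mapping[vvol] = (backend, lun)

    def locate(self, vvol: str) -> tuple:
        return self.mapping[vvol]

    def migrate(self, vvol: str, new_backend: str, new_lun: str) -> None:
        # data movement happens behind the stable virtual identity, so
        # hosts keep addressing the same virtual volume throughout
        self.mapping[vvol] = (new_backend, new_lun)

v = VirtualLayer()
v.provision("db-vol", "array-a", "lun-3")
v.migrate("db-vol", "array-b", "lun-9")
print(v.locate("db-vol"))  # ('array-b', 'lun-9')
```

This indirection is what makes migration between incompatible arrays – the expensive, risky step Plumridge warns about – far less disruptive.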
"And then we are finally starting to see the outsourcing of storage management infrastructures, where the customer has the storage arrays on-premise but a vendor or specialist remotely manages them."
The advantage, of course, is that third-party storage specialists can afford the specific technology skills needed to manage all those different storage platforms. ®