Why storage needs Quality of Service

Makes shared storage play nicely

Extra portions

The idea is that once total demand for storage performance exceeds the system's ability to deliver IOPS or MB/s, the system stops granting I/O requests on a first-come, first-served basis and instead ensures each server gets its minimum number of IOPS. Any remaining headroom is then used to deliver additional IOPS to high-priority workloads.
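The policy described above can be sketched in a few lines. This is an illustrative model only, not any vendor's implementation; the server names, minimums and priority weights are invented for the example.

```python
# Sketch of min-guarantee QoS allocation: under contention, every workload
# first receives its guaranteed minimum IOPS, then the leftover headroom is
# shared out in proportion to priority weight.

def allocate_iops(total_iops, workloads):
    """workloads: list of dicts with 'name', 'min_iops' and 'priority'."""
    mins = sum(w["min_iops"] for w in workloads)
    if mins > total_iops:
        raise ValueError("system cannot honour all minimum guarantees")
    # Step 1: grant each workload its guaranteed floor.
    grants = {w["name"]: w["min_iops"] for w in workloads}
    # Step 2: divide remaining headroom by priority weight.
    headroom = total_iops - mins
    total_weight = sum(w["priority"] for w in workloads)
    for w in workloads:
        grants[w["name"]] += headroom * w["priority"] // total_weight
    return grants

# Example: 100,000 IOPS of array capacity shared by three servers
# (figures invented for illustration).
servers = [
    {"name": "db",     "min_iops": 30000, "priority": 3},
    {"name": "web",    "min_iops": 10000, "priority": 1},
    {"name": "backup", "min_iops": 5000,  "priority": 1},
]
print(allocate_iops(100000, servers))
```

With these figures the database keeps its 30,000 IOPS floor and, being the highest-priority workload, takes the largest slice of the 55,000 IOPS of headroom.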

The incorporation of flash is also important in making storage QoS feasible: caching hot data in flash leaves many more IOPS available for prioritisation.

The final key element is automation, according to Reichart. "Actually setting QoS parameters is quite another matter," he says.

"Typically the metrics would be response time, but to get say sub-5ms for a database is a very complex task. You could have to play with 20 different parameters. Even then it's a moving target because once you have set up the QoS you want, another application could come in and you have to start again from scratch.

"What you want is automated or semi-automated systems that are self-optimising so the administrators can just define the requirements and let the system do the rest. It also needs more reporting and monitoring – which LUNs use which storage, which application is on which storage tier.

"The alternative is that people may even de-consolidate or look for point solutions, which is clearly inefficient and leads to over-provisioning."

That means capacity planning and modelling, with careful attention to all the performance data that your storage systems are already generating.

Start by modelling the physical workload capacity, then model the hosts; for example, you can say a database server needs X I/Os.

Then you can begin to define monitoring policies and migrate high-performance hosts onto a high-performance storage tier or low-performance hosts onto low-performance storage.

A good technique can be to divide your primary storage into tiers, typically for high, medium and low performance. Next, you define service-level agreements for each tier: how many I/Os the storage can handle, what limit it should have on latency, what availability levels it should offer and so on.
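The tiering-and-placement steps above can be sketched as follows. The tier names, SLA figures and host measurements are all invented for illustration; real SLAs would come from your own monitoring data.

```python
# Illustrative sketch of tiered placement: define an SLA per tier, then
# place each host on the cheapest tier whose SLA covers its measured demand.

TIERS = {  # tier -> SLA (figures are invented examples)
    "high":   {"max_iops": 50000, "latency_ms": 2,  "availability": "99.999%"},
    "medium": {"max_iops": 15000, "latency_ms": 10, "availability": "99.99%"},
    "low":    {"max_iops": 2000,  "latency_ms": 30, "availability": "99.9%"},
}

def place_host(measured_iops, required_latency_ms):
    """Pick the cheapest tier that can absorb the host's IOPS demand
    and whose latency ceiling is within the host's requirement."""
    for name in ("low", "medium", "high"):  # cheapest tier first
        sla = TIERS[name]
        if (measured_iops <= sla["max_iops"]
                and required_latency_ms >= sla["latency_ms"]):
            return name
    return "high"  # demand exceeds every SLA: flag for capacity planning

# A database host needing 12,000 IOPS at sub-8ms latency: the medium
# tier's 10ms ceiling misses the latency requirement, so it lands on high.
print(place_host(measured_iops=12000, required_latency_ms=8))
```

Once hosts are mapped to tiers this way, the monitoring policies the article mentions reduce to checking each tier's measured IOPS and latency against its SLA figures.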

Danger ahead

Ideally, you then want the system to do as much as possible of the repetitive grunt-work of data movement for you – and thankfully, the nuts-and-bolts tools to automate almost all of it already exist.

Matthiesen warns, however, that although some of these processes can and should be automated, the tools involved are powerful and can be dangerous. It is a case of great power coming with great responsibility – and great risk.

For example, moving a host from one tier to another can involve hundreds of megabytes and require lots of I/O, and just moving that amount of data around is going to affect your other systems.

"You have to go with caution because this is the core of your company," Matthiesen says. "You can do horrible things moving data around. It needs great care.

"Major configuration changes still need human knowledge because it takes business understanding as well as technical insight." ®
