
Are you crying out for virtualised storage tiering?

Tiers before bedtime


You can virtualise pretty much any technology these days, so the thinking goes, and that includes storage. This means hiding what's going on behind a virtualisation layer - including tiering. But why tier?

Remember the old maxim that you can have any two of fast, cheap and good, but not all three? It's no secret that the faster your storage, the more you pay. Driven by the cost of enterprise storage and by data growth rates of around 50 percent, enterprises are adopting tiering as one leg of a solution to the problem, the others being deduplication and thin provisioning.

The concept of tiering is essentially a simple one: store data on the type of storage that's most appropriate on a cost-benefit basis. In other words, the more valuable a piece of data is, the faster - and more expensive - the storage infrastructure on which it should be stored. The converse is also true.


So instead of storing everything on one storage medium, you put data to which the fastest access is required on the fastest-performing storage system, while data for which long access times are not a problem lives on the slowest, cheapest tier. In practice, this usually means that, for example, mission-critical databases live on high-speed 15k rpm SAS disks, or even SSDs, while end users' Windows shares sit on SATA disks. Long-term archives are held on tape or MAID (massive array of idle disks), where it doesn't matter that access times can be measured in minutes or even hours.
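That mapping of workload to tier can be sketched as a simple lookup. This is purely illustrative - the workload classes and tier names below are assumptions for the sake of example, not taken from any vendor's product:

```python
# Hypothetical mapping of workload class to storage tier, following the
# examples in the text: hot databases on SSD/15k SAS, shares on SATA,
# archives on tape or MAID.
TIER_FOR_WORKLOAD = {
    "mission_critical_db": "ssd_or_15k_sas",
    "user_file_shares":    "sata",
    "long_term_archive":   "tape_or_maid",
}

def place(workload: str) -> str:
    """Return the tier a workload's data should land on, defaulting to SATA."""
    return TIER_FOR_WORKLOAD.get(workload, "sata")
```

In a real array this decision is made per block or per file by the controller, not per named workload, but the cost-benefit trade-off is the same.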

The alternative is to leave things as they are, with all data on the same storage system – a single-tier configuration – which in most cases is not an option. Given today's data growth rates, it would mean simply adding more storage every couple of years and then having to re-organise it to fit the new capacity: a very expensive, disruptive and time-consuming exercise.

The question is how you get from here to there. It isn't cost-effective to migrate data manually, so those vendors that implement a form of automated tiering - and that's most of them - do so with policies. Compellent was first out of the blocks with its Data Progression feature, which provides policy-driven, block-level automation: it detects when a piece of data has been accessed and moves it up a tier, and if the data is not accessed for a while, it is marked as aged and can be moved down a tier.

In theory, you set the policies for how aggressive you want this process to be, and the software figures out how to do it while leaving some disk space free. In practice, it's not quite that simple, as you will still want to mark some types of data as suitable for particular tiers on business-related or other criteria, rather than on access time alone.
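The promote-on-access, demote-when-aged behaviour described above can be sketched in a few lines. This is a minimal model of the general technique, not of any vendor's implementation; the tier names and the single idle-time policy knob are assumptions:

```python
from dataclasses import dataclass, field
import time

# Hypothetical tier ladder, fastest (and most expensive) first.
TIERS = ["ssd", "sas_15k", "sata", "tape"]

@dataclass
class Block:
    """A unit of data tracked by a (toy) tiering engine."""
    tier: str = "sata"
    last_access: float = field(default_factory=time.time)

    def touch(self) -> None:
        """Record an access; recently used data is promoted one tier."""
        self.last_access = time.time()
        idx = TIERS.index(self.tier)
        if idx > 0:
            self.tier = TIERS[idx - 1]

def demote_aged(blocks: list[Block], max_idle_seconds: float) -> None:
    """Policy sweep: blocks idle longer than the threshold move down a tier."""
    now = time.time()
    for b in blocks:
        idx = TIERS.index(b.tier)
        if now - b.last_access > max_idle_seconds and idx < len(TIERS) - 1:
            b.tier = TIERS[idx + 1]
```

A real engine would also weigh the business-driven overrides mentioned above - pinning certain data to a tier regardless of its access pattern - rather than relying on idle time alone.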

Other systems work at the file or even the LUN level. Even if your storage system doesn't offer this feature, you can set up a tiering regime by adding a controller that virtualises the underlying storage, allowing you to allocate tier levels to pools of heterogeneous storage. While it automates migration, this technique can't be described as a truly tiered system, but it can help you move in that direction while meeting an immediate need. IBM and FalconStor are among those who sell such controllers.

Storage consultant Marc Staimer, of Dragon Slayer Consulting, described data migration as "a very stressful, manually-intensive task, so tiering is only practical when it's policy-based."

So the key is to aim to automate tiering and migration as much as possible, which can involve a lot of upfront work to ensure that data is correctly categorised. ®
