Are you crying out for virtualised storage tiering?

Tiers before bedtime

You can virtualise pretty much any technology these days, so the thinking goes, and that includes storage. This means hiding what's going on behind a virtualisation layer - including tiering. But why tier?

Remember the old trade-off that you can have any two of faster, cheaper and better, but never all three? It's no secret that the faster your storage, the more you pay. Driven by the cost of enterprise storage and by data growth rates of around 50 per cent, enterprises are adopting tiering as one leg of a resolution to the problem, alongside deduplication and thin provisioning.

The concept of tiering is essentially a simple one, and entails storing data on the type of storage that's most appropriate on a cost-benefit basis. In other words, the more valuable a piece of data is, the faster - and more expensive - the storage infrastructure on which it should be stored. The converse is also true.

So instead of storing everything on one medium, you put the data that needs the fastest access on the fastest-performing storage, while data that can tolerate long access times lives on the slowest, cheapest tier. In practice this usually means that, for example, mission-critical databases live on high-speed 15k rpm SAS disks, or even SSDs, while end users' Windows shares sit on SATA disks. Long-term archives are held on tape or MAID (massive array of idle disks), where it doesn't matter that access times can be measured in minutes or even hours.
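
As a rough illustration of that cost-benefit mapping, here is a minimal sketch in Python. The tier names, latency figures and relative costs are invented for the example, not drawn from any vendor's product:

# Illustrative only: tiers listed as (name, typical access latency in
# seconds, relative cost per GB); all figures are hypothetical.
TIERS = [
    ("tier0-ssd",       0.0001, 10.0),
    ("tier1-15k-sas",   0.005,   4.0),
    ("tier2-sata",      0.015,   1.0),
    ("tier3-tape-maid", 120.0,   0.1),
]

def cheapest_tier(max_latency_s):
    """Pick the cheapest tier that still meets the latency requirement."""
    candidates = [t for t in TIERS if t[1] <= max_latency_s]
    if not candidates:
        raise ValueError("no tier is fast enough")
    return min(candidates, key=lambda t: t[2])[0]

print(cheapest_tier(0.01))   # mission-critical database -> tier1-15k-sas
print(cheapest_tier(3600))   # long-term archive -> tier3-tape-maid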

The alternative is to leave things as they are, with all data on the same storage system – a single-tier configuration – which in most cases is not an option. Given today's data growth rates, it would mean simply adding more storage every couple of years and then having to re-organise it to fit the new capacity: a very expensive, disruptive and time-consuming exercise.

The question is how you get from here to there. It isn't cost-effective to migrate data manually, so the vendors that implement a form of automated tiering - and that's most of them - do so with policies. Compellent was first out of the blocks with its Data Progression feature, which provides policy-driven, block-level automation: it detects when a piece of data has been accessed and moves it up a tier, and if the data then goes unaccessed for a while it is marked as aged and can be moved down a tier.
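
The shape of that promote-on-access, demote-when-aged loop can be sketched in a few lines of Python. This is a toy model of the general idea, assuming a simple last-access-time heuristic; it is not Compellent's actual implementation:

import time

TIER_FASTEST, TIER_SLOWEST = 0, 2
AGEING_THRESHOLD_S = 14 * 24 * 3600   # policy knob: demote after 14 idle days

class Block:
    def __init__(self):
        self.tier = TIER_SLOWEST          # new data starts on the cheap tier
        self.last_access = time.time()

    def on_read(self):
        self.last_access = time.time()
        if self.tier > TIER_FASTEST:      # promote one tier on access
            self.tier -= 1

def demote_aged(blocks, now=None):
    """Periodic background sweep: move idle, aged blocks down a tier."""
    now = now or time.time()
    for b in blocks:
        if b.tier < TIER_SLOWEST and now - b.last_access > AGEING_THRESHOLD_S:
            b.tier += 1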

In theory, you set the policies for how aggressive you want this process to be, and the software works out how to apply them while leaving headroom on each tier. In practice it's not quite that simple: you will still want to mark some types of data as suitable for particular tiers on business-related or other criteria, rather than on access time alone.
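
Such a policy might therefore combine an aggressiveness setting and per-tier headroom with explicit business-rule pins that override the access-time heuristics. A sketch, with every name and value invented for illustration:

from fnmatch import fnmatch

# Hypothetical policy: all keys, names and numbers here are made up.
policy = {
    "aggressiveness": "moderate",    # how quickly data is promoted/demoted
    "min_free_pct": {                # headroom the data mover must leave
        "tier0-ssd": 20,
        "tier1-sas": 10,
        "tier2-sata": 5,
    },
    "pins": [                        # business rules trump access time
        {"match": "finance/ledger/*",     "tier": "tier0-ssd"},
        {"match": "compliance/archive/*", "tier": "tier3-tape"},
    ],
}

def pinned_tier(path):
    """Return the pinned tier for a path, or None to let heuristics decide."""
    for rule in policy["pins"]:
        if fnmatch(path, rule["match"]):
            return rule["tier"]
    return None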

Other systems work at the file or even the LUN level. Even if your storage system doesn't offer this feature, you can set up a tiering regime by adding a controller that virtualises the underlying storage, allowing you to allocate tier levels to pools of heterogeneous storage. While it automates migration, this technique can't be described as a truly tiered system, but it can help you move in that direction while filling an immediate need. IBM and FalconStor are among those selling such controllers.
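
Conceptually, such a controller presents heterogeneous back-end arrays as tier-labelled pools and carves volumes out of whichever pool carries the requested tier. A hypothetical sketch; the array names and the allocate function are invented, not any vendor's API:

# Hypothetical pools: heterogeneous arrays labelled with tier levels.
pools = [
    {"array": "vendor-A-ssd-shelf",  "tier": 1, "free_gb": 2000},
    {"array": "vendor-B-sas-array",  "tier": 2, "free_gb": 20000},
    {"array": "vendor-C-sata-array", "tier": 3, "free_gb": 100000},
]

def allocate(size_gb, tier):
    """Carve a volume from any back-end pool labelled with the wanted tier."""
    for pool in pools:
        if pool["tier"] == tier and pool["free_gb"] >= size_gb:
            pool["free_gb"] -= size_gb
            return pool["array"]
    raise RuntimeError("no capacity left on tier %d" % tier)

print(allocate(500, 1))   # -> vendor-A-ssd-shelf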

Storage consultant Marc Staimer, of Dragon Slayer Consulting, described data migration as "a very stressful, manually-intensive task, so tiering is only practical when it's policy-based."

So the key is to aim to automate tiering and migration as much as possible, which can involve a lot of upfront work to ensure that data is correctly categorised. ®
