
Microsoft's Azure will bring tiers to your eyes

The joy of automated backup

For those of us who have been in IT for a while, the term hierarchical storage management (HSM) has become more than a little old-fashioned.

HSM was coined back in the bad old mainframe days when a high-capacity 300MB disk cost tens of thousands of dollars. At that time it made sense to have a primary array of disks backed up by a much slower, but far cheaper, tape library.

There are certainly instances today where running an HSM system still appears to make financial sense. Organisations such as Australia's CSIRO, for example, remain heavily dependent on HSM.

Now cloud providers are coming of age, and 200TB of consumer-grade disk can be had for as little as £20,000 (for the disks alone).

Heavenly pair

If you had asked me six months ago whether running a traditional HSM system was a good idea, my answer would have been a resounding "sometimes". My answer today, however, would be "absolutely not".

Although Microsoft's new "on-premises" Azure will one day allow Azure- and Hyper-V-compatible hosted clouds to be stood up anywhere in the world, that day has not yet arrived.

What we do have is Hyper-V and its near-Azure ability to create local clouds. Combine this with Windows Server 2012 R2 and you have a pairing made in heaven.

Given, then, that we can set up our own private cloud to rival a Microsoft-hosted instance of Azure, does the question change from "should we run our own private cloud?" to "why shouldn't we?"

Let’s have a look.

Hot and cold

The first thing we should do is kill off the concept of HSM, along with the inherent baggage the term brings with it. It has had its day and deserves to ride off into the sunset.

In its place, I'd like to suggest a term we should all be starting to find more than a little familiar: automated storage tiering (AST).

This is really just a way of describing the different types of storage we run in our data centres, and anyone working with virtualised data centres should already have a working familiarity with the concept.

The usual breakdown of AST is as follows:

Hot: PCIe-based flash drives and serial-attached SCSI (SAS) solid-state drives (SSDs);

Cold: Traditional spinning-rust hard disk drives (HDDs).

Given the increased availability and dramatic drop in price of flash-based disks and SSDs, most of us will have a few of these in play. AST is really just automating the movement of heavily accessed files between our hot SSD storage and our colder HDD storage.
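
If you are curious which tier your existing disks would land in, Storage Spaces (of which more below) classifies them by media type. A quick, harmless way to check on a Server 2012 R2 box:

# List physical disks along with the media type Storage Spaces
# uses to decide which tier (SSD = hot, HDD = cold) they belong to.
Get-PhysicalDisk |
    Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, CanPool,
        @{Label="Size(GB)"; Expression={[math]::Round($_.Size / 1GB)}}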

This sounds great in theory but in practice it hasn’t been all that easy to set up or manage. Some vendors offer appliances that can do it for us but these can cause lock-in and they all come with their own costs and caveats. So really they are not an optimal solution.


Storage Spaces in Server 2012 R2, on the other hand, are the bees' knees, the ants' pants or any other insect that we can assign clothing to.

Microsoft has taken everything it has learned in making its cloudy Azure service work nicely and thrown it into an interface and back end. It will take your JBODs and your physically attached storage devices and allow you to pool them.
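
As a rough sketch of just how little effort the pooling takes (the pool name here is invented for illustration), something like this bit of PowerShell does the job on Server 2012 R2:

# Grab every disk that is eligible for pooling and lump the lot
# into a single storage pool (the name "TierPool" is illustrative).
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "TierPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks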

It will also allow you to create automated storage tiers in three easy steps, sketched below. Although it is untested (and probably more than a little stupid), as well as definitely unsupported, you should even be able to create a point-to-point virtual private network and then attach your remotely hosted Azure-based Server 2012 storage as iSCSI disks in a storage space.
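
Those three steps boil down to: define a hot tier, define a cold tier, then carve a virtual disk across both. A minimal sketch, continuing with the hypothetical "TierPool" above (the tier names, sizes and resiliency setting are all assumptions):

# Step 1: define the hot tier from the pool's SSDs.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "TierPool" `
    -FriendlyName "SSDTier" -MediaType SSD

# Step 2: define the cold tier from the spinning rust.
$hddTier = New-StorageTier -StoragePoolFriendlyName "TierPool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Step 3: create a tiered virtual disk across both tiers; a scheduled
# task then shuffles the hottest blocks onto the SSD tier overnight.
New-VirtualDisk -StoragePoolFriendlyName "TierPool" `
    -FriendlyName "TieredSpace" `
    -StorageTiers $ssdTier, $hddTier `
    -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror

Note that unlike file-level HSM, the tiering here works on sub-file blocks, so a single large file can straddle both tiers.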

If you have local data centre silos and the right links, you could plausibly see performance similar to that of a locally present HSM system.
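
For the brave, the initiator-side plumbing for that unsupported experiment would look roughly like this (the portal address is a placeholder on the VPN tunnel; again, untested):

# The Microsoft iSCSI initiator service has to be running first.
Start-Service msiscsi

# Point the initiator at the remote target across the VPN
# (10.0.0.5 is a placeholder address on the tunnel).
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.5"

# Connect to the target the portal advertises; the resulting disk
# can then be added to a pool like any locally attached device.
Connect-IscsiTarget -NodeAddress (Get-IscsiTarget).NodeAddress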
