Storage is boring, right?

Understanding the Sherpa layer of IT

Workshop In the spirit of calling a spade a spade, it is fair to say that computer storage is generally perceived to be quite dull – in Douglas Adams terms it would qualify as ‘mostly harmless’.

While this is a bit of a shame for people who look after storage, backups and so on (let’s face it, the job description is never going to break the ice at parties), it also sets expectations: storage should just work without needing too much intervention.

This is more of a challenge than some might think, not least because disk technologies are still largely mechanical. The rest of IT may have long since succumbed to the age of silicon, but storage remains the last bastion of the Victorian age: it genuinely could still be steam-powered. From an engineering standpoint this leads to some quite fascinating discussions – for example, that the main limiting factor on physical disk size is the motor.

The downside is reliability. Disk failure is a common theme in most IT environments, and indeed some common storage technologies (RAID, for example) exist largely to counter the fact that disks can, and will, crash without warning (pdf).
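
For the curious, the core redundancy trick behind single-parity RAID is simple enough to sketch in a few lines. What follows is a toy illustration in Python, XOR-based parity over entirely made-up block contents, nothing like a real controller implementation:

# Toy illustration of single-parity (RAID-5 style) redundancy.
# The block contents are made up for the example; real arrays do this
# per fixed-size stripe, in firmware, not in Python.

from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data "disks"
parity = xor_blocks(data)            # the parity "disk"

# Simulate losing disk 1, then rebuild its block from the survivors.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]            # b'BBBB' recovered intact

Lose two disks at once, of course, and a single parity block can no longer help, which is exactly why rebuild times and hot spares matter so much to the people running these systems.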

Even when storage ‘just works’, it has a number of hurdles to overcome. First and foremost comes data growth. When we conducted a server infrastructure survey (pdf) with The Register last year, data growth came up as the number-one driver for updating the server estate, never mind the storage! Data growth is relentless, and dealing with it isn’t made any easier by the fact that few organisations are blessed with an up-to-date, well-managed storage environment.

We can all blame the technology of course, but data duplication and fragmentation remain common themes, sustained by many organisations having a ‘keep everything’ policy when it comes to electronic information. As well as quite possibly falling foul of data retention law, this puts an additional burden on the storage infrastructure, not to mention the people and processes that need to work with it.

Perhaps the outside-in view isn’t all that wrong when we think about the service storage needs to provide. First, its role is to deliver data to applications and users consistently and efficiently: that is, as and when needed, at the required levels of performance (measured in IOPS, of which more below), at an appropriate cost.

Second, storage should also be able to recover from failure situations. It is one thing to deliver when all is running smoothly; quite another to cope when it isn’t. Here we can think about backup and recovery, as well as the ability to replicate between storage arrays, and indeed across sites.

Finally, storage needs to be manageable in a way that suits the people trying to manage it. This is not just about having visibility of what storage exists, but also about responding to changing conditions and requirements, preferably as automatically as possible.
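
As for those IOPS figures mentioned above, the ceiling on a single spinning disk is set by its mechanics: the average seek time plus half a revolution of rotational latency. Here is a back-of-envelope Python sketch, using typical published numbers for a 7,200rpm drive rather than figures from any particular product:

# Back-of-envelope random-IOPS estimate for one spinning disk.
# The seek time and spindle speed are typical assumed values,
# not measurements from any specific drive.

rpm = 7200
avg_seek_ms = 9.0                          # typical published average seek time
rotational_latency_ms = 0.5 * 60000 / rpm  # half a revolution on average, ~4.2 ms

service_time_ms = avg_seek_ms + rotational_latency_ms
iops = 1000 / service_time_ms

print(f"average service time: {service_time_ms:.1f} ms")  # ~13.2 ms
print(f"random IOPS ceiling:  {iops:.0f}")                # ~76

A figure in the tens of IOPS per spindle goes a long way towards explaining why arrays gang so many disks together to meet application performance targets.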
