Storage is boring, right?

Understanding the Sherpa layer of IT

Some ‘information assets’ are more equal than others

Responding to these needs requires more than buying in a bunch of disks. For a start, storage arrays tend to fall into one of two categories – “high-end” for very expensive, high-performance disks, and “mid-range” for the rest (you don’t hear of “low-end” storage arrays). More recently, a third category has emerged, namely the solid state disk (SSD). While SSDs might suggest the beginning of the end for spinning disks, they are currently still more expensive than spinning disks, even though their performance exceeds that of even the fastest high-end drives.

Given these options, deciding what data should go where is quite a skill, particularly as data characteristics change over time. A well-managed tiered storage set-up matches the mix of higher-cost and lower-cost storage in use to the requirements of the data at any given time.
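
To make this concrete, here is a minimal sketch in Python of how such a tiering policy might be expressed; the tier names and access-frequency thresholds below are illustrative assumptions rather than industry standards.

```python
# A minimal sketch of a tiered-placement policy. The tier names and the
# access-frequency thresholds are illustrative assumptions, not standards.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    size_gb: int
    reads_per_day: int  # a rough measure of how 'hot' the data is

def choose_tier(ds: Dataset) -> str:
    """Map a dataset to a storage tier based on how often it is accessed."""
    if ds.reads_per_day > 10_000:
        return "ssd"        # hottest data on the fastest, most expensive tier
    if ds.reads_per_day > 100:
        return "high_end"   # busy data on fast spinning disk
    return "mid_range"      # everything else on cheaper capacity storage

datasets = [
    Dataset("order_db_indexes", 200, 50_000),
    Dataset("customer_docs", 5_000, 400),
    Dataset("archived_logs", 20_000, 5),
]

for ds in datasets:
    print(f"{ds.name} -> {choose_tier(ds)}")
```

In practice such a policy would be re-evaluated periodically, since data characteristics change over time.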

An additional dimension is ensuring appropriate levels of storage resilience and data availability. These are generally achieved by placing a copy of the information in some other, preferably safe, place, either online (for example, on a second array to which all data is replicated) or offline (for example, on tape in a fire-proof safe or at an off-site storage facility).

Deciding between all the options and coming up with an appropriately architected storage environment is not trivial. Neither are things standing still – not only are storage technologies evolving, but so too are other areas of IT, upon which storage depends. It’s worth homing in on a few developments, to illustrate the point.

Not least of course, we have virtualisation. Server virtualisation may be the buzz-phrase right now, but its growth places new demands on how storage is built and delivered; the ease with which new servers and whole systems can be provisioned increases the risk of storage bottlenecks, as does the accompanying fluctuation in demand.

To counter this we have, of course, storage virtualisation, which enables storage resources to be treated as a single pool and then provisioned as appropriate. The phrase in vogue at the moment is ‘thin provisioning’, in which a server or application may think it has been allocated a certain disk volume, but in fact the storage array only allocates the physical storage actually used, up to the specified maximum (which may never be reached). This makes for far more efficient use of storage.
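
As a rough illustration of the idea (not any particular vendor’s implementation), the Python sketch below models a thinly provisioned volume; the block size and data structures are assumptions for demonstration only.

```python
# A minimal sketch of thin provisioning: the volume reports its full logical
# size, but physical blocks are only allocated when they are first written.
# The block size and the in-memory dict standing in for the array are
# assumptions for illustration only.

BLOCK_SIZE = 4096  # bytes per block (illustrative)

class ThinVolume:
    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks  # the size the server believes it has
        self.allocated = {}                   # block number -> data; grows only on writes

    def write(self, block_no: int, data: bytes) -> None:
        if block_no >= self.logical_blocks:
            raise ValueError("write beyond the advertised volume size")
        self.allocated[block_no] = data       # physical allocation happens here

    def read(self, block_no: int) -> bytes:
        # Never-written blocks read back as zeros at no physical cost.
        return self.allocated.get(block_no, b"\x00" * BLOCK_SIZE)

    def physical_blocks_used(self) -> int:
        return len(self.allocated)

vol = ThinVolume(logical_blocks=1_000_000)             # the server "sees" roughly 4GB
vol.write(0, b"hello".ljust(BLOCK_SIZE, b"\x00"))
print(vol.logical_blocks, vol.physical_blocks_used())  # 1000000 advertised, 1 used
```

The trade-off, of course, is that an array full of thin volumes can be over-committed, so physical consumption needs to be monitored and capacity added before it runs out.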

Speaking of efficiency, another trendy term is de-duplication, in which only variations in files or disk blocks are retained, transferred, backed up or whatever, rather than – ahem – duplicating everything. For the uninitiated this sounds like a bit of a no-brainer, but the fact is that de-duplication can get quite complicated: an index needs to be maintained of everything being stored or backed up so that files can be ‘reconstructed’ as necessary, for example.
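
The Python sketch below, again purely illustrative, shows the basic shape of block-level de-duplication: unique blocks are stored once in a hash-keyed store, and each file is reduced to a list of block hashes from which it can be reconstructed.

```python
# A minimal sketch of block-level de-duplication: each unique block is stored
# once, keyed by its hash, and a file becomes an ordered list of block hashes
# from which it can be reconstructed. Block size and structures are illustrative.
import hashlib

BLOCK_SIZE = 4096

block_store = {}  # block hash -> unique block contents
file_index = {}   # file name -> ordered list of block hashes

def store_file(name, data):
    hashes = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:  # only previously unseen blocks are kept
            block_store[digest] = block
        hashes.append(digest)
    file_index[name] = hashes

def reconstruct_file(name):
    return b"".join(block_store[digest] for digest in file_index[name])

store_file("report_v1.doc", b"A" * 8192 + b"B" * 4096)
store_file("report_v2.doc", b"A" * 8192 + b"C" * 4096)  # shares two blocks with v1
print(len(block_store))  # 3 unique blocks stored for 6 logical blocks
assert reconstruct_file("report_v2.doc") == b"A" * 8192 + b"C" * 4096
```

Production systems typically layer things such as variable-length chunking, compression and careful index management on top of this basic idea, which is where much of the complexity comes from.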

Other storage-related developments worth noting are on the networking side, with iSCSI bringing block and file storage (that is, storage for databases and for unstructured content respectively) onto the same IP networks, and, at the higher end, the merging of data and storage networking over 10 Gigabit Ethernet. Meanwhile, in storage software, ‘merger talks’ continue between backup and archiving (they’re both about data movement, after all).

Taking an architectural view of storage

All such developments share a common theme, that of convergence: with a following wind and from an interoperability perspective at least, things should get a bit simpler. But making the most of them still requires an architectural overview of the storage environment as a whole.

If you haven’t got one, where should you start? As we have already acknowledged, few organisations have the luxury of starting from scratch. But if you can build a clear picture of your storage requirements (remembering not all information is created equal – the 80/20 rule can be used here), together with a map of what capabilities you already have in place, you’re already halfway there.

From here, the next step is to produce a map of how you’d like your storage capabilities to look, based on your information needs and your broader infrastructure plans – for example, vis-à-vis any intentions you might have around virtualisation. Given the plethora of options, each of which comes at a cost, storage planning will always be a compromise between functionality and affordability.

As a result it is worth thinking about how storage technologies might be used in tandem. For example, while it still isn’t cheap (though it’s getting cheaper), implementing de-duplication might provide immediate savings in bandwidth and latency. Consider its impact a little more broadly, however, and those bandwidth savings may allow (for example) data replication to another site, making disaster recovery possible where previously it was not.

Storage, then, is like a team of Sherpas: it does the heavy lifting so the rest of IT can make its way up the mountain without having to worry about the provisions. But it needs to be designed for the long haul if it is to deliver on the value it promises. This may be difficult to do. But it is anything but boring.
