
Time to put 'Big Data' on a forced diet


There ain't nothing cheap about big storage


Data is big business. These days they've even started calling it “Big Data”, just in case its potential for unbridled magnitude had escaped anyone. Of course, if you have Big Data you need somewhere to put it. Hence storage is also big business.

On the one hand this is a good thing, but that's just because several of my relatives work in sales in the storage industry - which means the commission they earn from selling another couple of petabytes of disk space can usefully be redeployed in buying me the occasional pint.

In general, however, storage is a bad thing if you don't use it sensibly.

A company I worked with a while back had a problem: their server estate was growing and their SAN was running low on space. The obvious question came up: what's the cost of adding another array to expand the space available? With disks considered a commodity these days, a nice cheap quotation was expected, but imagine the look of surprise when they were told: “It doesn't really matter what it costs to buy, there's no space in the server room to put it.”

So they asked themselves the question: can we use this stuff more wisely? And the answer was “yes” – with some relatively simple steps they could free up well over a terabyte of storage. And while that doesn't sound like very much by modern standards (I just picked a random IT retailer's website and found a 1TB drive for £53, for instance, and for that matter the Apple Time Capsule on my desk is a 3TB unit), it's a far bigger deal if that terabyte is enterprise-class storage, with super-high-speed switched Fibre Channel connectivity, in a high-availability configuration, with complex compression algorithms eking out every last byte of its available capacity.

Storage looks cheap in theory, but it's not if it's enterprise-class storage because of all the complex technology you need to wrap around it to make it usable and useful.

Do you even KNOW what you're storing?

In this case, the solution to the problem was to look at some of the items that were stored on the disks and decide that they really didn't need quite so many copies of the various backups of backups of backups that had materialised on the disks over the years.
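If you fancy hunting down those redundant copies yourself, one quick-and-dirty approach is to hash every file and group the matches. Here's a minimal sketch in Python (the /srv/backups path is an illustrative stand-in, not anything from the case above):

    import hashlib
    import os
    from collections import defaultdict

    def sha256_of(path, chunk_size=1 << 20):
        """Hash a file in 1MB chunks so large files don't eat RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def find_duplicates(root):
        """Group files under 'root' by content hash; keep groups with >1 member."""
        by_hash = defaultdict(list)
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    by_hash[sha256_of(path)].append(path)
                except OSError:
                    pass  # unreadable file: skip it rather than abort the scan
        return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

    for digest, paths in find_duplicates("/srv/backups").items():
        print(f"{digest[:12]}  x{len(paths)}")
        for p in paths:
            print("   ", p)

Run something like that against a backup share and the backups of backups tend to show up very quickly; whether you're allowed to delete them is, of course, the political part.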

And you know what? In the years I've been in IT, I've lost count of the number of times clients have moaned about running out of storage space (and, more frequently, running out of hours in the day to run backups) but have, when pressed to identify their data, been unable to explain satisfactorily just what happened to all their free space.

Data management in the average organisation is, frankly, appalling. In a way, though, that's understandable: unless you employ a team of storage Nazis to interrogate everyone regularly about their files, you really don't stand a chance of keeping tabs on your storage requirements.

And let's face it, if you're given the choice of deleting a document (and running the risk of needing it later) or hanging onto it (just in case), what are you going to do? Nobody ever got fired for keeping a document they believed might be needed one day, but I bet many have been fired for doing the opposite.

So how do you trim down? Here are three questions to ask yourself.

Data: How can I squeeze as much as possible of it onto the disks?

As I’ve already mentioned, the disk itself is one of the cheaper, more commoditised elements of the storage system; it's all the SAN fabric and controllers that you wrap around it that ramp the cost up.

If you want to compress data, the obvious way to go is to employ a compression algorithm to encode more data into less space. And since the compromise is that compressing stuff slows access times down, you then employ expensive ASIC-based compression to speed it up again (“ker-ching!”). De-duplication is also a complete no-brainer, particularly if you live in a virtual world – for instance, SAN controllers that store a single physical copy of something and present it as multiple virtual entities do a spectacular job of optimising the storage of, say, dozens of Windows VMs that are all packed with bazillions of identical system files.
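To see why the dedup case is so compelling with all those identical system files, consider a toy content-addressed block store: every block is keyed by its hash, so the hundredth copy of the same Windows DLL costs you one block, not a hundred. This is a bare-bones sketch of the idea (the block size and class names are mine, not any vendor's implementation):

    import hashlib
    import os

    class DedupStore:
        """Toy content-addressed block store: identical blocks are stored once."""

        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.blocks = {}   # hash -> block bytes, stored exactly once
            self.files = {}    # filename -> ordered list of block hashes

        def write(self, name, data):
            hashes = []
            for i in range(0, len(data), self.block_size):
                block = data[i:i + self.block_size]
                digest = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(digest, block)  # a duplicate block is free
                hashes.append(digest)
            self.files[name] = hashes

        def read(self, name):
            return b"".join(self.blocks[h] for h in self.files[name])

    # Ten 'VMs' holding the same 40KB system file consume the space of one copy.
    store = DedupStore()
    system_file = os.urandom(40960)  # stand-in for an identical DLL
    for n in range(10):
        store.write(f"vm{n}/system32/kernel.dll", system_file)
    logical = sum(len(store.read(f)) for f in store.files)  # 409,600 bytes
    physical = sum(len(b) for b in store.blocks.values())   # 40,960 bytes
    print(f"logical {logical}, physical {physical}: {logical // physical}:1 saving")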

At the very least, then, turn on the optimisation features of your storage hardware. You've paid for them, after all, so use them.


O RLY?

If you think deduplication is a no-brainer, you've just never tried to implement it. Like Fat Data itself, it's a tool that can be used well (reducing storage cost) or poorly (killing system performance), and users deserve to be educated about the difference.


Big Data

I would hope that an article on the Register talking about 'Big Data' would be jumping all over the industry for its latest hype machine and fad. 'Cloud' is so last year; 2013 has obviously been designated 'Big Data' year.

It's as if some amazing breakthrough happened in Big Data late last year...oh it didn't.

Well maybe no-one has had massive amounts of data before...oh they have.

Well maybe there wasn't a way to store and manipulate it before...oh there was.

Data warehousing has been around for as long as I can remember. If a company has only just realised they have large quantities of data because of a flashing Intel advert, then they must have been hiding out somewhere dark.

'Big Data': the worst of the buzzwords so far...


Re: Stop crying.

"I have 2.5 terabytes on my server at home, and If I can aford it"

Out of interest, is that available space or capacity?

Anonymous Coward

Big Data Article?

Not.

Epic fail.


HSM

Don't just blindly expand the disk capacity of your SAN.

If you are dealing with big datasets (and I do) you should look at a Hierarchical Storage Management (HSM) system.

You can select a secondary tier of cheaper SATA disks, a tier of MAID (massive arrays of idle disks, which automatically spin down when not in use), and a tier of tape in an automated library.

Less often used data will be pushed to tape automatically.
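The policy behind that last sentence is simple enough to sketch: anything not touched within a cut-off drops down a tier. Purely by way of illustration (the tier names and day thresholds here are invented, it assumes atime is being recorded at all, and real HSM products do the migration transparently rather than just reporting it):

    import os
    import time

    # Illustrative tiers, hottest first; thresholds are days since last access.
    TIERS = [
        ("fast SAN", 30),
        ("SATA", 90),
        ("MAID", 365),
        ("tape", None),  # final resting place for everything older
    ]

    def tier_for(path, now=None):
        """Pick a tier from the file's last-access time (st_atime)."""
        now = now or time.time()
        age_days = (now - os.stat(path).st_atime) / 86400
        for name, limit in TIERS:
            if limit is None or age_days <= limit:
                return name

    def plan_migration(root):
        """Report which tier each file under 'root' belongs on."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                print(f"{tier_for(path):>8}  {path}")

    plan_migration("/data")  # hypothetical mount point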
