
Back up all you like - but can you resuscitate your data after a flood?

Trevor Pott learns a salutary lesson in data restoration


When it comes to backups, two sayings are worth keeping in mind: "if your data doesn't exist in at least two places, it doesn't exist" and "a backup whose restore process has not been tested is no backup at all".

There is nothing like a natural disaster affecting one of your live locations to test your procedures.

I have just had to deal with this; let's take a look at how.

Pipe dreams

To have a discussion about backups we need to start with what we are backing up and why.

The client in question has two sites, one in Edmonton and one in Calgary. Each site is serviced by a fibre pipe – theoretically capable of 100Mbps in emergencies but throttled below that to meet our ISP agreement.

If we keep the cumulative usage between both sites below 'X'Mbps (measured at the 95th percentile) we can use all the bandwidth we want. No caps, no per-GB billing. It is a nice, predictable cost that we can easily manage.
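
For the curious, 95th-percentile billing works like this: the ISP samples throughput at fixed intervals across the month, throws away the top five per cent of samples, and bills on the highest sample left, so short bursts cost nothing. Here is a minimal sketch in Python (the five-minute interval and the sample values are illustrative assumptions, not our actual contract):

    # Sketch of 95th-percentile billing: drop the top 5% of throughput
    # samples, bill on the highest remaining one.
    def billable_rate(samples_mbps):
        ordered = sorted(samples_mbps)
        cutoff = int(len(ordered) * 0.95) - 1  # index of the 95th-percentile sample
        return ordered[max(cutoff, 0)]

    # 1,000 five-minute samples: bursts to 100Mbps in fewer than 5% of
    # intervals leave the billable rate at the 20Mbps baseline.
    samples = [20] * 950 + [100] * 50
    print(billable_rate(samples))  # -> 20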

More critically, in case of "oh no!", uncapping those pipes so that each can use the full 100Mbps takes a single command.

Each site receives large quantities of information from customers for use at that specific location. This information is made highly available to deal with hardware failure but is not replicated offsite.

We spec our bandwidth only slightly above what we need to handle the inbound data; we could never afford the cost of cloud storage, even if we were sending to one of our own datacentres.

Since we have all this infrastructure in place to meet local needs it seems silly not to host our public-facing websites and IT services on our own infrastructure. We have private clouds at each location, gobs of storage, UPSs and a fat pipe. It isn't exactly Amazon, but it should be workable.

Replicate, replicate

None of the databases for our public websites can be set up for live replication because that would require rewriting code to accommodate it. For various reasons that won’t happen any time soon.

So backups come down to cron jobs on each MySQL server that create regular database dumps, then zip, encrypt and fire them off to our file server.
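
In rough outline, each nightly job does something like the sketch below. Every path, database name and passphrase here is a placeholder I've invented for illustration, and gzip stands in for the rar step, so treat it as a sketch rather than the production script:

    #!/usr/bin/env python3
    # Nightly dump-zip-encrypt-ship sketch; all names are placeholders.
    import gzip, shutil, subprocess
    from datetime import date

    DB = "webapp"                         # hypothetical database name
    DUMP = f"/var/backups/{DB}-{date.today().isoformat()}.sql"
    SHARE = "/mnt/fileserver/backups/"    # the DFSR-replicated share

    # 1. Dump the database as a consistent snapshot.
    with open(DUMP, "wb") as out:
        subprocess.run(["mysqldump", "--single-transaction", DB],
                       stdout=out, check=True)

    # 2. Compress the dump.
    with open(DUMP, "rb") as src, gzip.open(DUMP + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

    # 3. Encrypt it before it leaves the box.
    subprocess.run(["openssl", "enc", "-aes-256-cbc", "-pbkdf2",
                    "-in", DUMP + ".gz", "-out", DUMP + ".gz.enc",
                    "-pass", "file:/etc/backup.pass"], check=True)

    # 4. Fire it off to the file server; replication takes it from there.
    shutil.copy(DUMP + ".gz.enc", SHARE)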

At the same time the codebase for each web server undergoes a similar backup. The file servers in question are Windows systems running distributed file system replication (DFSR), which does a marvellous job of replicating the backups.

Each site has two identical file servers in a cluster and both have a copy of the files. The files are then fired across the WAN to the other site, where they live on that site's file server cluster pair as well. At this point, I'd say we are pretty well immune to hardware failure.

A backup server at the head office runs a truly archaic version of Retrospect that creates versioned backups to protect against Oopsie Mcfumblefingers, malware or other such issues that might delete the backups in the DFSR share. So anything that is placed in the backups directory on either site is automatically replicated to two systems per site and versioned.

Potentially personally identifiable information is encrypted – both in the database and in the rarballs (files compressed or packaged using rar) – and none of it leaves corporate control.

So far, so good; better solutions certainly exist, but with no budget I think it gets the job done.
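
And since the second saying at the top of this piece is the whole point, it's worth showing what testing the restore can actually look like. A minimal periodic check – again, the database and file names are placeholders of my own invention – decrypts the newest dump, loads it into a scratch database and makes sure real tables come back:

    #!/usr/bin/env python3
    # Restore-test sketch: decrypt, load into a scratch DB, sanity check.
    import subprocess

    ENC = "/mnt/fileserver/backups/webapp-latest.sql.gz.enc"  # placeholder
    SCRATCH = "restore_test"              # throwaway database

    # Decrypt and decompress back to plain SQL.
    subprocess.run(["openssl", "enc", "-d", "-aes-256-cbc", "-pbkdf2",
                    "-in", ENC, "-out", "/tmp/restore.sql.gz",
                    "-pass", "file:/etc/backup.pass"], check=True)
    subprocess.run(["gunzip", "-f", "/tmp/restore.sql.gz"], check=True)

    # Load into a scratch database and confirm tables actually exist.
    subprocess.run(["mysql", "-e",
                    f"DROP DATABASE IF EXISTS {SCRATCH}; "
                    f"CREATE DATABASE {SCRATCH}"], check=True)
    with open("/tmp/restore.sql", "rb") as sql:
        subprocess.run(["mysql", SCRATCH], stdin=sql, check=True)
    subprocess.run(["mysql", SCRATCH, "-e", "SHOW TABLES"], check=True)

If that final SHOW TABLES comes back empty, the backup was a backup in name only.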

