
Breaking with your back-up supplier is a sticky business

How to dissolve the glue

So what can you do?

Two approaches seem possible.

One is to move to the new backup software in stages – server by server, say, or application by application. You still parallel-run overall, but each server or application gradually moves to a single, clean backup software environment. The migration ROI still takes a long time to appear, probably years.

Another approach is to do all new backups in a way that delivers ROI faster, while accepting that the old software will be around for a long time.

One method is to install agentless backup software that uses your network and a backup server to vacuum up data from connected production servers and put it in its own – or a centralised – storage vault, which could be off-site for added protection.

Each location – a remote or branch office, say – would have a backup server and it would send changed data, hopefully de-duplicated, compressed and encrypted, to a central site, providing off-site protection for the remote sites.
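That remote-site step – send only changed, de-duplicated, compressed data – can be sketched minimally as follows. The function and index names are illustrative assumptions, not any vendor's actual API; content hashing with SHA-256 and gzip compression stand in for whatever the real product uses, and encryption of each payload (noted in the comments) is omitted for brevity:

```python
import gzip
import hashlib

def backup_changed_blocks(blocks, central_index):
    """Return only blocks the central site has not already stored.

    blocks: iterable of bytes (changed data gathered at a remote site).
    central_index: set of hex digests of blocks already held centrally.
    Returns a list of (digest, compressed_payload) pairs to transmit.
    Encryption of each payload would be applied at this point too.
    """
    to_send = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest in central_index:
            continue  # duplicate: the central vault already has this block
        central_index.add(digest)
        to_send.append((digest, gzip.compress(block)))
    return to_send
```

Repeated data – the same file backed up from two branch offices, say – is transmitted once, which is what keeps the WAN traffic from the remote sites manageable.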

You still parallel-run old and new backup software environments, but you do not have to face the cost and difficulty of keeping the new software current with each backed-up server's operating system revision level.

The new backup software would exist separately from the production servers, which could change their operating system revision levels without affecting the backup software's version. There would be no new or additional software-versioning burden, and no superglue-like link between backup software level and host operating system version.

A minor benefit of this approach is that the production servers would not have to supply CPU resources to backup software running on them; those cycles go to the production apps instead.

There could be license cost reductions as well, so long as the backup server you moved to was licensed by capacity rather than by the number of backed-up servers. A good capacity scheme should be less expensive than a per-server license scheme.
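Back-of-the-envelope arithmetic shows why – all figures below are invented for illustration, not real vendor pricing:

```python
def per_server_cost(num_servers, price_per_server):
    """Annual cost under a per-server license scheme."""
    return num_servers * price_per_server

def capacity_cost(protected_tb, price_per_tb):
    """Annual cost under a capacity-based license scheme."""
    return protected_tb * price_per_tb

# Hypothetical estate: 50 backed-up servers holding 30TB of protected data.
print(per_server_cost(50, 400))  # 50 servers x £400     = £20,000
print(capacity_cost(30, 500))    # 30TB      x £500/TB   = £15,000
```

At those made-up rates the capacity scheme comes out cheaper, and, unlike the per-server scheme, its cost does not climb simply because you spread the same data across more servers.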

Lastly, your overall system might be more secure, as there would be no agents with access privileges into your production servers.

Clean-up operation

You would then have two backup silos, as it were. One is for restore access, using the old backup software to access it, and does not have fresh content added to it. The other is for write (backup content addition) access and also read access using the new software.
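One way to picture the two-silo arrangement is a simple router that directs restore requests to whichever silo holds the data, keyed on the cutover date. The date and silo names here are purely illustrative assumptions:

```python
from datetime import date

CUTOVER = date(2015, 1, 1)  # assumed go-live date for the new backup software

def restore_silo(backup_date):
    """Route a restore to the read-only old silo or the read/write new one.

    Backups taken before the cutover live in the old vault and are read
    with the old software; everything from the cutover on lives in the
    new vault, which is the only one still taking fresh backup content.
    """
    if backup_date < CUTOVER:
        return "old-silo (read only)"
    return "new-silo (read/write)"
```

Over time, as old backups age out of their retention periods, the read-only silo shrinks towards nothing and the old software can finally be retired.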

This method of migrating from one supplier to another seems practical and cost-effective. By avoiding the need to buy and manage new software agents, the new backup environment will be less expensive to run than the old one.

For example, you could effectively reduce the number of old software licenses if you are able to coalesce the old backup software vaults into a single centralised vault, a single silo, with a single server used for access.

This would help clean up your overall software environment, reducing the number of software instances that have to be licensed and maintained.

You could also get by with fewer staff skilled in using the old software, saving expense on that front too.

Migrating from one backup supplier to another is never an easy option because backup storage vaults are needed for a long time after the data in them is created.

There are no silver bullets but a manageable transition can be yours if your new backup software creates its own storage vaults in a significantly less complex and expensive way.

The ROI from the migration can come in more quickly and free you from a costly, superglue-like lock-in to a scheme that is no longer fit for your purpose. ®
