
Breaking with your back-up supplier is a sticky business

How to dissolve the glue

So what can you do?

Two approaches seem possible.

One is to move to the new backup software in stages – server by server, say, or application by application. You still parallel-run overall, but gradually move each server or application to a single, clean backup software environment. The ROI from the migration still takes a long time to appear, probably years.

Another is to do all new backups in a way that delivers ROI faster, while accepting that the old software will be around for a long time.

Each location would send changed data to a central site

One method is to install agentless backup software that uses your network and a backup server to vacuum up data from connected production servers and put it in its own – or a centralised – storage vault, which could be off-site for added protection.

Each location – a remote or branch office, say – would have a backup server and it would send changed data, hopefully de-duplicated, compressed and encrypted, to a central site, providing off-site protection for the remote sites.
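The change-detection step such a branch-office backup server might perform can be sketched roughly as follows. This is a minimal illustration under assumed details – fixed-size chunking, SHA-256 hashes and zlib compression are choices made here for brevity, not any vendor's actual method, and the encryption step is omitted:

```python
import hashlib
import zlib

CHUNK_SIZE = 4096  # illustrative fixed-size chunking

def chunk_hashes(data: bytes) -> list[tuple[str, bytes]]:
    """Split data into fixed-size chunks and hash each one."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def changed_chunks(data: bytes, manifest: set[str]) -> list[bytes]:
    """Return only the chunks whose hashes the central vault has not
    already seen, compressed for transfer (encryption omitted here)."""
    return [zlib.compress(c) for h, c in chunk_hashes(data) if h not in manifest]

# First backup: the vault knows nothing, so every chunk is sent.
data = b"A" * 4096 + b"C" * 4096
manifest: set[str] = set()
to_send = changed_chunks(data, manifest)
manifest.update(h for h, _ in chunk_hashes(data))

# Second backup: only the modified chunk crosses the wire.
data2 = b"A" * 4096 + b"B" * 4096
to_send2 = changed_chunks(data2, manifest)
print(len(to_send), len(to_send2))  # 2 chunks first time, 1 changed chunk later
```

The point of the sketch is simply that after the first full pass, the traffic from each remote site is proportional to what changed, not to what exists.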

You still parallel-run old and new backup software environments but you do not have to face the cost and difficulty of ensuring that the new software is current with each backed-up server operating system revision level.

The new backup software would exist separately from them and they could change their revision level without affecting the backup software's revision level. There would be no new or additional software versioning burden. There would be no superglue-like link between backup software level and host operating system version.

A minor benefit of this approach is that the production servers would not have to supply CPU cycles to backup agents running on them; those cycles go to the production apps instead.

There could be license cost reductions as well, so long as the backup server you moved to was licensed by capacity rather than by the number of backed-up servers. A good capacity scheme should be less expensive than a per-server license scheme.
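As a back-of-envelope illustration – the prices and volumes below are invented for the comparison, not any vendor's list prices:

```python
# Hypothetical figures for a per-server vs capacity licensing comparison.
servers = 50
per_server_fee = 500   # $ per backed-up server per year (invented)
capacity_tb = 80       # total protected capacity in TB (invented)
per_tb_fee = 200       # $ per TB per year (invented)

per_server_total = servers * per_server_fee  # $25,000/year
capacity_total = capacity_tb * per_tb_fee    # $16,000/year
print(per_server_total, capacity_total)
```

Whether capacity licensing actually wins depends on your server count and data volume; a small estate with huge data sets could come out the other way.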

Lastly, your overall system might be more secure, as there would be no agents with access privileges into your production servers.

Clean-up operation

You would then have two backup silos, as it were. One is restore-only: it is accessed with the old backup software and has no fresh content added to it. The other takes new backup content and is read and written using the new software.
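The access policy for the two silos can be summed up in a few lines of illustrative Python – the class and names are assumptions made for the sketch, not a real product's API:

```python
class BackupSilo:
    """A vault that is either restore-only (the legacy silo) or
    read/write (the new silo)."""

    def __init__(self, name: str, restore_only: bool):
        self.name = name
        self.restore_only = restore_only
        self.store: dict[str, bytes] = {}

    def write(self, key: str, blob: bytes) -> None:
        if self.restore_only:
            # No fresh content is ever added to the legacy silo.
            raise PermissionError(f"{self.name} is restore-only")
        self.store[key] = blob

    def read(self, key: str) -> bytes:
        return self.store[key]

old_silo = BackupSilo("legacy vault", restore_only=True)
new_silo = BackupSilo("new vault", restore_only=False)
new_silo.write("server1/2024-01-01", b"backup payload")
```

Any attempt to write into the legacy silo fails, which is exactly the discipline the two-silo arrangement relies on.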

This method of migrating from one supplier to another seems practical and cost-effective. By avoiding the need for new software agents, the new backup environment will be less expensive to run than the old one.

For example, you could effectively reduce the number of old software licenses if you are able to coalesce the old backup software vaults into a single centralised vault, a single silo, with a single server used for access.

This would help clean up your overall software environment, reducing the number of software instances that have to be licensed and maintained.

You could also have a smaller number of staff skilled in using the old software and save expense on that front too.

Migrating from one backup supplier to another is never an easy option because backup storage vaults are needed for a long time after the data in them is created.

There are no silver bullets but a manageable transition can be yours if your new backup software creates its own storage vaults in a significantly less complex and expensive way.

The ROI from the migration can come in more quickly and free you from a costly, superglue-like lock-in to a scheme that is no longer fit for your purpose. ®
