Replication 101: How to make quick copies of your data

Best to keep a spare set

In art, a replica does not have the same value as an original. But in computing, replicas are as good as the original, provided they are made quickly enough.

In a virtualised data centre where resources scale up and down according to demand, the failure of a storage array can drastically hinder responsiveness. Replication can protect you against that.

Copying an individual drive's contents to another drive in the same array is called mirroring and is classed as RAID level 1. Its aim is to protect against a disk drive failure and the process is instantaneous.

Mirror, mirror on the wall

Writes to the source disk come in to the RAID array controller, which generates a simultaneous write to the second, or target, disk. The contents of the two disks are identical at all times.
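
For the conceptually minded, the logic amounts to something like this rough Python sketch, with two ordinary file-like objects standing in for the disks. It is illustrative only; no real array controller works at this level.

# Minimal sketch of a RAID 1 style mirrored write. Two writable
# file-like objects stand in for the disks; illustrative only.

class MirroredPair:
    def __init__(self, source_disk, target_disk):
        self.source = source_disk
        self.target = target_disk

    def write(self, offset, data):
        # The controller issues the same write to both disks and only
        # acknowledges once both copies have been flushed.
        for disk in (self.source, self.target):
            disk.seek(offset)
            disk.write(data)
            disk.flush()
        return len(data)

# Usage, with two files playing the part of drives:
# pair = MirroredPair(open("disk0.img", "r+b"), open("disk1.img", "r+b"))
# pair.write(0, b"block of data")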

Mirroring is, in effect, short-distance replication inside a drive array with no time penalty involved. Replication between drive arrays relies on the array software, or specific server-based storage drivers, sending copies of data from the primary or source array to a target array. If the source array fails, the server-based applications can switch to the target array and carry on running.

A classic example is EMC's SRDF (Symmetrix Remote Data Facility), which copies newly written data from one Symmetrix array to another.

Another is NetApp’s SnapMirror, where the mirroring in the name refers to inter-array replication rather than RAID 1. Replication is certainly practical with these mature software products and setting up is fairly straightforward, although adding further replicas to raise the level of disaster protection complicates matters.

Long-distance call

In SRDF and similar products, the source array controller sends a message to the target array controller and tells it to write the same data. This can be done in such a way that the target array is always in synchrony with the source array, which is known as synchronous replication.

Within a data centre, synchronous array-to-array communication adds just a few microseconds to the total write transaction time.

As the source and target arrays get further apart, from campus distances to city-wide and even inter-continental distances, the round-trip transit time for the replicate data request and confirmation signal becomes longer and longer.

You can buy faster and more expensive network links, send only changed data blocks and compress the data, but eventually the speed of light becomes the limiting factor. The further apart the two storage arrays are, the longer the server-based application has to wait for the replicated write to complete.
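
To put rough numbers on it, light in optical fibre covers roughly 200,000km per second, so distance alone sets a floor under every synchronous write. The figures below are a back-of-the-envelope illustration, not a vendor specification; real links add switching, protocol and array latency on top.

# Back-of-the-envelope round-trip delay for a synchronous write.
# Assumes light travels at roughly 200,000 km/s in optical fibre.

def min_round_trip_ms(distance_km, fibre_speed_km_per_s=200_000):
    return 2 * distance_km / fibre_speed_km_per_s * 1000

for km in (1, 100, 1000):
    print(f"{km:>5} km apart: at least {min_round_trip_ms(km):.2f} ms per write")

Ten milliseconds per write across 1,000km may not sound like much, until you multiply it by every write the application makes.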

How can you get around that? One way is to trust that the target array received the data and not wait for an acknowledgement. This is asynchronous replication.

The advantages are that you don't need such a fast network link, which saves money, and the application storing the data doesn't have to wait for a long-distance round trip to complete the write transaction.
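
Heavily simplified, the trade-off looks something like the Python sketch below. The send_to_remote function and the in-memory queue are stand-ins for a real product's transport, not any vendor's actual API.

# Simplified sketch of synchronous versus asynchronous replication.

import queue
import threading

def send_to_remote(block):
    # Placeholder for the long-distance transfer and acknowledgement.
    pass

def synchronous_write(local_disk, block):
    local_disk.append(block)
    send_to_remote(block)      # wait out the full round trip
    return "ack"               # only once the remote copy exists

pending = queue.Queue()

def asynchronous_write(local_disk, block):
    local_disk.append(block)
    pending.put(block)         # hand off and return straight away
    return "ack"               # the remote copy may lag behind

def replicator():
    # Background thread drains the queue; anything still queued when
    # the source array dies is the data you stand to lose.
    while True:
        send_to_remote(pending.get())

threading.Thread(target=replicator, daemon=True).start()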

Lost and found

The downside? If and when the source array fails, the most recently written data may not have been received by the remote array.

When server applications switch from the failed array to the target array they may find some data is missing and then have to restart or repeat transactions. You can have a recovery point objective of zero data loss with synchronous replication, but not with asynchronous replication.

Whichever replication scheme you choose has cost, time, and possible data loss implications, and these have to be factored into your disaster recovery choice.

In replication as in art, the more perfect the execution, the more you will spend. ®
