Original URL: https://www.theregister.com/2011/03/03/snia_data_protection_part_2/

Introduction to data protection

Backup to Tape, Disk and Beyond

By Marcus Schneider, SNIA Europe member

Posted in Channel, 3rd March 2011 15:00 GMT

Deep dive El Reg has teamed up with the Storage Networking Industry Association (SNIA) for a series of deep dive articles. Each month, the SNIA will deliver a comprehensive introduction to basic storage networking concepts. The first article explored data protection. This second one looks at building and operating a backup system.

Part 2: Overview of a Backup Infrastructure

Part 1 discussed the fundamental concepts in data protection. Here we look at how you might build and operate a data protection infrastructure.

Components

Let’s take the backup of an accounting application server working with structured data. On this machine we have an agent, a piece of software that belongs to the backup application and that manages the collection of data and metadata as requested by the backup server. If this were an SAP application, the agent would know exactly what SAP data looks like and would be capable of quiescing the application to create a consistent state, so the data is ready to be backed up.

The backup server is where all the management of the backup software takes place and where the catalogue, an overview of all the backups that have taken place, is stored. The actual data movement is done by the media server or storage node. In this instance the media server/storage node reads the data from the agent and writes it to the backup target. The agent, media server and backup server are all parts of the backup application. The backup target could be tape- or disk-based; if it is disk-based it could, for example, be a deduplication appliance.
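To make the division of labour concrete, here is a minimal sketch in Python. The object and method names (agent.collect, media_server.write and so on) are invented for illustration and do not belong to any real backup product; the point is simply that the agent gathers data and metadata, the media server moves the bytes to the target, and the backup server only records a catalogue entry.

```python
# Illustrative sketch only: hypothetical agent/media_server/backup_server objects.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CatalogueEntry:          # held by the backup server
    client: str                # application server the agent runs on
    target: str                # tape library, disk pool, dedup appliance...
    started: datetime
    files: list                # metadata only; the data itself lives on the target

def run_backup(agent, media_server, backup_server, target):
    data, metadata = agent.collect()       # agent quiesces the app and gathers data
    media_server.write(data, target)       # media server moves the actual bytes
    backup_server.catalogue.append(        # backup server records what happened
        CatalogueEntry(agent.host, target, datetime.now(), metadata))
```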

Different backup approaches

The elements outlined above can be combined in a number of different ways to build a backup infrastructure; your budget and performance requirements will also affect the design. Let’s have a look at some of the main approaches to backup.

In a local backup the target is directly connected to the application server, and both the agent and the media server run on that application server. The application server is also connected to the backup server through the LAN, although here this network is only used for the metadata (or catalogue data); the physical backup data is written locally to the target. This means that while the backup is taking place there is a significant load on the application server.

Typically, though, you do not want a dedicated backup target attached to every application server. By putting the media server on the same machine as the backup server and connecting the backup target to that server, you allow the agent on the application server to transport the physical data over the LAN to the media server, and from there to the backup target.

You can use a wide range of technologies to move data, including CIFS, NFS, iSCSI, and NDMP. It is however important to bear in mind that during backups and restores the performance of the application server and of the LAN will tend to suffer no matter what. So how do we overcome this?

One answer is to attach the backup target directly to a media server and let the application server's storage replicate directly to the media server's storage. This ensures that only metadata travels over the LAN to the backup server. And because the backup is now performed from the replicated data rather than from the application server, there is no extra load on the application server.

Backup Schedules

We have looked at the physical aspect of a backup strategy; what about the logical side? This is where full, incremental and differential backups come into play. A full backup means you write a complete copy of the data to a target; the upside is that when you restore you have all the information you need. But a full copy is large and takes up significant resources on the target, the source and the network, so you don't want to go down this path every time you back up.

Instead you should consider an incremental backup, i.e. copying only the changes that have occurred since your last full backup; the next time round you copy only the data that has changed since the last incremental backup. This approach is much faster than carrying out full backups each time because you copy less data. The downside is that when it comes to restores you must first restore the last full backup and then every incremental taken since, so the restore process takes longer.

The third option is differential backups, a combination of the first two. Here every backup copies all the data that has changed since the last full backup, so when restoring, the last full backup coupled with the last differential is enough. Differential backups are slightly slower than incremental ones, since each one grows as changes accumulate after the last full, but the restores are faster.
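As a rough illustration of the three schedules, the following Python sketch picks the files to copy based on modification times. The "last full" and "last incremental" timestamps are assumed to be tracked elsewhere, and a real backup application would consult its catalogue rather than walking the filesystem like this.

```python
# Sketch of full, incremental and differential file selection by mtime.
import os

def files_changed_since(root, cutoff):
    """Return files under `root` modified after `cutoff` (a Unix timestamp)."""
    changed = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff:
                changed.append(path)
    return changed

def select_files(root, kind, last_full, last_incremental):
    if kind == "full":           # everything, regardless of age
        return files_changed_since(root, 0)
    if kind == "incremental":    # changes since the most recent backup of any kind
        return files_changed_since(root, max(last_full, last_incremental))
    if kind == "differential":   # changes since the last *full* backup
        return files_changed_since(root, last_full)
    raise ValueError(kind)
```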

File vs. Block-Level Backups

A backup can also operate at file level or at block level, and both approaches have pros and cons. File-level backup is simple and straightforward. However, with this model each file is backed up whenever it changes, so even small changes to large files lead to large backups. Also, if a file is open during the backup it sometimes will not get copied.

With block-level backup only changed blocks are copied. Although this can make backing up quicker, it requires much more client-side processing, and that can have an impact on overall performance.
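As a sketch of the idea, the fragment below hashes fixed-size blocks and copies only the ones whose hashes differ from the previous run. The block size and hashing scheme are illustrative assumptions; real products usually track changed blocks in the filesystem or hypervisor instead of re-reading everything.

```python
# Illustrative block-level change detection via per-block hashing.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024   # 4 MiB blocks, an arbitrary choice for the sketch

def changed_blocks(path, previous_hashes):
    """Yield (index, block, digest) for blocks that differ from the last run.

    `previous_hashes` maps block index -> digest recorded at the previous backup.
    """
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if previous_hashes.get(index) != digest:
                yield index, block, digest
            index += 1
```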

Synthetic Full Backups and Incremental Forever

It is possible to combine the advantages of incremental and full backups through a synthetic full backup. With this approach you make incremental backups at regular intervals and synthetically combine them with the previous full backup, on the backup target, to create a new full backup without involving the application server or its original data.

This eliminates the need to run regular full backups, yet when it comes to restores you still have a recent full backup at hand for a fast restore, plus only a relatively small number of incrementals.
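A toy illustration of the synthesis step, assuming each backup is represented simply as a mapping from file path to stored object (deleted files are ignored for brevity):

```python
# Build a synthetic full on the backup target: start from the last full and
# apply each incremental in order, keeping the newest copy of every file.
# The application server is never touched.
def synthesize_full(last_full, incrementals):
    synthetic = dict(last_full)          # copy of the last full backup
    for incremental in incrementals:     # oldest first
        synthetic.update(incremental)    # newer versions overwrite older ones
    return synthetic                     # behaves like a full backup at restore time
```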

The Right Backup Target

Tape

Tapes have been used in backup environments longer than any other medium, typically in tape libraries (large tape automation systems). Tapes can be physically removed and transported, which is a big advantage compared with disk. But the appeal of tape doesn't end there: once data is stored on tape it doesn't use any power unless it is accessed, and it has a lifespan of up to 30 years. This, and its very low price tag, make tape an ideal choice for archiving and disaster resilience. So although its demise has often been predicted, tape still plays a vital role in backup.

As for performance, tape is a sequential medium, i.e. you cannot access data randomly. Streaming tape is, however, very fast; LTO-5 offers a compressed transfer rate of up to 360 MB/s (roughly 1.3 TB per hour). The challenge, unfortunately, is to feed the drive with enough data, at the right speed, to use this advantage.
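A quick back-of-the-envelope calculation, assuming the quoted 360 MB/s and an eight-hour overnight window, shows what "enough data at the right speed" means in practice:

```python
# At 360 MB/s a drive writes roughly 1.3 TB per hour, so the source disks,
# network and backup host together must sustain that rate or the drive stalls
# and throughput drops sharply.
rate_mb_s = 360
hours = 8                                           # e.g. an overnight backup window
tb_per_window = rate_mb_s * 3600 * hours / 1_000_000
print(f"{tb_per_window:.1f} TB per {hours}-hour window")   # ~10.4 TB
```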

Disk

As the price/capacity ratio continues to fall, disk has become an attractive target medium as well. Its main advantage is random access, which helps speed up restores. Also, in contrast to tape, disks have no mechanical interfaces that can easily fail.

Choosing the right disk for a backup-to-disk (B2D) or disk-to-disk (D2D) environment means looking at the interface (Fibre Channel, SAS, SATA or NL SAS, i.e. Nearline SAS, a high-capacity, lower-speed drive with a SAS interface) and at whether to connect through a SAN or NAS. Cost analysis shows that combined disk/tape stacks offer very good backup/restore performance at an attractive price point; a best practice is to back up to disk first and then write a second copy to tape (B2D2T).

VTLs

However, such environments can become quite complex and expensive to manage. Virtual Tape Libraries, or VTLs, are a way to hide this complexity and combine disk and tape in one system. By virtualising the tape interface you get fast backups and restores while still embracing the benefits of tape. VTLs also allow remote replication and data deduplication without having to change the traditional backup setup described above.

CDP

CDP, or Continuous Data Protection, is the concept of tracking every change to data and enabling a restore to any point in time. Because you are continuously backing up, there are no dedicated backup windows. In theory this should enable an RPO of zero; in practice it doesn't, because you still need a meaningful, consistent point in time to go back to.

So although the price tag is high, CDP offers capabilities that are often not needed, hence the trend towards "near-CDP" systems, which offer a large number of usable recovery points and so bring the RPO down significantly. Near-CDP is usually implemented through snapshot and replication technology; it is an effective choice for critical data with high service-level requirements.

The Importance of Data Deduplication in Data Protection

Data deduplication, a disk-based technology, replaces multiple copies of data, at variable levels of granularity, with references to a shared copy in order to save storage space and/or bandwidth, especially in backup environments. In practice it finds identical blocks of data at a sub-file level and stores them only once. By deduplicating data you can easily save more than 90 per cent of your target capacity if you are writing full backups daily.

Saving this much capacity also helps replication, which becomes much faster and requires much less bandwidth. Data deduplication does come with performance trade-offs, though, as the processing takes time.
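To illustrate the principle, here is a much-simplified deduplicating store in Python. The block size and hash function are arbitrary choices for the sketch, not any product's design; real systems typically use variable-length chunking and far more careful indexing.

```python
# Simplified sub-file deduplication: store each unique block once, keyed by its
# hash, and keep only a list of block references per backup stream.
import hashlib

class DedupStore:
    def __init__(self, block_size=128 * 1024):
        self.block_size = block_size
        self.blocks = {}                 # digest -> block data (stored once)

    def write(self, data):
        """Store `data`, returning the list of block digests that reference it."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)   # duplicates cost nothing extra
            refs.append(digest)
        return refs

    def read(self, refs):
        """Reassemble a backup stream from its block references."""
        return b"".join(self.blocks[d] for d in refs)
```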

Next Steps in Data Protection

Where do we go from here? Here are some best practices to identify the most effective data protection strategy for your organisation:

- A new box or a new piece of software might not be the answer
- Assess your current protection environment for strengths and weaknesses
- Assess risk vs. cost vs. complexity. Include your “internal customers” in your decision-making process
- Do not use the same level of protection for all your data: use different media, store in different locations
- Match your RPO and RTO goals with the right technologies
- Performance, especially restore performance, is always a key consideration

When in doubt, call the experts!

Bootnote

This article was written by Marcus Schneider, SNIA Europe Board member, and Director of Product Marketing at Fujitsu for Storage Solutions.

For more information on this topic, visit: www.snia.org and www.snia-europe.org. To download the tutorial and see other tutorials on this subject, please visit: http://www.snia.org/education/tutorials/2010/fall

About the SNIA

The Storage Networking Industry Association (SNIA) is a not-for-profit global organisation, made up of some 400 member companies spanning virtually the entire storage industry. SNIA's mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organisations in the management of information. To this end, the SNIA is uniquely committed to delivering standards, education, and services that will propel open storage networking solutions into the broader market.

About SNIA Europe

SNIA Europe educates the market on the evolution and application of storage infrastructure solutions for the data centre through education, knowledge exchange and industry thought leadership.  As a Regional Affiliate of SNIA Worldwide, we represent storage product and solutions manufacturers and the channel community across EMEA.