
Application recovery: Whose job is it anyway?

A stitch in time saves the crying


When it comes to supporting business activities, the primary role of IT is to ensure that the applications on which the organisation depends are available and operating at an acceptable level of service. Beyond this there is the unsung, but absolutely essential, matter of protecting both the application and the data it collects, creates and stores over their collective lifetime.

Traditionally, more emphasis has been placed on the protection of data than applications, with the standard approach being backup to tape. This takes time to perform (the ‘backup window’) and, to ensure data consistency, it often requires that access to the data be suspended while the backup takes place.

This can be quite a challenge, particularly for transactional applications (i.e. most of them): it means stopping the application, allowing all in-flight transactions to complete and then initiating the protection process. Once the backup completes, the application is restarted and users can resume work as part of their business processes.
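The stop-drain-backup-restart cycle described above can be sketched in a few lines. This is a toy illustration, not any real backup product's API: the `TransactionalApp` class and its methods are hypothetical stand-ins for an application that can be quiesced.

```python
import threading
import time

class TransactionalApp:
    """Hypothetical stand-in for a transactional application."""
    def __init__(self):
        self._accepting = True     # whether new transactions may begin
        self._in_flight = 0        # count of transactions not yet completed
        self._lock = threading.Lock()
        self.data = {"orders": 42}

    def begin_transaction(self):
        with self._lock:
            if not self._accepting:
                raise RuntimeError("application is quiesced for backup")
            self._in_flight += 1

    def end_transaction(self):
        with self._lock:
            self._in_flight -= 1

    def quiesce(self, timeout=5.0):
        """Stop accepting new transactions, then wait for in-flight ones to drain."""
        with self._lock:
            self._accepting = False
        deadline = time.time() + timeout
        while time.time() < deadline:
            with self._lock:
                if self._in_flight == 0:
                    return
            time.sleep(0.01)
        raise TimeoutError("in-flight transactions did not drain in time")

    def resume(self):
        with self._lock:
            self._accepting = True

def backup(app):
    """The 'backup window': quiesce, copy a consistent view, resume."""
    app.quiesce()
    try:
        # Taken while quiesced, so the copy is logically consistent
        snapshot = dict(app.data)
    finally:
        app.resume()
    return snapshot
```

The length of the backup window is whatever `quiesce` plus the copy takes, which is exactly why the article's later alternatives (snapshots, CDP, replication) aim to shrink or eliminate it.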

When recovery of data is required, the reverse process takes place, with administrators restoring data sets from one or more tapes. Both processes can take a considerable amount of time, even in the best of cases.

Today’s pressures for data to be available for longer have put additional constraints on the backup window, while at the same time reinforcing the need to recover data quickly. As a consequence, a number of alternative technologies have sprung up for use in data protection and recovery processes. These include backing up to disk, Continuous Data Protection (CDP) systems, point-in-time copies (snapshots) and replication solutions. Each offers different capabilities and suits a different range of data protection scenarios.

Despite these advances, the application still needs to be in a logically consistent state prior to the data protection process commencing, both to enable a recovery process to be operated and to leave users in a position to carry on work.

Today it is becoming unacceptable to bring down some applications to allow a backup, snapshot or replication process to take place, so another means must be found of ensuring that a self-consistent data set is protected. The challenge is exacerbated by complex, composite applications that may run using data from multiple systems, some of which may extend beyond the borders of the enterprise.
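One common pattern for protecting such composite applications without a full shutdown is a brief freeze-snapshot-thaw cycle coordinated across every component: no component is snapshotted until all of them have stopped writing, so the copies line up. A minimal sketch of that coordination, with a hypothetical `Component` class standing in for a database, queue or file store:

```python
class Component:
    """Hypothetical component of a composite application."""
    def __init__(self, name):
        self.name = name
        self.frozen = False

    def freeze(self):
        # In a real system: flush buffers and briefly pause writes
        self.frozen = True

    def thaw(self):
        self.frozen = False

    def snapshot(self):
        if not self.frozen:
            raise RuntimeError(
                f"{self.name}: snapshot of an unfrozen component is inconsistent")
        return f"snap-{self.name}"

def consistent_snapshot(components):
    """Freeze every component before snapshotting any, then thaw in reverse."""
    frozen = []
    try:
        for c in components:
            c.freeze()
            frozen.append(c)
        return [c.snapshot() for c in components]
    finally:
        # Thaw even if a freeze or snapshot failed part-way through
        for c in reversed(frozen):
            c.thaw()
```

The point of the `try`/`finally` is the article's point in miniature: protection logic must leave the application in a position to carry on work, whatever happens mid-process.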

This raises the question: who is responsible for ensuring application and data consistency in data protection and recovery scenarios? As applications become more complex architecturally, and in line with business demands for high availability, IT infrastructure administrators and application developers/implementation consultants need to find ways to work together to ensure that rapid application protection and recoverability are built into systems right from the word go.

From our research, we know how much of a challenge this can be. In a study conducted last year, it became clear just how much of a gamble organisations were prepared to take with the reliability of their IT systems. But more recent work has shown us the gulf between different factions in IT, which can only exacerbate the situation. We’re not going to attempt glib answers - although if you have any, do let us know - but we can say that all the tools in the world won’t help if the right pieces aren’t being put in place from the outset.

Where to start? Probably with the developers themselves, who need to understand what the data protection/recovery solution requires of the application, so that storage administrators can ensure the desired quality of service is achieved with minimum risk and, naturally, minimum cost. It’s time to make planning for recoverability an integral part of the system design and build process, and development is as good a place to start as any.

Freeform Dynamics Ltd


