
What you need to know about keeping your cloud data safe

Seeking shelter from the storm

The first reaction many corporate users – even those who are quite technically aware – have when considering a migration to cloud computing is to worry about data security.

It is a fairly natural emotional response of course; you are effectively surrendering a kind of ownership of your data over to a third party.

So what if a cloud service were to fail? What if a cloud were compromised by some security vulnerability?

What if the cloud hosting provider failed to partition a multi-tenant set of instances properly, resulting in one customer's data leaking into another's, crossed-line style?

Which leads to the question: which came first, security or storage? From an IT history perspective, the answer is obviously storage but the two are now so closely interlinked that it is hard to imagine a time when we didn’t talk about them both in the same breath.

So how can you guard against losing your data? Should firms strategically place copies of their data in different geographic regions and with different suppliers?

Wake-up call

Should they retain a copy of data at their own or a trusted partner's premises? Is there some cloud security epiphany that we should be waking up to?

Actually, this should not be thought of as an epiphany. A lot of it is common sense. It is not an altogether ridiculous idea to store some copies of our cloud data on-premises or with a trusted partner.
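
To make that idea concrete, here is a minimal sketch of pulling a secondary copy of cloud data down to on-premises storage. It assumes the data lives in AWS S3 and that the boto3 library is available; the bucket name and target directory are hypothetical placeholders, not anything a particular provider prescribes.

```python
# Minimal sketch: mirror cloud objects down to local (or trusted-partner)
# storage as a secondary copy. Assumes AWS S3 via boto3; the bucket name
# and target directory are hypothetical placeholders.
import os
import boto3

BUCKET = "example-corp-backups"      # hypothetical bucket name
LOCAL_ROOT = "/srv/cloud-replica"    # on-premises target directory

def mirror_bucket_locally(bucket: str, local_root: str) -> None:
    """Download every object in the bucket to local storage."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            target = os.path.join(local_root, key)
            os.makedirs(os.path.dirname(target), exist_ok=True)
            s3.download_file(bucket, key, target)

if __name__ == "__main__":
    mirror_bucket_locally(BUCKET, LOCAL_ROOT)
```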

But we must realise that a lot of high-value data is live, transactional, real time, current and very dynamic. Storing legacy data for regulatory and compliance reasons is one thing; thinking about how to protect the live frontline data stream is a different matter.

A cloud data attack (not an official piece of terminology) might manifest itself as a fairly meaty blow where core data streams, cloud-based backups and virtual-machine configurations are smashed through.

DRaastic measures

Any IT sales consultant worth his or her salt would at this point try to tell us that we should have looked at the disaster recovery as a service (DRaaS) options available.

But DRaaS is only as good as the encryption layers that any single customer agrees to engineer it with. Plus it is still cloud based, so that is inherently where the problem lies, isn’t it?

To say that the cloud is any more or less secure than traditional terrestrial computing is a misconception. Every firm’s core data centre is connected to the web and the outside world through any number of electronic channels.

Therefore risk always exists. A cloud provider, generally speaking, has more security layers in place than any single company could aspire to bring online on its own. So what is the problem?

James Brown, director of solution architecture for EMEA at Alert Logic, says that keeping software up to date is a critical component of protecting applications on cloud platforms. As obvious as it sounds, we need to be sure we are running security software that is native to the cloud environment at hand to help detect and prevent breaches.

Brown urges us to look at the shared security model that many cloud hosting providers offer and at tools such as intrusion detection systems, vulnerability detection, web application firewalls and log management.

These kinds of services are available in the cloud and can even be backed by a managed 24x7 security service so that you do not need to hire those skills internally.
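
On the "keep software up to date" point, even a crude automated check is better than nothing. The sketch below flags instances with pending package updates; it assumes a Debian-style image where apt is available, and is an illustration rather than a substitute for proper patch management.

```python
# Minimal sketch: flag pending package updates so patching can be scheduled.
# Assumes a Debian/Ubuntu instance where "apt list --upgradable" exists;
# adapt for other distributions.
import subprocess

def pending_upgrades() -> list[str]:
    """Return the package lines that apt reports as upgradable."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # The first line of output is a header ("Listing..."), so skip it.
    lines = result.stdout.splitlines()[1:]
    return [line for line in lines if line.strip()]

if __name__ == "__main__":
    outstanding = pending_upgrades()
    if outstanding:
        print(f"{len(outstanding)} package(s) awaiting updates:")
        for line in outstanding:
            print("  ", line)
    else:
        print("No pending updates reported.")
```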

As we have said, keeping portions of cloud backup data replicated on a firm’s own premises or those of a partner is not out of the question, but unless the firewall and log management layer of protection is in place, what is the point?

David Lomax, director of sales engineering EMEA at Barracuda Networks, insists that most of the customers his firm speaks to who are looking to move to the cloud have specific security-related concerns.

“That question normally has a few parts to it: a) edge security for applications and servers; b) internal cloud networks security; and c) data security,” he says.

“Most of these concerns are nothing new, even for owned or hosted data centres. Edge security for web applications has typically been accomplished using a WAF [web application firewall]. Back-end network security has relied mainly on the physical security of the data centre, while data security was part of your internal backup and disaster recovery strategy.

“In cloud solutions, the norm just didn’t fit well. Having virtual WAFs and NG Firewalls allows companies to extend the same level of security to cloud solutions. Barracuda’s Azure and AWS virtual appliances have been key in mitigating ITSEC department fears and concerns over moving to a cloud solution.”

Downsides of backup

Hosting providers advocate cross-cloud backups (running as frequently as nightly if needed) for any business, regardless of size. It is important to point out, however, that while doing backups is fairly simple, there are a lot of "gotchas" that come into play.

This is not drag-and-drop computing. After file library classification and initial configuration, cross-cloud backups need to be validated with a test run and various layers of incremental scripting.
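
The validation step need not be elaborate to be worthwhile. As a minimal sketch, assuming plain files on disk and hypothetical paths, a test run can simply prove that every file in the backup target matches its source, checksum for checksum:

```python
# Minimal sketch: verify that backed-up copies match their source files by
# comparing SHA-256 checksums. Paths are hypothetical placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source_root: Path, backup_root: Path) -> list[Path]:
    """Return source files whose backup copy is missing or differs."""
    mismatches = []
    for src in source_root.rglob("*"):
        if not src.is_file():
            continue
        copy = backup_root / src.relative_to(source_root)
        if not copy.exists() or sha256_of(src) != sha256_of(copy):
            mismatches.append(src)
    return mismatches

if __name__ == "__main__":
    bad = verify_backup(Path("/data/live"), Path("/mnt/backup"))
    print("All copies verified" if not bad
          else f"{len(bad)} file(s) failed verification")
```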

So after overcoming those technical hurdles, that’s it, right? Once the backups are verified and set up to run you can sit back and pour yourself a drink?

Sysadmin guru Trevor Pott looks at this issue tangentially and remarks that if Amazon alone were to run full-bore backup and recovery for everyone's workloads (say, backup to tape), it would overwhelm its current operation, and any future one it could envisage.

“How could it cost-control that? Especially once people wanted files back not because of hacks but because some Mr McFumblefingers deleted something or changed the wrong file,” he says.

Backup, it would appear, is no open and shut case.

The art of restoration

Quicloud, a cloud consultancy based in Louisville, Colorado, warns that just about the most important, and probably most overlooked, part of this process is yet to come: verifying that you really can restore your data from your backups.

The quicloud technical blog advises: “[you should] make an agreement with your IT lead that on a (relatively) sane day in your IT department, you'll have a drill whereupon the manager will make local backup copies of a couple of non-critical files on your live server, and then delete them from the live server, mimicking a catastrophic file loss”.
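
In script form, such a drill might look something like the sketch below: stash safety copies of a couple of non-critical files, delete the "live" versions, restore them from the backup store and verify the result. All paths and file names here are hypothetical.

```python
# Minimal sketch of a restore drill: copy non-critical files aside, delete
# the "live" versions, restore from the backup store, then verify.
# All paths and file names are hypothetical placeholders.
import filecmp
import shutil
from pathlib import Path

LIVE = Path("/data/live")
BACKUP = Path("/mnt/backup")           # where the real backups land
SAFETY = Path("/tmp/restore-drill")    # local safety copies for the drill

DRILL_FILES = ["reports/test-report.txt", "scratch/notes.md"]  # non-critical

def run_drill(files: list[str]) -> bool:
    SAFETY.mkdir(parents=True, exist_ok=True)
    ok = True
    for rel in files:
        live, backup, safety = LIVE / rel, BACKUP / rel, SAFETY / Path(rel).name
        shutil.copy2(live, safety)      # 1. keep a local safety copy
        live.unlink()                   # 2. mimic catastrophic file loss
        shutil.copy2(backup, live)      # 3. restore from the backup store
        if not filecmp.cmp(live, safety, shallow=False):  # 4. verify contents
            print(f"RESTORE FAILED for {rel}")
            ok = False
    # If any step blows up, the safety copies in SAFETY can be put back by hand.
    return ok

if __name__ == "__main__":
    print("Drill passed" if run_drill(DRILL_FILES) else "Drill failed")
```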

Back to the core question: how can you prevent data loss when a cloud fails? Or more specifically, how can you prevent data loss when a cloud fails and your data streams are extremely dynamic and highly transactional?

Internal mechanics come into play here, including elements such as software backup agents. Their job is to handle backups of mission-critical servers as often as, for example, every 15 minutes.

These software backup agents look only for new or changed files, which will theoretically minimise the backup window required at any time.
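
A toy version of that "new or changed files only" logic, assuming plain local directories as source and target (both hypothetical), might look like this:

```python
# Minimal sketch of an incremental pass: copy only files that are new or
# have changed since the last run, judged by size and modification time.
# Source and target directories are hypothetical placeholders.
import shutil
from pathlib import Path

def needs_copy(src: Path, dst: Path) -> bool:
    """True if the destination copy is missing, a different size, or older."""
    if not dst.exists():
        return True
    s, d = src.stat(), dst.stat()
    return s.st_size != d.st_size or s.st_mtime > d.st_mtime

def incremental_backup(source_root: Path, target_root: Path) -> int:
    copied = 0
    for src in source_root.rglob("*"):
        if not src.is_file():
            continue
        dst = target_root / src.relative_to(source_root)
        if needs_copy(src, dst):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves the mtime we compare on
            copied += 1
    return copied

if __name__ == "__main__":
    n = incremental_backup(Path("/data/live"), Path("/mnt/backup"))
    print(f"{n} new or changed file(s) copied this pass")
```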

Let’s also remember that inline deduplication, which removes redundant data (duplicate copies) from the backup stream before it is written to the backup device, should be an inherent capability to ensure that the cleanest possible data is backed up at any moment.

Inline deduplication not only makes the data cleaner but is also more cost effective and efficient than post-process deduplication.

But remember that inline deduplication sits in the data path between a firm’s servers and the backup disk systems at the cloud hosting data centre or elsewhere, so some speed degradation can occur unless we are talking about a precision-engineered backup environment.

“Much of the data that a business deals with is repetitive and contains a lot of redundancy. Ensuring that data deduplication is in place can reduce the amount of data needing to be backed up by up to 80 per cent,” says Clive Longbottom, founder of analyst house Quocirca.

“This lowers the costs of cloud storage – and also makes restore faster after any problems. The customer can either carry it out as part of its overall data policy, or use a provider that builds target-based deduplication into its services.”

Barracuda Backup promises to minimise the time for completion of the full backup and replication process with advanced variable block deduplication. This analyses the data type and chunk size, setting a block size to obtain the greatest level of deduplication.
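
To illustrate the underlying principle rather than any vendor's implementation, the toy sketch below uses fixed-size blocks: each chunk is hashed, and a chunk is only written to the store if that hash has not been seen before. Real products tune the chunk size dynamically, as described above, but the mechanics are recognisable.

```python
# Toy sketch of inline deduplication: split a stream into fixed-size chunks,
# hash each one, and store a chunk only if its hash is new. Real backup
# products use far more sophisticated (often variable-size) chunking;
# this is purely illustrative.
import hashlib
import io

CHUNK_SIZE = 64 * 1024  # 64 KB fixed blocks for the example

def dedupe_stream(stream: io.BufferedIOBase, store: dict[str, bytes]) -> list[str]:
    """Write unique chunks into `store`; return the recipe of chunk hashes."""
    recipe = []
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:     # only new data hits the backing store
            store[digest] = chunk
        recipe.append(digest)
    return recipe

def rebuild(recipe: list[str], store: dict[str, bytes]) -> bytes:
    """Reassemble the original stream from its chunk recipe."""
    return b"".join(store[digest] for digest in recipe)

if __name__ == "__main__":
    data = (b"0123456789abcdef" * 4096) * 50   # 50 identical 64 KB blocks
    store: dict[str, bytes] = {}
    recipe = dedupe_stream(io.BytesIO(data), store)
    stored = sum(len(c) for c in store.values())
    assert rebuild(recipe, store) == data
    print(f"{len(data)} bytes in, {stored} bytes actually stored")
```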

Full recovery

From here we move to data recovery. Or more specifically, we move to granular-level file and message recovery. A virtual environment relies upon image-based restores as opposed to the bare metal restores of physical servers.

Image-based restore is at its most painless where a backup service offers the option to ship an appliance pre-populated with the data in the cloud. Higher-level image-based restore also includes preloaded configuration settings to get the user back up and running faster and with the least stress.

Inherent to these processes is encryption throughout. The US government has sanctioned the 192-bit AES encryption standard as the preferred method for locking down its most secret information. Higher-grade backup products ship with even more secure 256-bit AES encryption.
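
As a rough illustration only, a 256-bit AES-GCM seal around a backup blob might look like the sketch below. It assumes the third-party Python cryptography package; key handling is deliberately simplified and is not how a production backup product manages keys.

```python
# Minimal sketch: encrypt and decrypt a backup blob with 256-bit AES-GCM.
# Assumes the third-party "cryptography" package; key handling here is
# deliberately simplified and not how a real backup product manages keys.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_blob(plaintext: bytes, key: bytes) -> bytes:
    """Return nonce + ciphertext (AESGCM appends the auth tag itself)."""
    nonce = os.urandom(12)                    # fresh 96-bit nonce per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_blob(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key
    backup = b"quarterly ledger, customer records, VM configs ..."
    sealed = encrypt_blob(backup, key)
    assert decrypt_blob(sealed, key) == backup
    print(f"{len(backup)} bytes sealed into {len(sealed)} encrypted bytes")
```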

As Longbottom rightly says, backup and restore remains a black art for many. That only becomes clear once companies accept that data is now so central to their business that it has to be properly protected.

But it all comes back to the veracity of the backup process: the cloud restore functionality has to work the way it is supposed to.

Testing, testing

Backup and restore procedures need to be tested regularly if we are to succeed in preventing data loss when a cloud fails. If you thought merely running a cloud was complex, then keeping it alive, secure and “backuppable” at a granular level is another matter entirely.

Everett Dolgner, director of replication product management at Silver Peak, says that preventing data loss when a cloud fails can be difficult to address, as you are at the mercy of the provider.

“It is imperative to understand the redundancies in place with the provider – where your data is located and how it is recovered,” he says.

“Ideally, any mission-critical data will be available in multiple locations to protect against failure. If your only copy of data is with a cloud provider, and the provider goes down or experiences a catastrophic failure, you will lose it.

“When possible, yes, firms should retain a copy of data at their own premises or those of a trusted partner. Using a cloud gateway device, or local storage that tiers to the cloud, allows you to have data available locally when the cloud provider experiences downtime.”
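
A rough sketch of that local-first, cloud-second pattern follows, assuming S3 via boto3 and a hypothetical bucket: the write always lands on local storage, and the upload to the cloud tier is best-effort, so a provider outage never blocks access to the data.

```python
# Minimal sketch of local-first storage with a best-effort cloud tier.
# Assumes AWS S3 via boto3; bucket name and local path are hypothetical.
from pathlib import Path
import boto3
from botocore.exceptions import BotoCoreError, ClientError

LOCAL_ROOT = Path("/srv/gateway-cache")
BUCKET = "example-corp-tiered-storage"    # hypothetical bucket name

def store(name: str, payload: bytes) -> bool:
    """Always write locally; return True only if the cloud copy also landed."""
    local_path = LOCAL_ROOT / name
    local_path.parent.mkdir(parents=True, exist_ok=True)
    local_path.write_bytes(payload)                       # local copy first
    try:
        boto3.client("s3").put_object(Bucket=BUCKET, Key=name, Body=payload)
        return True
    except (BotoCoreError, ClientError):
        # Cloud tier unreachable: the data is still safe and readable locally,
        # and can be re-uploaded when the provider comes back.
        return False

if __name__ == "__main__":
    ok = store("invoices/2024-q1.csv", b"id,amount\n1,100\n")
    print("replicated to cloud" if ok else "held locally, cloud tier unreachable")
```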

The argument for specialist providers at every step of the cloud equation is not that hard to make. Moreover, the anatomy of cloud security, backup and restore is probably more complex than many would imagine, given the slick pitch from so many hosting providers who promise to just “spin up an instance” for us when needed.

Cloud backup is serious, so be sure to take it seriously. ®
