
Data protection is best managed from the centre

Become the ruler of all you survey

Security people talk of an attack surface to describe exposure to malware and hacking. The bigger the attack surface, the more at risk you are.

Data is the fuel of a new industrial revolution, powering business changes such as digitalisation and advances in fields such as machine learning. The greater our reliance on data, the greater our exposure to security risks. If data is lost, stolen or unavailable, or if we break the rules of stewardship, we become vulnerable.

In addition, the European Union’s General Data Protection Regulation (GDPR), which comes into force in May 2018, introduces new requirements for the handling of personal data and fines for its mishandling. The more systems you have to protect, the greater your surface area for data loss and for falling foul of the GDPR.

The way data is used and stored is changing, and the rules governing its protection are moving on from relatively simple backup and recovery to a sharper focus on data management.

How do we achieve the necessary level of management? The answer is visibility and control, gained through an overall view of the data estate. Managing the protection of your data from a central vantage point is better than protecting individual data silos.

Central management of distributed resources is a well-proven way of operating in business. We take it for granted that as we move from one department to another in an organisation the general procedures and facilities will be dependably the same.

Consistent internal operations mean better cost control – think central purchasing for example – and more efficiency.

If businesses take a centralised approach to managing physical assets and services such as car fleets, office supplies, banking and more, why is data protection not treated in the same way? Instead we often see uncoordinated and varied practices geared towards protecting individual data silos.

The new wave

The old way of providing data protection – basically backup and archive – is focused on data sources or silos, and on-premise ones at that: a few relational databases and file collections plus data warehouses and BI systems.

Each of these is protected using backup software or systems, with, for example, deduplicating disk-based target arrays or tape libraries. The setup could include purpose-built backup appliances such as Data Domain, Quantum’s DXi and ExaGrid.

But a newer and more diverse working environment has emerged. This spans new endpoints such as mobile devices and laptops, and wraps in off-premise infrastructure services such as Amazon’s S3 data stores and software-as-a-service platforms such as Microsoft’s Office 365.

The data formats have expanded, too, from purely relational to unstructured formats stored using NoSQL databases. Data volumes have grown hugely, exacerbating the problems.

Traditional data protection models trying to adapt to this new world are inhibited by their legacy software and design. But newer models are emerging from companies such as Rubrik and Druva.

These are either on-premise systems that typically include hyper-convergence and employ technologies such as Hadoop, or cloud-based models that cut hardware costs and scale elastically.

A bird’s eye view

The changing landscape has brought an evolution in technology. A new set of data protection suppliers is pushing the idea that you need to look at your data in its entirety and decide where, how and how often to protect it.

Additionally, factors such as GDPR make it critical to know more about your data – where it is, who has access to it and how it is being used. Some vendors go so far as to offer additional data management capabilities, such as copy data generation and reclaim, data tiering, eDiscovery and more.

We’re seeing suppliers offer overall data management, control and visibility as well as data protection. They have the ability to select different protection methods – backup, clone, snapshot, replication and so on – based on the differing protection needs of data subsets.
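
As a rough illustration of that idea, here is a minimal sketch in Python (with entirely hypothetical tier names and methods, not any vendor's actual API) of how protection methods might be matched to data subsets according to their needs:

```python
from dataclasses import dataclass

# Hypothetical criticality tiers mapped to protection methods (illustration only).
METHOD_BY_TIER = {
    "mission-critical": "synchronous replication",  # near-zero recovery point
    "business-critical": "hourly snapshot",         # small recovery point
    "general": "nightly backup",                    # standard protection
    "archive": "periodic copy to archive storage",  # long-term retention
}

@dataclass
class DataSubset:
    name: str
    tier: str

def choose_protection(subset: DataSubset) -> str:
    """Pick a protection method for a data subset based on its tier."""
    return METHOD_BY_TIER.get(subset.tier, "nightly backup")

for subset in (DataSubset("orders-db", "mission-critical"),
               DataSubset("file-shares", "general")):
    print(f"{subset.name} -> {choose_protection(subset)}")
```

In practice the mapping would be driven by policy metadata held centrally rather than hard-coded, but the principle is the same: one place decides how each subset is protected.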

To these companies, an approach based solely on protecting individual silos in isolation from each other is no longer valid.

This approach doesn’t provide the necessary degree of visibility and also breeds something troubling: dark data, caused by replication between systems.

Additionally, with the significant increase in data growth and longer retention periods, infrastructure and administrative expense can become excessive if not managed carefully.

The siloed approach hides the dangers of over-copying data into protection systems, of under-copying and leaving data unprotected, and of failing to meet governance and regulatory needs, all leading to excess expense and even financial penalties.

Morality tales

When things go wrong, they do so with spectacular consequences.

  • UK telco KCOM was hit with a £900,000 fine for failing to ensure its emergency service operated correctly.
  • Mobile operator Three was fined £1.9m for a UK emergency call handling failure.
  • A “confidential commercial settlement” was reached between Hewlett Packard Enterprise and the Australian government following SAN data loss failures.

What does the new system of data protection and recovery look like?

The data to be protected has to be viewed as a single logical resource, even if it is physically distributed across various silos both on and off premises.

To guarantee full data protection coverage and consistency, this single pool, or data estate, has to be centrally managed, monitored and protected by a system that spans the on-premise and public cloud worlds.

Protection policies that can be applied to new data sources need to be in place, defining the type and frequency of basic protection according to recovery point and recovery time needs, replication, and archiving.
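
A minimal sketch of what such a policy might look like, expressed here in Python; the field names are assumptions for illustration, not any particular product's schema:

```python
# Illustrative protection policy: field names are assumptions, not a real product schema.
GOLD_POLICY = {
    "name": "gold",
    "rpo_minutes": 15,            # maximum tolerable data loss
    "rto_minutes": 60,            # target time to restore service
    "backup_frequency": "hourly",
    "replication": {"enabled": True, "target": "secondary-site"},
    "archive": {"enabled": True, "after_days": 90, "retention_years": 7},
}

def apply_policy(data_source: str, policy: dict) -> None:
    """Attach a policy to a newly discovered data source (placeholder logic)."""
    print(f"{data_source}: backup {policy['backup_frequency']}, "
          f"RPO {policy['rpo_minutes']} min, RTO {policy['rto_minutes']} min")

apply_policy("new-sql-server", GOLD_POLICY)
```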

The system has to provide a portal or its equivalent to view the data and to support compliance with issues such as GDPR.

It has to enable businesses to recover from ransomware attacks and other data loss events. Organisations need to understand the importance of data isolation, which highlights the value of cloud storage, because ransomware attacks increasingly target on-premise servers, including data protection systems.

It should enable you to choose the best destination targets for your protection data, bearing in mind recovery times, archival needs, data identification and removal needs, and also eDiscovery and legal holds. Only a central control plane can provide this over-arching ability to unify data protection and provide consistency.

The implementation of this ideal data protection control facility could be on-premises or in the cloud. An on-premise deployment can mean rising costs as data sets grow and you have to buy more hardware. One option would be to keep data with critical RTO/RPO requirements on premise, with the remainder going to a deduplicated cloud environment.
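
To make that option concrete, here is a small, hypothetical placement rule in Python; the threshold and target names are assumptions, not a recommendation from any vendor:

```python
# Hypothetical placement rule: data with tight recovery-time needs stays on premise,
# the rest goes to a deduplicated cloud store. Threshold and names are assumptions.
ON_PREM_RTO_THRESHOLD_MINUTES = 60

def choose_target(rto_minutes: int) -> str:
    """Route protection copies to a target based on the recovery time objective."""
    if rto_minutes <= ON_PREM_RTO_THRESHOLD_MINUTES:
        return "on-premise backup appliance"
    return "deduplicated cloud storage"

for workload, rto in [("payments-db", 30), ("hr-archive", 1440)]:
    print(f"{workload} -> {choose_target(rto)}")
```

The point is that the decision is made once, centrally, rather than separately for each silo.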

When should you change to this centralised overall approach to data protection? That’s a hard call: if you don’t change, nothing bad is likely to happen suddenly. However, you may be spending too much money on inadequate and inconsistent protection schemes that leave you vulnerable.

Organisations are becoming more mobile and distributed. People and systems are generating and storing data across the globe. As the perimeters of this estate expand, so the rules governing data protection must evolve.

Only a centralised form of data management that brings greater visibility and control will mitigate the risks. ®

Supported by Druva
