How to be certain about your data in an uncertain future

Be a speedboat, not an oil tanker. And don’t let the CEO read Forbes

If it weren’t for users, managers, or compliance execs, IT would be an easy place to work, with goalposts that stayed put. The real world is far less predictable: the rules of play can change mid-game. So how do you design data strategies to cope?

Data regulations are a good example. The EU–US Safe Harbour agreement made the rules clear when it came to storing Europeans’ personal data across the Atlantic. Then Snowden revealed the scale of US snooping on everyone’s data, privacy campaigners took the arrangement to the European courts, and the whole thing unravelled, leaving companies less certain about where they stood than before.

Other regulatory factors may influence what you have to do with your data. Maybe your industry will issue new guidelines. Or your compliance officer will suddenly decide that they have to anonymise everything.

The changes don’t have to be regulatory, either. A merger or acquisition might mean that your IT department eats – or is eaten by – another. Your data may suddenly have to play nicely with someone else’s. Perhaps the CFO went to a conference last weekend and has since decided that hybrid cloud is the way to go, or maybe the CEO read an article about big data and real time analytics in Forbes. You can run when you see them walking towards your office, but you can’t hide.

Jon Cano-Lopez is the CEO at REaD Group, which builds vast databases of UK consumer data, and his tech team has to think constantly about how to ensure their data architectures can adapt to change.

“If you ever try to sit down and work out a solution, by the time it is delivered, the requirements will have changed slightly,” he said, adding that you have to future-proof the data architectures without endangering sensitive data. “The underlying data structures have to be flexible, as do all the onboarding processes. But you have to do that without taking shortcuts.”
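Cano-Lopez doesn’t spell out how that flexibility is modelled, but one common approach is to pin down only the core identity fields and treat everything else as an open attribute map, so a new data feed can be onboarded without a schema migration. A minimal Python sketch; all names here are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ConsumerRecord:
    """Core identity fields are fixed; everything else lives in an
    open attribute map, so onboarding a new data source does not
    force a schema migration."""
    record_id: str
    source: str
    attributes: dict[str, Any] = field(default_factory=dict)

    def merge(self, incoming: dict[str, Any]) -> None:
        # Fields from a freshly onboarded feed simply extend the map;
        # existing keys are overwritten by the fresher data.
        self.attributes.update(incoming)

record = ConsumerRecord(record_id="c-1001", source="retail-feed")
record.merge({"postcode": "EC1A 1BB", "opt_in_email": True})
record.merge({"lifetime_value": 420.0})  # a field nobody planned for
```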

What should you hedge for?

Doing this properly involves some form of risk analysis. You may not know what’s going to happen in the future, but you can at least figure out what’s more likely, and what’s going to have the biggest impact. Look at external pressures on the company, from regulators and elsewhere. What have competitors had to do?

Are companies in your sector driven towards low-latency apps that need data residing closer to the server? Is your competition migrating to the cloud and saving money? What might that mean for you? Another risk to look for is vendor lock-in, warned Artyom Astafurov, senior vice president of M2M at global technology consulting firm DataArt.

“When you’re building a data structure around a platform like Amazon Web Services, you’re highly dependent on their services,” he said, by way of example. “Vendor lock-in is the price you pay for a gain in velocity.” The same is often true when choosing particular hardware vendors for on-premises solutions.

Having a list of possible external disruptors and their associated effects on your data gives you a corresponding list of desired characteristics. If a change of cloud strategy is plausible, for example, data portability becomes key. And if your business is ‘risk on’ and looking for growth, your board may be anticipating new products and services that will place fresh demands on your infrastructure. How strong is your dialogue with them?
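One lightweight way to run that analysis is a simple risk register: score each disruptor for likelihood and impact, multiply the two, and let the product rank your priorities. A minimal Python sketch; the disruptors, scores, and implied capabilities below are purely illustrative:

```python
# Toy risk register: likelihood and impact on a 1-5 scale,
# priority = likelihood * impact. All entries are illustrative.
disruptors = [
    # (disruptor, likelihood, impact, capability it implies)
    ("New data-residency rules",  4, 5, "data portability"),
    ("Rivals migrating to cloud", 3, 4, "cloud-ready storage APIs"),
    ("Vendor price hike/lock-in", 3, 3, "provider-neutral tooling"),
    ("M&A data integration",      2, 5, "common formats and metadata"),
]

# Sort by priority score, highest first.
ranked = sorted(disruptors, key=lambda d: d[1] * d[2], reverse=True)
for name, likelihood, impact, capability in ranked:
    print(f"{likelihood * impact:>2}  {name}  ->  {capability}")
```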

Once you have worked out the likely changes and the capabilities you want from your data architecture and supporting infrastructure, audit what you already have and perform a gap analysis.

What stands out as a problem area? Data silos are an obvious candidate. Analytics are poised to transform contact centre operations, for example, but that only works with visibility, which means access to data from sources across the company. If contact centres matter to your business but your data is stovepiped in different systems and formats, unlocking it could be a focal point for your modernisation roadmap.
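In code terms, the gap analysis is little more than a set difference between the capabilities the risk register calls for and what the audit turned up. Another illustrative sketch:

```python
# Capabilities the risk register says you will need...
desired = {"data portability", "cloud-ready storage APIs",
           "common formats and metadata", "single customer view"}

# ...versus what the audit of the current estate actually found.
current = {"cloud-ready storage APIs", "nightly batch exports"}

# Whatever is desired but not current is a roadmap candidate.
gaps = desired - current
print("Modernisation roadmap candidates:")
for capability in sorted(gaps):
    print(f" - {capability}")
```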

Abstract all of the things

One of the biggest barriers for companies trying to make their data architectures and supporting infrastructures less rigid is that the data services and the infrastructure they rely on are still tightly coupled, warned Astafurov. Abstracting one from the other, he argued, makes it easier to manage both programmatically.

When vendors talk about software-defined anything, from storage to whole datacentres, this is typically what they’re on about. The concept promises IT departments the ability to adapt their infrastructure to support new services as necessary. Astafurov takes it a step further, though, advocating services that can be run automatically on standardised, repeatable clusters.
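In practice, that decoupling means services program against an interface rather than a concrete store, so the infrastructure behind it can be swapped without touching the service. A minimal sketch, assuming a local-disk backend and an S3-style backend via boto3; the class and method names are invented for illustration:

```python
import abc
import pathlib

class BlobStore(abc.ABC):
    """Services program against this interface; which infrastructure
    sits behind it is an operational decision, not a code change."""

    @abc.abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abc.abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalDiskStore(BlobStore):
    """On-premises backend: keys map to paths under a root directory."""
    def __init__(self, root: str) -> None:
        self.root = pathlib.Path(root)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

class S3Store(BlobStore):
    """Same contract, cloud infrastructure behind it (boto3 assumed)."""
    def __init__(self, bucket: str) -> None:
        import boto3  # third-party dependency
        self.bucket = bucket
        self.client = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=self.bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()
```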

“Resource management and service discovery helps you to separate the infrastructure from the services,” he said, citing tools like Apache Mesos, which can manage clusters of machines and run services as containers. “What we are seeing lately (and Mesosphere is a good example) is building environments where the payload is a container which is dynamically scheduled to that machine,” he explained.
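Mesosphere’s Marathon framework, for instance, exposes that scheduling through a REST API: you describe the container and its resource needs, and the scheduler picks a machine in the cluster to run it on. A hedged sketch of submitting an app definition to Marathon’s v2 endpoint; the host, app ID, and image are hypothetical:

```python
import json
import urllib.request

# Illustrative Marathon app definition: Marathon schedules the
# container onto whichever Mesos agent has the resources free.
app = {
    "id": "/analytics/ingest",
    "cpus": 0.5,
    "mem": 256,            # MB
    "instances": 2,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "mycompany/ingest:latest"},  # hypothetical image
    },
}

req = urllib.request.Request(
    "http://marathon.example.local:8080/v2/apps",  # hypothetical host
    data=json.dumps(app).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```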

Separating the data and the services in this way makes it possible to control where and how the data is stored programmatically, typically via APIs. This underpins the cloud computing concept, and can free up companies to begin moving their workloads around in their on-premise infrastructure based on new business requirements, or even farming parts of them out to third party providers in a hybrid cloud arrangement. Until they build this level of abstraction into their systems, that fluidity will be difficult to achieve.
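Continuing the BlobStore sketch from earlier (same invented names), placement then becomes a policy decision expressed in code rather than a re-plumbing exercise:

```python
def choose_store(sensitivity: str,
                 on_prem: "BlobStore",
                 cloud: "BlobStore") -> "BlobStore":
    # Policy, not plumbing: regulated personal data stays on-premises,
    # everything else can be farmed out to a third-party provider.
    return on_prem if sensitivity == "personal" else cloud

store = choose_store("personal",
                     LocalDiskStore("/srv/data"),
                     S3Store("acme-archive"))
store.put("customers/c-1001.json", b'{"postcode": "EC1A 1BB"}')
```

When the cloud strategy changes, only the policy function needs to move; the services writing and reading data never notice.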
