Understanding your information assets

What are you trying to solve?

Information is power, right? Well, just how powerful must most organisations feel today, given the amount of information they are packing? The immediate irony is that the opposite is generally true – we create data with gay abandon, but our ability to keep tabs on everything we create is proving increasingly inadequate. So what’s the answer?

We shouldn’t be too harsh on ourselves of course, as many of the problems around data are beyond anyone’s capabilities to have predicted or indeed solved in advance. For a wide variety of reasons lost in the mists of time, many organisations have highly fragmented data stores containing overlapping and sometimes contradictory data.

We could ‘blame’ the siloed nature of IT systems, but silos often exist for good reason, not least because it is actually possible to define a set of requirements for a single system. We might have complained about functional specifications being far too complicated and detailed, but at least they were simpler than the (theoretical) specs that tried to do everything for everyone.

Traditional procurement and economic models of IT have also played their part, along with stakeholder management, aka: “Who’s going to pay for the thing?” Again, it is far simpler to get funding and agreement for something that does a reduced set of things well, rather than trying to deliver the ultimate answer to everybody’s needs.

As a result we end up with the kinds of IT environments we see today, with lots of systems, each having their own little pockets of data. Packaged applications were supposed to change all that, of course, but IT rarely keeps up as departments re-organise, and companies merge, divest and change. And so we arrive at situations such as one organisation having eight ERP instances, or as one person told me yesterday, 49 separate directories and other stores of identity information.

The challenges are legion. Data quality remains an issue for many organisations, as information stored months or even years ago contains errors of varying forms, from spelling mistakes to important data items stored in the wrong fields because that was the only option at the time. Accessibility and interoperability are burdensome, pushing the Shangri-La of a single view of the customer forever just out of reach. Compliance with regulations remains nothing short of onerous, and of course, we have the operational overheads of managing, supporting and backing up the stuff.
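To make the wrong-field problem concrete, here is a minimal sketch of the kind of audit that surfaces it. The records, field names and the simplified postcode pattern are all hypothetical, chosen purely for illustration:

```python
import re

# Hypothetical legacy customer records: record 2's postcode ended up in
# the free-text notes field, a classic "only option at the time" mistake.
customers = [
    {"id": 1, "name": "Acme Ltd",    "postcode": "SW1A 1AA", "notes": ""},
    {"id": 2, "name": "Bolt & Co",   "postcode": "unknown",  "notes": "postcode: EC2V 7HH"},
    {"id": 3, "name": "Bolt and Co", "postcode": "EC2V 7HH", "notes": ""},
]

# A deliberately simplified UK-postcode shape, just for the sketch.
UK_POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$")

def audit(records):
    """Return the ids of records whose postcode field fails a basic format check."""
    return [r["id"] for r in records if not UK_POSTCODE.match(r["postcode"])]

print(audit(customers))  # record 2 holds its postcode in the wrong field
```

An audit like this only flags suspects; deciding whether the misplaced value in the notes field is authoritative still takes human judgement, which is part of why data cleansing stays expensive.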

Even addressing these issues assumes that you know what data exists in your organisation, and who’s using it for what purpose. After all, if you don’t know what you have, how on earth can you do anything to improve its lot? In today’s increasingly distributed environments, fragmentation is only going to get worse, raising the risk of incidents we’ve seen hit the press such as data leakage.

Vendors would no doubt say that they have solutions to all of these problems, from data discovery and cleansing, through high-end extract-transform-load (ETL), to more specific data-leakage prevention tools. But such offerings may only help you tread water, improving on the past and getting things a bit more ship-shape. They don’t fix the underlying lack of control, structure and discipline.

The alternative starting point is to worry first about what information you need, before paying too much attention to what you should be doing with it. Information modelling exercises that start with business activities and their supporting information needs do give a better handle on what should be seen as high-priority information – which then can inform exercises to identify what state that information is in, for example.

All kinds of methodology and tooling exist around such identification and mapping exercises. Enterprise architecture frameworks such as TOGAF expand on the different kinds of information, and technologies exist to discover, map and maintain the information models that result. Meanwhile, master data management, for example, is about identifying the key information assets of the organisation, and linking the ‘master’ view of such assets to the different places where they are instantiated – again, various tools exist to support such efforts.
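The master data idea described above can be sketched in a few lines: one ‘golden’ record per key asset, with links back to each system holding a copy. The system names, identifiers and fields here are invented for illustration, not taken from any real MDM product:

```python
# Hypothetical master data registry: a single golden record per customer,
# mapping back to every system where that customer is instantiated.
master = {
    "CUST-0042": {
        "name": "Bolt & Co",
        "instances": {
            "crm":     "crm:row:9913",      # name held locally as "Bolt & Co"
            "billing": "erp2:cust:118",     # name held locally as "Bolt and Co"
            "support": "helpdesk:acct:77",  # name held locally as "BOLT & CO LTD"
        },
    },
}

def systems_holding(master_id):
    """List every system that holds a copy of the given master record."""
    return sorted(master[master_id]["instances"])

print(systems_holding("CUST-0042"))
```

The hard part, of course, is not the data structure but agreeing which of the three local spellings is the master – exactly the ‘committee debate’ problem described below.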

The problem with areas like this is that they can all too easily become part of the problem too. In this industry we are very good at building Towers of Babel: huge edifices that started out trying to solve a problem that was simple to describe but easy to over-complicate. Data-asset mapping, discovery and classification exercises can end up with overblown descriptions of every single entity used in the business, whether important or not. Much time can be spent trying to optimise such things, slowed down by ‘committee debates’ as different parts of the organisation put their oar in. As one enterprise architect at an oil company once said to me: “We can’t even agree on what a ‘reserve’ is.”

The way we build and manage IT systems is becoming ever more complex, but the idea of solving complexity with even more complexity does not bode well. We will have to look for simpler ways of accessing, treating and managing information.

Part of the answer involves remembering the key question: “What problem are we trying to solve here?” If holding back the ocean (which makes a change from trying to boil it) is not a long-term strategy, a more important target is what constitutes a good-enough set of solutions to meet the higher priority needs of business users.

Or just maybe methodologies, tools and technologies really will one day cut a swathe through all those pesky repositories, and build a comprehensive view of everything that matters to your business, past and present. While this scenario is unlikely, if everything really could be automated to such an extent, there would be little to differentiate organisations that were good at managing information from those that weren’t. To extrapolate further, IT that ‘just worked’ might very quickly become no more than a utility, offering little in the way of competitive advantage. While today’s environments might be complex and fragmented, as the old adage goes, we should be careful what we wish for. ®
