Original URL: https://www.theregister.com/2013/08/05/unstructured_data/

There's a tide of unstructured data coming - start swimming

Or you could just work out a plan...

By Adrian Bridgwater

Posted in Databases, 5th August 2013 08:34 GMT

Whether you prefer to define the known size of our planet’s total digital universe in petabytes or even zettabytes, we can all agree the collective weight of data production is spiralling ever upwards.

While we focus on the relative merits of transactional versus analytical databases, the unstructured data that fails to fall within the general purview of either of these systems is the rising tide beneath.

We are not just talking about non-textual audio, video and graphical data here. Unstructured data must also be thought of in its textual form of Word documents, emails, social media messages and other as yet undefined data shapes.

Different stakeholders view structured and unstructured data differently. After all, in the world of video production it does not necessarily follow that all video data will be structured to those companies working with it.

Equally, textual information held in Word or other word processing applications may be regarded as unstructured if it does not align with the structure or access method of the database in which it is housed.

Unstructured data is defined by a combination of the data’s structure, the database or container structure holding the data, and the access method used to reach the data.

Without some form of reference, data value plummets like a stone

Love me, love my data

So how do we build procedures and policies for managing unstructured data? Just how swollen is the rising tide and where are the undertows that can suck us under?

How do we learn to love the new world of structured and unstructured data and live with both?

Do we need to exercise some almost chaos-theory-like aptitude for data agility to get through? Would it be wise to hold unstructured data in a structured database but access it via unstructured methods?

The fact is that context will always rank as ace high, says Rob Bamforth, principal analyst at research firm Quocirca. He argues that without some form of reference, data value plummets like a stone.

“This context has to be applied to the data as stored (in the form of metadata, tags, or anything to provide some context that can be built upon), otherwise it is applied when accessed, even by unstructured methods,” he says.
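Bamforth's point about applying context at storage time can be sketched in a few lines. The wrapper below is purely illustrative — the function name and metadata keys are assumptions, not any particular product's schema — but it shows the principle: the unstructured payload is stored alongside the tags that make it findable later.

```python
from datetime import datetime, timezone

def tag_document(raw_text, source, topics):
    """Wrap an unstructured blob with minimal metadata at storage time.

    The keys here (source, topics, stored_at) are illustrative; real
    schemes range from Dublin Core fields to free-form folksonomy tags.
    """
    return {
        "content": raw_text,          # the unstructured payload itself
        "source": source,             # where it came from (email, doc, feed)
        "topics": topics,             # human- or machine-applied tags
        "stored_at": datetime.now(timezone.utc).isoformat(),
    }

doc = tag_document("Q3 results beat expectations...", "email",
                   ["finance", "earnings"])
# A later search can filter on metadata instead of scanning raw text:
assert "finance" in doc["topics"]
```

The alternative Bamforth describes — applying context only at access time — is what every full-text search engine does, at the cost of re-deriving that context on every query.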

“For example, a Google search may appear as a complete open search of all the unstructured data on the internet, but in reality it is the specific product of the search and ranking algorithms used.

"Plus we need to factor in how far and fast the web spiders have trawled the data that appears to be available at any moment.”

Staying with the Google example, we need to remember that it is typically Google, rather than the content provider (such as the journalist writing this), that determines the value of the data.

Define the context

Whoever defines the context adds the value to the data – and it could also come from how different forms of data are combined.

As another example, if a government agency were to combine sufficient quantities of essentially public and shared data in such a way that its value increases dramatically and becomes secret intelligence, then once again we have brought structure to bear upon chaos.

So is the unstructured data tsunami out of control?

In a recent survey carried out by Unisphere and MarkLogic, 86 per cent of respondents said unstructured data is important to their organisation, but only 11 per cent had clear procedures and policies for managing it.

Andrew Anderson, CEO of information stream company Celaton, suggests that forward-thinking organisations are starting to use artificial intelligence and automation in making sense of unstructured data.

“Those who are still relying on human interpretation will be trying to stay afloat on the unstructured data tsunami with one hand tied behind their back,” he says.

Mountains of insight

Adrian Simpson, chief innovation officer at SAP UK, suggests similar automation intelligence for roles in business processes such as recruitment.

“Having a system in place that can understand a candidate’s CV without the need for human intervention is crucial. It is important to have access to this unstructured information but in a controlled environment to avoid littering databases with mountains of insight,” he says.

We start to see that structuring unstructured data for its own sake is both a waste of time and almost symbolic of some kind of big data science experiment.

This is a view echoed by Tibco chief technology officer Matt Quinn, who believes we must question how we are going to use the insights gleaned from unstructured data. Do we need the insight in real time? Will the insight be wasted in six months if we wait that long?

“The approach I often suggest is to use lightweight processing of unstructured information to add important context to structured and actionable real-time data,” he says.

“For example, correlating point-of-sale transactions with social feeds can provide great insight into how a consumer felt about the company and the product – without breaking the bank.”
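Quinn's example can be made concrete with a minimal sketch. Everything here — the sample data, the join key, and the crude word-list sentiment scorer — is an assumption for illustration; real pipelines would use a proper sentiment model and a streaming join. The point is the shape of the enrichment: structured rows gain one derived column from the unstructured side.

```python
# Hypothetical sample data: structured POS rows and unstructured social posts.
pos_transactions = [
    {"txn_id": 1, "product": "widget-a", "amount": 19.99},
    {"txn_id": 2, "product": "widget-b", "amount": 5.49},
]
social_posts = [
    {"product": "widget-a", "text": "love my new widget-a, great value"},
    {"product": "widget-a", "text": "widget-a broke after a week, terrible"},
    {"product": "widget-b", "text": "widget-b is fine"},
]

POSITIVE = {"love", "great", "fine"}
NEGATIVE = {"broke", "terrible"}

def naive_sentiment(text):
    """Crude word-list scorer: +1 per positive word, -1 per negative word."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def enrich(transactions, posts):
    """Attach average social sentiment per product to each structured row."""
    scores = {}
    for p in posts:
        scores.setdefault(p["product"], []).append(naive_sentiment(p["text"]))
    enriched = []
    for t in transactions:
        s = scores.get(t["product"], [])
        enriched.append({**t, "sentiment": sum(s) / len(s) if s else None})
    return enriched

for row in enrich(pos_transactions, social_posts):
    print(row["product"], row["sentiment"])
```

This is the "lightweight" part of Quinn's advice: the unstructured text is reduced to a single number per product rather than being fully structured and warehoused.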

Genius or idiocy?

Quinn warns we must also consider the reverse if we are indexing and searching unstructured information without understanding relevance and context.

Was the document or data created by someone who is considered to be a thought leader or an idiot? Once again it comes back to context.

The digital universe of western Europe will double every two and a half years

EMC conducted a Digital Universe study with IDC at the end of last year entitled Extracting Value from Chaos. This estimates that the digital universe of western Europe will grow from 538 exabytes to 5.0 zettabytes between 2012 and 2020 – more than 30 per cent a year.

That means it will double about every two and a half years. But Chris Roche, EMEA chief technology officer at Pivotal, cites projections that 45 per cent of western Europe’s digital universe in 2020 could still be useful if tagged and analysed correctly.
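Those two figures are consistent with each other, as a quick back-of-envelope check shows (assuming smooth compound growth between the study's 2012 and 2020 endpoints):

```python
import math

start_eb = 538    # western Europe's digital universe, 2012, in exabytes
end_eb = 5000     # 5.0 zettabytes = 5,000 exabytes, projected for 2020
years = 8

# Compound annual growth rate implied by the two endpoints
growth = (end_eb / start_eb) ** (1 / years) - 1

# Years to double at that rate
doubling = math.log(2) / math.log(1 + growth)

print(f"CAGR: {growth:.0%}, doubling time: {doubling:.1f} years")
# roughly 32 per cent a year, doubling about every 2.5 years
```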

So even if we vaguely know what we should be doing to structure our unstructured data, how do we do it and what tools should we use?

More specifically, isn’t it important to have a granular discussion as to what type of database (and indeed, database management environment) we should use?

Whether it is structured, unstructured or even semi-structured data we have at hand, John Glendenning, vice-president of Apache Cassandra distributor DataStax, argues that the ability of NoSQL to tackle this need is nearly always better than a relational database management system (RDBMS) such as Oracle.

“To cope with the huge volume and variety of data that can be coming into a business, flexible or dynamic schema design is required to accommodate all the formats of big data applications, including structured, semi-structured and unstructured data,” he says.

"In Cassandra, data can be represented via column families that are dynamic in nature and accommodate all modifications online.

“For businesses that track unstructured data such as social media entries, or every interaction that a user has with an online video or movie, the amount of data tracked for one user might equate to only a handful of interactions versus another user who has hundreds.

“Now, there are ways of modeling this in an RDBMS, but they don’t come out as clean as they do in a NoSQL database, which allows you to have rows in the same table that have wildly different numbers of columns and data types.”
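Glendenning's schema-flexibility point can be illustrated without Cassandra itself. In a wide-row model, each row key simply carries however many columns it needs; no table-wide schema change is required when one user accumulates hundreds of interactions and another has one. The sketch below is plain Python, not DataStax's API — it models the idea, not the product.

```python
from collections import defaultdict

# Wide-row sketch: one row per user, an open-ended set of columns per row.
# In a traditional RDBMS this would need a fixed column list, nullable
# padding columns, or a separate join table.
interactions = defaultdict(dict)

def record(user, column, value):
    """Add a column to a user's row; rows grow independently of one another."""
    interactions[user][column] = value

record("alice", "view:movie-42", "2013-08-01T10:00")
record("alice", "pause:movie-42", "2013-08-01T10:31")
record("alice", "rate:movie-42", 4)
record("bob", "view:movie-7", "2013-08-02T21:15")  # bob's row has one column

assert len(interactions["alice"]) == 3
assert len(interactions["bob"]) == 1
```

In Cassandra terms, the dict key corresponds to a row (partition) key and each entry to a dynamically added column — the property Glendenning describes as "rows in the same table that have wildly different numbers of columns".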

Intelligence quotient

Stemming the unstructured data tsunami is all about intelligence in data framework design (possibly the artificial type too). It is all about context and also all about data model flexibility.

We know that a huge amount of unstructured data is spam, so a re-engineering of the way data is treated by users' inboxes may be needed. This commonsense approach, along with de-duping and data mining, will also help.
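De-duplication of that kind is often hash-based: normalise each message, hash it, and keep only the first copy of each digest. A minimal sketch — the normalisation rules here (lower-casing, whitespace-collapsing) are illustrative assumptions, and real spam filtering uses far fuzzier matching:

```python
import hashlib

def dedupe(messages):
    """Drop exact duplicates by hashing normalised message content."""
    seen, unique = set(), []
    for msg in messages:
        # Normalise: lower-case and collapse runs of whitespace
        normalised = " ".join(msg.lower().split())
        digest = hashlib.sha256(normalised.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(msg)
    return unique

inbox = ["Win a FREE prize now", "win a  free prize NOW",
         "Meeting moved to 3pm"]
print(dedupe(inbox))  # the two spam variants collapse into one
```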

But we need to exercise caution. Remember what Einstein said: “Everything should be made as simple as possible, but no simpler.”

We can try and strip down our unstructured data all we like, but if we go too far we will ultimately lose the very context and clarity we first sought. To face the unstructured data tsunami, best learn to swim. ®