
Big Data bites back: How to handle those unwieldy digits

When you can't just cram it into tables


Data is easy. It comes in tables that store facts and figures about particular items – say, people. The columns define the data to be stored about each item (such as FirstName, LastName) and there is one row for each person. Most tabular database engines are relational and we use SQL for querying. So this "Big Data" thang must simply be very, very big tables with lots and lots of rows.

It’s a tempting definition, but inaccurate. Big Data describes a genuinely different class of data. The best definition I know is more or less a negative one: Big Data is any data that doesn’t fit well into tables and that generally responds poorly to manipulation by SQL.

So we have to find other ways to store and analyse it. To understand why, think about what tables and SQL do well. Tables can store relatively complex data about very large numbers of very similar items.

In SQL, the SELECT clause lets you choose the columns you want to see – in other words, it lets you subset the table by columns. The WHERE clause lets you choose the rows. SQL, in short, is very good at subsetting data. It can, of course, do more – it can join tables and it can summarise (GROUP BY) – but essentially it is designed to go into a large set of well-organised data, extract a subset and present it to you in an answer table.
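A minimal sketch of that subsetting in action (the People table, its columns and the city are all mine, invented for illustration):

    -- A table of people: one row per person, one column per fact we store.
    CREATE TABLE People (
        FirstName VARCHAR(50),
        LastName  VARCHAR(50),
        City      VARCHAR(50)
    );

    -- SELECT subsets by column, WHERE subsets by row.
    SELECT FirstName, LastName
    FROM   People
    WHERE  City = 'Dundee';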

In direct contrast, Big Data is pretty varied in structure and, even if we do cram it into a table, the analysis we run against it isn’t usually subsetting.

Picture it

The easiest way to illustrate this is by example. Imagine a digitised X-ray image, perhaps a JPEG file. You want to analyse it algorithmically, looking for bright spots of a particular size, shape and intensity. Or, if this sounds too tame, think about scanning satellite images for little cruciform and deltoid shapes.

You can, of course, tabularise an image file, creating one row for each pixel and columns for X position, Y position, intensity and so on. One problem is that you end up with a narrow, very deep table, which is unwieldy. In terms of analysis, SQL can very easily find the bright pixels, but it is a very poor tool for deciding which groups of rows represent all the pixels in a single spot.
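To make that concrete, a sketch (the Pixels table, its columns and the threshold are all mine): the first query is trivial, and the comment after it marks where SQL runs out of grace.

    -- One row per pixel (schema invented for illustration).
    CREATE TABLE Pixels (
        X         INTEGER,
        Y         INTEGER,
        Intensity INTEGER,   -- say, 0 (black) to 255 (white)
        PRIMARY KEY (X, Y)
    );

    -- Easy: subsetting finds every bright pixel.
    SELECT X, Y
    FROM   Pixels
    WHERE  Intensity > 200;

    -- Hard: deciding which of those pixels are neighbours forming a
    -- single spot means chasing adjacency row by row – clumsy in SQL.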

“But,” you cry, “I’d use a User Defined Function for that!” Yes, so would I. In my experience, all data can be squeezed into a table and analysed in a relational database system, but at some point the effort required makes you think about other, more suitable containers and alternative analytical languages.
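For what it’s worth, the call might look something like this – FindSpots is entirely hypothetical, a table-valued function doing the neighbour-chasing procedurally rather than in SQL proper:

    -- FindSpots is a hypothetical user-defined function: it walks
    -- adjacent bright pixels and returns one row per detected spot.
    SELECT SpotId, CentreX, CentreY, TotalIntensity
    FROM   FindSpots(200);   -- 200 = brightness threshold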

Given that we are defining Big Data as “not tabular”, we aren’t saying that all Big Data is similar in structure. So, in a diagram of all possible data, there is a subset where the structure is well-defined (tabular data) and there is the rest – which we are now calling Big.

The name itself comes from the volume, which is usually huge – and that brings us neatly to the three “V”s often used to characterise Big Data:

  • Volume: Big Data often appears in huge volumes – think terabytes and petabytes
  • Velocity: It tends to come at you very fast – think Twitter feeds
  • Variety (of structure): see above

I have no particular problem with these three Vs; I’ve even seen some additions:

  • Value: if it isn’t valuable, why are you storing and analysing it?
  • Veracity: It has to be accurate, otherwise your analysis is worthless

But I will admit to being slightly sceptical of definitions driven by the desire for absolute alliteration.

So, for me, despite the name, the most important feature of Big Data is its structure, with different classes of Big Data having very different structures.

With that definition, we can start to look at examples. A Twitter feed is Big Data; the census isn’t. Images, graphical traces, Call Detail Records (CDRs) from telecoms companies, web logs, social data and RFID output can all be Big Data. Lists of your employees, customers and products are not.

So how can you store and manipulate Big Data? The answer depends on the structure of your particular flavour, but take a look at the large – and increasing – number of NoSQL database systems out there, for example Cassandra, CouchDB and MongoDB.
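As a taste of how differently these systems think, a sketch in Cassandra’s SQL-like CQL (the table design is mine, invented for a Twitter-style feed): data is laid out to match the query you will ask, rather than normalised into neutral tables.

    -- CQL: rows are partitioned by account and clustered by time, so
    -- "latest posts for this account" is a cheap, pre-sorted read.
    CREATE TABLE tweets (
        account   text,
        posted_at timestamp,
        body      text,
        PRIMARY KEY (account, posted_at)
    ) WITH CLUSTERING ORDER BY (posted_at DESC);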

Ultimately, it is worth remembering that Big Data and its associated database systems are not in competition with existing relational systems. The analysis of tabular data is not going away, but it was only ever part of the story.

In the 1970s and '80s we tackled tabular data because it is common and (relatively) easy to store and manipulate. I say "relatively easy" because it took us at least 30 years to develop a good understanding of tabular data and transactions.

Big Data has always been there; we just couldn’t process it very well. That’s now changing and we are finally taking on the much harder – but very rewarding and lucrative – job of tackling it. It’s a big job. ®

Mark Whitehorn holds the chair of analytics at the University of Dundee. His role involves working on data output from mass spectrometers, two-dimensional graphical traces of three-dimensional peaks that must be detected and their volumes calculated. The trick isn’t to do the sums; it’s to do them rapidly because another 8Gbyte output file is always coming.
