Big Data bites back: How to handle those unwieldy digits

When you can't just cram it into tables

Data is easy. It comes in tables that store facts and figures about particular items – say, people. The columns define the data to be stored about each item (such as FirstName, LastName) and there is one row for each person. Most tabular database engines are relational and we use SQL for querying. So this "Big Data" thang must simply be very, very big tables with lots and lots of rows.

It’s a tempting definition, but inaccurate. Big Data describes a genuinely different class of data. The best definition I know is more or less a negative one: Big Data is any data that doesn’t fit well into tables and that generally responds poorly to manipulation by SQL.

So we have to find other ways to store it and to analyse it. To understand why, think about what tables and SQL do well. Tables can store relatively complex data about very large numbers of very similar items.

In SQL, the SELECT clause lets you choose the columns you want to see; in other words, it lets you subset the table by columns. The WHERE clause lets you choose the rows. In other words, SQL is very good at subsetting data. It can, of course, do more – it can join tables, it can summarise (GROUP BY) – but essentially it is designed to go into a large set of well-organised data, extract a subset and present it to you in an answer table.
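To make that concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the People table and its rows are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE People (FirstName TEXT, LastName TEXT, City TEXT)")
conn.executemany(
    "INSERT INTO People VALUES (?, ?, ?)",
    [("Ada", "Lovelace", "London"),
     ("Charles", "Babbage", "London"),
     ("Grace", "Hopper", "New York")])

# SELECT subsets by column, WHERE subsets by row.
for row in conn.execute(
        "SELECT FirstName, LastName FROM People WHERE City = 'London'"):
    print(row)

# GROUP BY summarises: one answer row per city.
for row in conn.execute("SELECT City, COUNT(*) FROM People GROUP BY City"):
    print(row)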

In direct contrast, Big Data is pretty varied in structure and, even if we do cram it into a table, the analysis we run against it isn’t usually subsetting.

Picture it

The easiest way to illustrate this is by example. Imagine a digitised X-ray image, perhaps a JPEG file. You want to analyse it algorithmically, looking for bright spots of a particular size, shape and intensity. Or, if this sounds too tame, think about scanning satellite images for little cruciform and deltoid shapes.

You can, of course, tabularise an image file, creating one row for each pixel and columns for X position, Y position, intensity and so on. One problem is that you end up with a narrow, very deep table, which is unwieldy. In terms of analysis, SQL can very easily find the bright pixels; it is, however, a very poor tool for deciding which groups of rows represent all the pixels in a single spot.
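Here is a sketch of that contrast in plain Python, with invented pixel values and an invented brightness threshold. Finding the bright pixels is a one-line filter – the SQL equivalent is a WHERE clause – but grouping them into spots needs a flood fill over neighbouring coordinates, which SQL expresses badly.

# Each "row" of the tabularised image: (x, y, intensity).
pixels = [(0, 0, 10), (1, 0, 200), (2, 0, 210),
          (1, 1, 205), (5, 5, 220), (6, 5, 12)]

THRESHOLD = 128

# The easy part: subsetting to the bright pixels.
bright = {(x, y) for x, y, v in pixels if v > THRESHOLD}

# The hard part: grouping adjacent bright pixels into spots.
def flood(start, unvisited):
    spot, stack = set(), [start]
    while stack:
        x, y = stack.pop()
        if (x, y) in unvisited:
            unvisited.remove((x, y))
            spot.add((x, y))
            stack += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return spot

remaining = set(bright)
spots = []
while remaining:
    spots.append(flood(next(iter(remaining)), remaining))

print(spots)  # two spots: one of three pixels, one of a single pixel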

“But,” you cry, “I’d use a User Defined Function for that!” Yes, so would I. In my experience, all data can be squeezed into a table and analysed in a relational database system, but at some point the effort required makes you think about other, more suitable containers and alternative analytical languages.
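For a flavour of that escape hatch, here is a sketch using sqlite3 again; the is_bright function and its threshold are invented for illustration, and this is one way of doing it rather than the way.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Pixels (x INTEGER, y INTEGER, intensity INTEGER)")
conn.executemany("INSERT INTO Pixels VALUES (?, ?, ?)",
                 [(0, 0, 10), (1, 0, 200), (1, 1, 205)])

# Register a Python function so SQL can call it per row.
conn.create_function("is_bright", 1, lambda v: int(v > 128))

for row in conn.execute("SELECT x, y FROM Pixels WHERE is_bright(intensity)"):
    print(row)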

Given that we are defining Big Data as “not tabular”, we aren’t saying that all Big Data is similar in structure. So, in a diagram of all possible data, there is a subset where the structure is well defined (tabular data) and there is the rest – which we are now calling Big.

The name itself comes from the volume, which is usually huge, and that brings us neatly to the three “V”s that are often used to characterise Big Data:

  • Volume: Big Data often appears in huge volumes – think terabytes and petabytes
  • Velocity: It tends to come at you very fast – think Twitter feeds
  • Variety (of structure): see above

I have no (particular) problem with these three Vs, and I’ve even seen some additions:

  • Value: if it isn’t valuable, why are you storing and analysing it?
  • Veracity: It has to be accurate, otherwise your analysis is worthless

But I will admit to being slightly sceptical of definitions driven by the desire for absolute alliteration.

So, for me, despite the name, the most important feature of Big Data is its structure, with different classes of Big Data having very different structures.

With that definition, we can start to look at examples. A Twitter feed is Big Data; the census isn’t. Images, graphical traces, Call Detail Records (CDRs) from telecoms companies, web logs, social data and RFID output can all be Big Data. Lists of your employees, customers and products are not.

So how can you store and manipulate Big Data? The answer depends on the structure of your particular flavour, but take a look at the large – and increasing – number of NoSQL database systems out there, for example Cassandra, CouchDB and MongoDB.
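As a taste of the document-store approach, here is a hedged sketch using pymongo; it assumes a MongoDB server running on localhost, and the database, collection and field names are invented for illustration.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
tweets = client.demo.tweets

# No schema to declare: each document carries its own structure,
# and two documents need not share it.
tweets.insert_one({"user": "reg_reader", "text": "Big Data bites back",
                   "hashtags": ["bigdata", "nosql"]})
tweets.insert_one({"user": "reg_reader", "retweet_of": "el_reg"})

# Query by field, including values nested inside arrays.
for doc in tweets.find({"hashtags": "bigdata"}):
    print(doc["text"])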

Ultimately, it is worth remembering that Big Data and its associated database systems are not in competition with existing relational systems. The analysis of tabular data is not going away, but it was only ever part of the story.

In the 1970s and '80s we tackled tabular data because it is common and (relatively) easy to store and manipulate. I say "relatively easy" because it took us at least 30 years to develop a good understanding of tabular data and transactions.

Big Data has always been there; we just couldn’t process it very well. That’s now changing and we are finally taking on the much harder – but very rewarding and lucrative – job of tackling it. It’s a big job. ®

Mark Whitehorn holds the chair of analytics at the University of Dundee. His role involves working on data output from mass spectrometers, two-dimensional graphical traces of three-dimensional peaks that must be detected and their volumes calculated. The trick isn’t to do the sums; it’s to do them rapidly because another 8Gbyte output file is always coming.
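For a flavour of that kind of calculation – a sketch of the general idea, not Whitehorn’s actual pipeline – you can label the regions of a 2D trace that rise above a threshold and sum the intensities in each; this assumes numpy and scipy, and the trace and threshold are invented.

import numpy as np
from scipy import ndimage

trace = np.zeros((100, 100))
trace[40:44, 40:44] = 5.0   # one invented peak
trace[70:72, 20:22] = 3.0   # another

# Group above-threshold points into connected peaks, then sum each.
labels, n_peaks = ndimage.label(trace > 1.0)
volumes = ndimage.sum(trace, labels, index=range(1, n_peaks + 1))
print(n_peaks, volumes)  # 2 peaks and their summed intensities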
