Big Data: Why it's not always that big nor even that clever
And as for data scientists being sexy, well...
You may not realize it, but data is far and away the most critical element in any computer system. Data is all-important. It’s the center of the universe.
A managing director at JPMorgan Chase was quoted as calling data “the lifeblood of the company.” A major tech conference held recently (with data as its primary focus) included a presentation on how to become “a data-driven organization.”
The Harvard Business Review says “data scientist” will be “the sexiest job of the 21st century.” A separate recent article describing how Netflix is harvesting information about our every gesture, and may transform us from “happy subscribers to mindless puppets”, warned that “the sheer amount of data available to crunch is already phenomenal and is growing at an extraordinary rate.”
Reckless, clueless uses of the term 'Big Data'
All the above quotes come from articles touting, selling or gaping in awe at Big Data, this year’s Mother of All Tech Trends. If you’re a technologist, it’s easy to feel a little inadequate if you’re not singing its praises, which is all the more bewildering because no one seems to know exactly what it is. Well, that’s not quite true. Big Data, strictly speaking, is the product of several forces:
- The vast increase in the quantity of information being collected (and stored, and processed, and analyzed) due to the insatiable appetite of Big Brothers including Google, Facebook and Amazon.
- The heterogeneous nature of this information, which can come from online purchases, Facebook status updates, tweets, shared photos, and check-ins, among other places.
- The demand to crunch these mountains of data as quickly and efficiently as possible.
However, the term seems to get thrown around recklessly and cluelessly more often than not and, even when it’s used appropriately, applied much more widely than is warranted.
The three forces mentioned above are real. Google is trying to suck up every bit of information it possibly can, from whatever source, in an attempt to create profiles on as many people as possible. (This can be for good or evil: from products such as Google Now that “anticipate your needs before you do” to selling you as a package to advertisers — but I won’t get into the moral issues here.)
By definition, that torrent of data from every source in the world is not going to be neat, uniform and rectangular. So, yes, Google probably needs a special set of tools to deal with this data, which may be unlike any data processed in the past in volume and variety.
The best-known of these tools are Hadoop (an open-source framework for distributed storage and batch processing, not a database in the traditional sense) and MapReduce (a programming model, popularized by Google, for boiling heterogeneous data from multiple sources down to sets of key/value pairs). Using the MapReduce model, Google can break massive datasets into manageable chunks and process those chunks independently and statelessly on a server farm.
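To give a sense of how simple the model is, here is the classic word-count example as a toy sketch in plain Python — an illustration of the map/shuffle/reduce pattern, not Google's or Hadoop's actual implementation:

```python
from collections import defaultdict

# Toy illustration of the MapReduce model: map each record to
# (key, value) pairs, group ("shuffle") by key, then reduce each group.

def map_phase(record):
    # Emit a (word, 1) pair for every word in the input line.
    for word in record.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    # Collapse all values for one key into a single result.
    return (key, sum(values))

def map_reduce(records):
    groups = defaultdict(list)
    for record in records:                 # map + shuffle
        for key, value in map_phase(record):
            groups[key].append(value)
    return dict(reduce_phase(k, vs) for k, vs in groups.items())

counts = map_reduce(["big data big hype", "big claims"])
# counts == {"big": 3, "data": 1, "hype": 1, "claims": 1}
```

The point of the real thing is that the map and reduce steps are stateless, so the chunks can be farmed out across thousands of machines — but the logic itself, as the sketch shows, is not exotic.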
Is it true that this kind of data can’t be managed easily, quickly and without painful pre-processing using a relational database, the designated dinosaur of the Big Data crowd? Possibly.
Google's special needs
Is MapReduce the game-changing data-consolidation technology that its champions claim it is? Almost definitely not: The legitimacy of Google’s patent on the process has been questioned on the grounds that existing products can easily perform the same relatively simple functions. Basic MapReduce examples published on the web consist of a few dozen lines of Java code. There’s nothing particularly revolutionary going on here.
But let’s assume Google requires these tools to meet its very special needs. And let’s assume all existing tools and database frameworks are inadequate for their purposes. That doesn’t mean Big Data is something that (as its proponents claim) nearly every organization running a big-ish computer application has to confront and deal with using new database and software models.
Large quantities of data, even huge quantities of data, are nothing new. In the investment-banking world, high-frequency-trading systems have always had to handle tremendous numbers of transactions at speeds measured to the microsecond; market-data engines that store and process thousands of price ticks per second have existed for years.
Speaking recently to my friend Ken Caldeira, who runs a climate-science lab at the Carnegie Institution for Science at Stanford, I found out, not surprisingly, that he regularly has to deal with “petabytes of data.” Another colleague of mine, a Wall St. quant trained as a physicist who spent several years doing genome work in the 2000s, says that in his genomics research there were “staggering amounts” of data to analyze.
In the era of Big Data, larger-than-ever datasets are often cited as an issue that nearly everyone has to contend with, and for which the previous generation of tools is practically useless.
But for the most part, Caldeira and my quant friend use… Python scripts and C++. It’s true that many huge data-consumers now make use of massively parallel architecture, clusters, and the cloud, but this move has been going on for more than a decade and, as my quant friend points out, “people confuse doing things in the cloud with what you do in the cloud. Just because the data is in the cloud doesn’t mean you’re doing something different.” Using distributed databases for speed and redundancy makes sense no matter what kind of work you’re doing, given the ever-plummeting cost of hardware.
You're kidding, right?
"If a company that’s been around for years suddenly argues that it needs Big Data techniques to run its business, it must mean that either [...] or it's been hobbling along forever with systems that don’t quite work. Either of those claims would be hard to believe."
The second is all too believable, and is keeping me in a job right now...
It isn't what you've got, it's how you use it.
I talked to a recruitment consultant a while ago who pointed out that all the recruitment companies have gone "big data." That is, they do word frequency analysis on CVs and just search on a big pile of "stuff" and take the top CVs on the list.
So now you have to keep repeating keywords, add abbreviations in brackets and that sort of thing to make sure your CV ends up on page 1 of the search results.
They have replaced personal knowledge and relationships with a technical solution which will inevitably lead to poorer quality but greater quantity of words in people's CVs. I'd be surprised if people weren't already using white-on-white text to bump their CV's visibility to the search engine.
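A hypothetical sketch of why this kind of ranking is so trivially gamed: a naive term-frequency scorer (an assumed model of what these systems do, not any vendor's actual code) rewards sheer repetition, which is exactly the keyword-stuffing incentive described above:

```python
import re

# Hypothetical sketch: a naive keyword-frequency scorer of the kind
# described above. Repetition alone drives the ranking.

def score_cv(text, keywords):
    words = re.findall(r"[a-z+#]+", text.lower())
    return sum(words.count(k) for k in keywords)

honest = "Led a Java team; occasional Python scripting."
stuffed = ("Java Java Java developer. Java (Java) expert. "
           "Python Python Python Python.")

keywords = ["java", "python"]
print(score_cv(honest, keywords))   # 2
print(score_cv(stuffed, keywords))  # 9
```

The stuffed CV outranks the honest one four to one, without the scorer learning anything new about either candidate.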
By destructuring the data they've increased their storage costs, removed information from the system, and now they have to keep tweaking the systems to stop them being gamed. Sending a slightly irrelevant advert to someone is one thing, but making business decisions about personnel suitability based on this stuff is dangerous. The reason we have structure is that it organises data into easily understood information. A word-cloud from a comment box might be fine for an initial analysis of what people are talking about, but it doesn't tell you what they are saying: the data is there, but the information has been removed.
Every client I've ever worked for has used one or both of the phrases "We handle a huge amount of data" and "I bet you've never seen it this bad". Almost without exception they're processing a very normal amount of data, sometimes in very inefficient ways. The science behind MapReduce is far more important than that specific technology - often there are equally useful techniques that are better suited to a client's needs, however much they might want to install Hadoop etc.
"You may not realize it, but data is far and away the most critical element in any computer system. Data is all-important. It’s the center of the universe."
If you don't realise it then you probably shouldn't be working in IT.
+1 for sarky quotes. Later I'll buy a hot drink from a "coffee scientist".