Big Data: Why it's not always that big nor even that clever
And as for data scientists being sexy, well...
You may not realize it, but data is far and away the most critical element in any computer system. Data is all-important. It’s the center of the universe.
A managing director at JPMorgan Chase was quoted as calling data “the lifeblood of the company.” A major tech conference held recently (with data as its primary focus) included a presentation on how to become “a data-driven organization.”
The Harvard Business Review says “data scientist” will be “the sexiest job of the 21st century.” A separate recent article describing how Netflix is harvesting information about our every gesture, and may transform us from “happy subscribers to mindless puppets”, warned that “the sheer amount of data available to crunch is already phenomenal and is growing at an extraordinary rate.”
Reckless, clueless uses of the term 'Big Data'
All the above quotes come from articles touting, selling or gaping in awe at Big Data, this year’s Mother of All Tech Trends. If you’re a technologist, it’s easy to feel a little inadequate if you’re not singing its praises, which is all the more bewildering because no one seems to know exactly what it is. Well, that’s not quite true. Big Data, strictly speaking, is the product of several forces:
- The vast increase in the quantity of information being collected (and stored, and processed, and analyzed) due to the insatiable appetite of Big Brothers including Google, Facebook and Amazon.
- The heterogeneous nature of this information, which can come from online purchases, Facebook status updates, tweets, shared photos, and check-ins, among other places.
- The demand to crunch these mountains of data as quickly and efficiently as possible.
However, the term seems to get thrown around recklessly and cluelessly more often than not and, even when it’s used appropriately, applied much more widely than is warranted.
The three forces mentioned above are real. Google is trying to suck up every bit of information it possibly can, from whatever source, in an attempt to create profiles on as many people as possible. (This can be for good or evil: from products such as Google Now that “anticipate your needs before you do” to selling you as a package to advertisers — but I won’t get into the moral issues here.)
By definition, that torrent of data from every source in the world is not going to be neat, uniform and rectangular. So, yes, Google probably needs a special set of tools to deal with this data, which may be unlike any data processed in the past in volume and variety.
The best-known of these tools are Hadoop, an open-source framework for distributed storage and processing of non-relational data, and MapReduce, a programming model developed at Google that maps heterogeneous data from multiple sources into key/value pairs and then reduces those pairs to aggregated results. Using MapReduce (or Hadoop, which includes an open-source implementation of it), massive datasets can be broken into manageable chunks, and those chunks processed independently and statelessly on a server farm.
Is it true that this kind of data can’t be managed easily, quickly and without painful pre-processing using a relational database, the designated dinosaur of the Big Data crowd? Possibly.
Google's special needs
Is MapReduce the game-changing data-consolidation technology that its champions claim it is? Almost certainly not: the legitimacy of Google’s patent on the process has been questioned on the grounds that existing products can easily perform the same relatively simple functions. Basic MapReduce examples published on the web consist of a few dozen lines of Java code. There’s nothing particularly revolutionary going on here.
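To see just how unrevolutionary, here is the canonical word-count example — the "hello world" of MapReduce — sketched in a few lines of plain Python (a toy illustration of the model, not Google's or Hadoop's implementation):

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (key, value) pair for every word occurrence.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each key's values to a single result.
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 3
print(counts["fox"])  # 2
```

The clever part of the real thing is not this logic but the plumbing around it: the framework runs many mappers and reducers in parallel across a cluster and handles machine failures. The data transformation itself is as simple as it looks.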
But let’s assume Google requires these tools to meet its very special needs. And let’s assume all existing tools and database frameworks are inadequate for its purposes. That doesn’t mean Big Data is something that (as its proponents claim) nearly every organization running a big-ish computer application has to confront and deal with using new database and software models.
Large quantities of data, even huge quantities of data, are nothing new. In the investment-banking world, high-frequency-trading systems have always had to handle tremendous numbers of transactions at speeds measured to the microsecond; market-data engines that store and process thousands of price ticks per second have existed for years.
Speaking recently to my friend Ken Caldeira, who runs a climate-science lab at the Carnegie Institution for Science at Stanford, I found out, not surprisingly, that he regularly has to deal with “petabytes of data.” Another colleague of mine, a Wall St. quant trained as a physicist who spent several years doing genome work in the 2000s, claims that in his genomics research there were “staggering amounts” of data to analyze.
In the era of Big Data, larger-than-ever datasets are often cited as an issue that nearly everyone has to contend with, and for which the previous generation of tools is practically useless.
But for the most part, Caldeira and my quant friend use… Python scripts and C++. It’s true that many huge data-consumers now make use of massively parallel architecture, clusters, and the cloud, but this move has been going on for more than a decade and, as my quant friend points out, “people confuse doing things in the cloud with what you do in the cloud. Just because the data is in the cloud doesn’t mean you’re doing something different.” Using distributed databases for speed and redundancy makes sense no matter what kind of work you’re doing, given the ever-plummeting cost of hardware.
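The quant’s point is easy to demonstrate: a plain Python script can chew through a dataset far larger than memory by streaming it and keeping only running aggregates — no new database model required. A minimal sketch (the file name and single-number-per-line format are hypothetical):

```python
def running_stats(lines):
    # One pass over an arbitrarily large stream of numeric records;
    # memory use stays constant no matter how big the input is.
    count, total, peak = 0, 0.0, float("-inf")
    for line in lines:
        value = float(line)
        count += 1
        total += value
        peak = max(peak, value)
    return count, total / count, peak

# In practice the iterable would be an open file, read lazily:
#   with open("ticks.csv") as f:
#       count, mean, peak = running_stats(f)
# Here we feed it a small in-memory sample instead.
count, mean, peak = running_stats(["1.5", "2.5", "4.0"])
print(count, peak)  # 3 4.0
```

Whether those lines arrive from a local disk, a cluster node, or the cloud changes where the script runs, not what it does — which is exactly the confusion the quant was describing.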