Original URL: http://www.theregister.co.uk/2010/05/17/doug_cutting_hadoop/

Hadoop takes Big Data beyond Java

Stuffed elephant mates with Python

By Gavin Clarke

Posted in Software, 17th May 2010 21:12 GMT

Apache Hadoop, the open-source version of Google's MapReduce architecture named after a kid's stuffed elephant, is still working to tame the 1.0 beast.

Only a year ago, Hadoop hit its first stable release, a release that - in true open-source fashion - couldn't be called version 1.0 and instead carried two decimal points: version 0.20.0.

Despite that, Hadoop is running search on some of the internet's largest sites. Lovers on eHarmony, job seekers on LinkedIn and social networkers on Fox Interactive Media - Rupert Murdoch's media arm running MySpace, Photobucket, and Rotten Tomatoes - are getting their queries answered thanks to Hadoop.

During a recent interview with The Reg, Hadoop co-founder Doug Cutting confessed his surprise at Hadoop's level of uptake and success. "I started Nutch [Hadoop's precursor] trying to think about web search ... I was trying to be provocative," he said.

"I didn't see us outside the production environment of building these big web indexes."

And yet, a year since that first stable version and months after Cutting discussed Hadoop 1.0 at ApacheCon last year, version 1.0 - promised for 2010 - is proving elusive.

The project is working towards a goal that's a must for any piece of technology that wants to be taken seriously in business: the ability to upgrade without injecting breaking changes, changes that force users to re-install their software or cause data loss. In Hadoop's case, the goal is to let users upgrade parts of a data center cluster without uprooting the whole thing.

Different priorities

Cutting says that his former employer, Yahoo!, a huge sponsor and early fan whose support made Hadoop possible for others, isn't helping. Yahoo! remains Hadoop's single largest contributor, and that's a challenge, Cutting says, because Yahoo! has a slightly different focus than the rest of the project.

The web's second largest search property, he says, has been working on security updates so people can share large clusters in private, without others knowing what they're doing.

While security is a goal of Hadoop 1.0, Cutting says that Yahoo!'s focus comes at the expense of work on a broader front. "That's taken some of the steam out of the 1.0 efforts," Cutting said. "Hadoop 1.0 has not made as much progress as we'd have hoped."

It's an interesting twist.

Yahoo!'s intervention in Hadoop proved decisive in the early days, helping boost the project to success. With Google thrashing the company in the 2000s, Yahoo! clearly saw something in Hadoop that few others had noticed. Yahoo! saw it not just as a web search indexing project, but as an architecture that could handle distributed number crunching for all sorts of services.

That architecture is based on Google's own distributed file system (GFS) and MapReduce. Before GFS and MapReduce, Cutting had built a full-scale web search engine called Nutch, something that started in 2002. However, things hit a wall. "We had something that kind of worked and was in theory scalable to the entire web, but was very painful to use on anything more than 100 million web pages," Cutting said.

From Nutch to Hadoop

He mimicked GFS and MapReduce to break up large chunks of data into small pieces and search them quickly across thousands of servers, building an implementation using open source. Again, it worked - to a point. "We could do demos on 20 machines and actually get some work done, but it wasn't ready to scale to thousands of machines and it wasn't horribly reliable," Cutting said. "This reliability thing was really hard work."
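The map and reduce phases Cutting describes are easiest to see in miniature. The sketch below is the standard word-count illustration of the model in Python - not anything from Nutch or Hadoop itself. On a real cluster, Hadoop splits the input across machines, runs the mapper on each split, and sorts and shuffles the intermediate pairs to reducers; the final lines here merely simulate that locally.

```python
#!/usr/bin/env python
# Illustrative word count in the MapReduce style. The task is the
# canonical textbook example; the pipeline at the bottom stands in
# for the distributed split/shuffle/sort work Hadoop does itself.

import sys
from itertools import groupby


def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word, 1


def reducer(pairs):
    """Reduce phase: sum the counts for each word. The pairs must
    arrive grouped by key, which the sort below guarantees - just as
    Hadoop's shuffle phase does on a cluster."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)


if __name__ == "__main__":
    # Local stand-in for the cluster: read stdin, sort the mapper's
    # output by key, and feed it to the reducer.
    intermediate = sorted(mapper(sys.stdin))
    for word, total in reducer(intermediate):
        print("%s\t%d" % (word, total))
```

Saved as, say, wordcount.py, a run of `echo "to be or not to be" | python wordcount.py` prints each word with its count; on a cluster, the same two functions would run across thousands of machines.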

It was then that Yahoo! stepped in, offering the engineers and servers needed to iron out the problems. But Yahoo! had found another use for Hadoop: quickly analyzing huge piles of data distributed in silos of servers and web properties. With Yahoo!'s vice president of Hadoop software development, Eric Baldeschwieler, Cutting split the distributed computing part out of Nutch and put it into Hadoop.

Cutting said researchers at Yahoo! wanted access to lots of data sets for things like ads served and web server loads. "If you were a researcher in Yahoo! asking how to make ads more relevant, you didn't have all the data in one place," he said. "They started pulling data together in one place to get some early users - and they loved it."

Suddenly, Yahoo! was analyzing ever-changing data on its pages and making updates in hours that had previously taken weeks, and it was shuffling ads around to follow the latest click traffic.

"What it's all about is getting people a handle on running computation on terabytes of data and getting an answer back in a small amount of time reliably," Cutting said.

With Yahoo! focused on solving cluster security, Cutting is still pushing Hadoop forward and trying to crack the problem of breaking changes. He also wants to take Hadoop a step further by attracting non-Java developers. He's tackling both through the Avro project.

Beyond Java

Avro is a data interchange format intended to let applications call and process data even after the application has been updated or changed. The goal is also to let applications for Hadoop be written in languages other than Java, and to give Hadoop native MapReduce and HDFS clients in languages like Python, C, and C++.
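A minimal sketch of what that looks like in practice, using the Apache Avro Python library (the names follow Avro's own getting-started examples, though the exact API spelling has shifted between releases, and the PageView record is an invented example): the schema, written in plain JSON, is embedded in every data file, so a reader can always recover the schema the writer used - the property that lets data outlive changes to the application that wrote it.

```python
import json

import avro.schema
from avro.datafile import DataFileReader, DataFileWriter
from avro.io import DatumReader, DatumWriter

# Avro schemas are plain JSON; PageView is an invented example record.
SCHEMA = avro.schema.parse(json.dumps({
    "type": "record",
    "name": "PageView",
    "fields": [
        {"name": "url", "type": "string"},
        {"name": "hits", "type": "long"},
    ],
}))

# Write a record; the schema travels in the file's header.
writer = DataFileWriter(open("pageviews.avro", "wb"), DatumWriter(), SCHEMA)
writer.append({"url": "http://example.com/", "hits": 42})
writer.close()

# Read it back; the reader recovers the writer's schema from the file,
# so it needs no compiled classes and no prior knowledge of the layout.
reader = DataFileReader(open("pageviews.avro", "rb"), DatumReader())
for record in reader:
    print(record)
reader.close()
```

Because the schema rides along with the data rather than being compiled into the application, the same trick works from any language with an Avro library - which is exactly the door Cutting wants to open for Python, C, and C++ developers.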

Meanwhile, Cutting has followed other open sourcers by joining a company that's trying to sell support and services to customers using his pet technology. He joined Cloudera in August 2009. Despite Hadoop's use at some of the largest sites online, Cutting believes Hadoop is a good fit even if you're running just 20-node clusters, and that it's easier than running a database server when it comes to crunching huge piles of data. Cloudera customers include Netflix and Samsung.

And if you don't want to run Hadoop yourself, you can deploy on cloud providers like Amazon and Rackspace that are running Hadoop. "It's a little harder than spreadsheet programming, but there are tools that are making it simpler," Cutting reassured us. "The whole goal is to make it fairly simple from the outside and keep the complexity inside."

Cutting may never have planned for where Hadoop is today, but he's not letting delays to version 1.0 obstruct its future either.®