Original URL: http://www.theregister.co.uk/2013/02/27/revolution_analytics_r_hadoop_integration/

Revolution weaves predictive analytics into Hortonworks Hadoop

Teaching an elephant to prognosticate like a pirate – R!

By Timothy Prickett Morgan

Posted in Cloud, 27th February 2013 01:22 GMT

Revolution Analytics, which commercializes the open source R statistical programming language with closed-source proprietary extensions, has come up with a bunch of ways to integrate the Hadoop big data muncher with R. And it is testing the integration of its R Enterprise RevoScaleR predictive algorithms in conjunction with Yahoo! spinoff Hortonworks, one of the big five – er, now six, including Intel – Hadoop disties.

Back in September 2011, Revolution Analytics announced an R connector for Hadoop, originally called RevoConnectR for Apache Hadoop but now called RHadoop, that allows statisticians familiar with R to do analysis on data stored in the Hadoop Distributed File System (HDFS) or the HBase columnar, non-relational database that rides atop HDFS.*

With this RHadoop connector, you put the R stats engine on each node in a Hadoop cluster and use it to do the data gathering. As Revolution Analytics VP of marketing and community David Smith explains it to El Reg, you are writing MapReduce routines in R instead of Java, using the same methods you would if all of the data were stored on your workstation.
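To give a flavor of that, here is a minimal sketch of an R-language MapReduce job using the rmr2 package from the RHadoop project. This is an illustration only: it assumes rmr2 is installed on the cluster nodes and the Hadoop environment variables are configured; the toy job (counting integers by their remainder mod 10) stands in for real data-distillation work.

```r
# Assumes the rmr2 package from the RHadoop project is installed and
# the Hadoop environment (HADOOP_CMD, HADOOP_STREAMING) is configured.
library(rmr2)

# Push a local vector of numbers into HDFS.
ints <- to.dfs(1:1000)

# A MapReduce job written entirely in R:
# map emits (remainder mod 10, 1); reduce sums the counts per key.
result <- mapreduce(
  input  = ints,
  map    = function(k, v) keyval(v %% 10, 1),
  reduce = function(k, vv) keyval(k, sum(vv))
)

# Pull the results back out of HDFS as an ordinary R object.
from.dfs(result)
```

The point Smith makes is visible here: the map and reduce steps are plain R functions, so a statistician never touches Java.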

This functionality, which has been tested against Cloudera CDH3 and CDH4 and IBM BigInsights 2, is used for data distillation work, Smith tells El Reg, such as combing through Twitter feeds to gather up relevant tweets for sentiment analysis.

Sometimes, though, you want to do predictive analytics on a data set, and that is where R Enterprise RevoScaleR comes in. This feature contains the proprietary code developed by Revolution that parallelizes the R engine, making it faster on multithreaded servers and letting it run across clusters.

It also includes 44 predictive analytics routines that are commonly used in conjunction with the R programming language, including predictions for fitted models, generalized linear models, K-Means clustering, and classification or regression tree analysis, among many others. These algorithms run on your workstation and you have to stream the data out of a Hadoop cluster to run them against it.
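A couple of those routines look like this in practice. The function names below (rxLinMod for fitted linear models, rxKmeans for K-Means clustering) are from RevoScaleR's documented API, but the data file and model formulas are invented for illustration; as the article notes, in the pre-Hortonworks setup these run on the workstation against data streamed out of the cluster.

```r
# Sketch only: RevoScaleR ships with Revolution R Enterprise.
# "airline.xdf" and the formulas are hypothetical example inputs.
library(RevoScaleR)

# Fit a linear model on an XDF file using the external-memory,
# parallelized algorithm -- the data need not fit in RAM.
fit <- rxLinMod(ArrDelay ~ DayOfWeek, data = "airline.xdf")

# K-Means clustering with the parallel RevoScaleR implementation.
clusters <- rxKmeans(~ ArrDelay + CRSDepTime, data = "airline.xdf",
                     numClusters = 5)
```

The Hortonworks port described next moves exactly this class of computation onto the Hadoop nodes themselves, killing the round trip.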

As a result of Revolution Analytics' work with Hortonworks and its Data Platform 1.2 software, these 44 predictive analytics routines have been ported to Java and C++ and can be distributed to run locally on Hadoop nodes. And now, instead of having to stream a dataset that has been created using MapReduce back to the workstation to use these predictive routines, you can run them right there on the Hadoop cluster itself.

This integration with Hortonworks Data Platform 1.2 is in demonstration mode now, and will be in limited availability for HDP – as well as for Cloudera's CDH3 and CDH4, IBM's BigInsights 2, and the new Intel Distribution for Apache Hadoop – during the third quarter. General availability is expected in the fourth quarter, but it is not expected that Revolution Analytics will support the open source Apache Hadoop stack with this predictive analytics integration.

Revolution Analytics' R Enterprise has a workstation edition that is aimed at a single user on a single workstation PC, which costs $1,000 per machine per year for a license. The server edition, which can be used by an unlimited number of end users, costs $30,000 per year for an eight-core x86 server. So on a Hadoop cluster, that is sure gonna mount up (although there are surely volume discounts, this being the IT racket and all).
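To put that list price in context: a hypothetical 20-node cluster of eight-core servers would run $600,000 per year at list – 20 nodes × $30,000 – before any of those volume discounts kick in.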

The R Enterprise tool has also been certified to run on Amazon EC2 and Microsoft Azure clouds, and that might be a cheaper route if you are only going to do intermittent statistical analysis using R and Hadoop.

Revolution Analytics is privately held and doesn't talk much about its internals, but the company has 55 employees now and over 250 customers. Smith tells El Reg that the company had its best year to date in 2012 and expects to double headcount and revenues in 2013.

At least half of the customers buying its wares have some sort of interaction with Hadoop, so these connectors and algos are important. The company has raised $17.6m in three rounds of funding, with its Series C still open and another chunk coming in soon to boost that number. Intel Capital and North Bridge Venture Partners have kicked in the dough so far.

So what is the big change driving business this year? "Last year, people were trying to figure out how to use the Hadoop infrastructure," says Michele Chambers, an ex-IBMer who just joined the company. "This year, they are building enterprise applications, and we are seeing them doing real stuff on top of Hadoop." ®


* HBase is an open source program based on the ideas behind Google's BigTable unstructured data store, just as Hadoop and its MapReduce algorithm borrow ideas from the back-end Google built for its search engine many years ago.