Hadoop's little buddy Nutch 2.0 gulps down web's big data

Apache projects united

Hadoop daddy Doug Cutting's Nutch, the open-source web-search engine written in Java, has been updated to crawl through piles of big data on the web.

The Apache Software Foundation (ASF) has released Nutch 2.0, featuring a data-abstraction technique that plugs into the big-data stores and frameworks Apache Accumulo, Avro, Cassandra, HBase and, yes, the Hadoop Distributed File System (HDFS).

The abstraction layer employed is yet another Apache project, Gora – a framework that provides an in-memory data model and persistence layer for big data.

Gora works with NoSQL column stores, key-value stores and document stores, as well as with RDBMSes.

The ASF website where Gora makes its home states its goal as becoming "the standard data representation and persistence framework for big data".
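
For a flavour of what that abstraction looks like from the Java side, here is a minimal sketch of reading and writing through a Gora DataStore. It assumes the Gora 0.2-era DataStoreFactory API, Nutch 2.0's Avro-generated WebPage class on the classpath, and an HBase back end selected in gora.properties – treat the class and property names as illustrative rather than gospel.

    // A minimal sketch of Gora's DataStore abstraction, as Nutch 2.0 uses it.
    // The backing store is chosen in gora.properties rather than in code, e.g.:
    //   gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
    // Swapping in the Cassandra, Accumulo or Avro store class changes the back end.

    import org.apache.gora.store.DataStore;
    import org.apache.gora.store.DataStoreFactory;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.nutch.storage.WebPage;  // Avro-generated persistence bean (assumed on the classpath)

    public class GoraSketch {
        public static void main(String[] args) throws Exception {
            // Obtain a store keyed by String – Nutch keys pages by reversed URL
            DataStore<String, WebPage> store =
                    DataStoreFactory.getDataStore(String.class, WebPage.class, new Configuration());

            WebPage page = new WebPage();          // fields are defined by the Avro schema
            store.put("com.example:http/", page);  // persists to whichever back end is configured
            store.flush();

            WebPage fetched = store.get("com.example:http/");  // read back through the same API
            System.out.println("Round-tripped page: " + (fetched != null));
            store.close();
        }
    }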

Nutch 2.0 also builds on Solr, Apache's open-source search server, adding a crawler and a link-graph database, with parsing support handled by the Apache Tika project.
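
The Tika parsing it leans on is not much code either: the snippet below is a rough sketch using Tika's facade API to sniff a document type and pull out plain text. The file name is made up, and Nutch's own parse plugins wire Tika in through their plugin framework rather than like this.

    import java.io.File;
    import org.apache.tika.Tika;

    public class TikaSketch {
        public static void main(String[] args) throws Exception {
            // Tika's facade auto-detects the content type (HTML, PDF, Office docs...)
            // and extracts plain text for indexing – the job Nutch hands off to Tika.
            Tika tika = new Tika();
            File fetched = new File("fetched-page.html");  // illustrative path only
            System.out.println("Type: " + tika.detect(fetched));
            System.out.println("Text: " + tika.parseToString(fetched));
        }
    }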

Cutting wrote Nutch in 2003 with Mike Cafarella, while the pair were also developing Hadoop – using Google's MapReduce distributed data-processing framework to make the system work at scale. Cutting also wrote Lucene, but it was Hadoop that made his name, and he was brought in by Yahoo! to implement the system on its servers.

Nutch has since been somewhat eclipsed by Hadoop, which is used by Amazon.com, Facebook and Yahoo!, to name just three web giants. Search engines built with Nutch include Krugle and mozDex. ®
