Open-sourcers promise cloud elephant won't trample your code

Hadoop buffed for 2010 'completion'

ApacheCon 09 Popular grid computing platform Hadoop could grow up next year, with network authentication and features to stop brand-new code breaking users' existing applications.

Also on the roadmap for the open-source architecture - used by Yahoo!, eHarmony, LinkedIn, and Fox Interactive Media among others - is support for dynamic languages and languages other than Java running natively on the client.

Doug Cutting, founder of the Apache Hadoop project, outlined these as his 2010 goals for the hyper-scale elephant at ApacheCon in Oakland, California.

The goals will be delivered in what he hopes will become version 1.0 of Hadoop.

Cutting, and co-presenter and Yahoo! software architect Owen O'Malley, framed the goals in a discussion of what's needed to help Hadoop cross the chasm - move from use by early adopters to uptake by a wider audience.

Hadoop has been in gestation since 2002, when the work began as part of Nutch. Cutting joined Yahoo! in 2006, and Hadoop was spun out of Nutch shortly afterwards. The Java distributed computing framework is today in use at massive sites crunching huge amounts of data.

Despite its strong initial success, issues remain that people have managed to work around or ignore, but that are now becoming a problem as users throw more data, processing, and computing power at the framework and as Hadoop becomes a part of day-to-day computing life.

Some problems have been solved recently. These include a job scheduling system so adopters can set and enforce service-level agreements on their traffic.

Another source of pain was the fast rate of new Hadoop releases. O'Malley noted that many users stuck with version 0.18 and skipped version 0.19. The answer was to slow the release cycle - something that kicked in during the Hadoop 0.2x series, with releases now coming every nine months.

Challenges remain, and their solutions are what Cutting called the "hallmarks" of Hadoop 1.0.

Security has become an issue Hadoop can no longer ignore as organizations and people put more of their personal data into the grids Hadoop is running. The goal now is Kerberos-based network authentication in Hadoop 1.0, used for traffic on unsecured, public networks. O'Malley cautioned that this is a "big effort" that would take 24 person-months, indicating Kerberos might not be finished in version 1.0.
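For illustration only, the sketch below shows roughly how a client might authenticate once Kerberos support lands. The configuration property and UserGroupInformation calls are the ones Hadoop later shipped with, not anything shown in the talk, and the principal and keytab path are invented for the example.

// Hypothetical sketch, not part of the 2009 talk: property name and calls
// are from later secure-Hadoop releases; principal/keytab values are made up.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Switch client authentication from the default "simple"
        // (trust-the-username) mode to Kerberos.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Log in from a keytab rather than a password, so HDFS requests
        // and submitted jobs carry a verified identity.
        UserGroupInformation.loginUserFromKeytab(
                "analyst@EXAMPLE.COM", "/etc/security/keytabs/analyst.keytab");
    }
}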

Another issue is breaking changes - as new versions of Hadoop are delivered, the changes break or don't work with users' existing APIs.

Cutting said one goal of 1.0 is better backwards compatibility that lasts for "a couple of years", along with compatible remote procedure calls (RPCs). Compatible RPCs will let users update just part of their cluster if they choose, instead of having to update the entire cluster.

Also, the goal is to use RPC so Hadoop can support dynamic languages on the client and, more generally, to let languages execute natively without needing to go through Java.

The answer to the breaking-changes, RPC, and dynamic-language questions is Avro - a data serialization system from Cutting's new employer, Cloudera.

Avro is expressive. It's small. And it's fast. Under Avro, the schema is stored with the data but is also factored out of individual instances. Arbitrary data types can be read and written without generating and loading code.

Furthermore, Avro includes a file format and a textual encoding for data that handles versioning. An Avro RPC framework, meanwhile, is being built that will talk to native languages, so these languages no longer need to converse with Hadoop through Java.
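As a rough illustration of the schema-with-data idea, here is a minimal sketch using Avro's generic API. The record schema and field names are invented for the example, and the API shape is from later Avro releases than existed at the time; the point is that the reader needs no generated classes, because the schema travels inside the data file.

// Hypothetical example: "PageView" schema and field names are invented.
import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroSketch {
    public static void main(String[] args) throws Exception {
        // The schema is plain JSON and is embedded in the file it describes.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"PageView\",\"fields\":["
                + "{\"name\":\"url\",\"type\":\"string\"},"
                + "{\"name\":\"hits\",\"type\":\"int\"}]}");

        GenericRecord view = new GenericData.Record(schema);
        view.put("url", "http://example.com/");
        view.put("hits", 42);

        File file = new File("pageviews.avro");
        DataFileWriter<GenericRecord> writer =
                new DataFileWriter<GenericRecord>(new GenericDatumWriter<GenericRecord>(schema));
        writer.create(schema, file);   // writes the schema into the file header
        writer.append(view);
        writer.close();

        // Reading back: no generated or loaded classes - the schema stored
        // with the data drives deserialization of each record.
        DataFileReader<GenericRecord> reader =
                new DataFileReader<GenericRecord>(file, new GenericDatumReader<GenericRecord>());
        for (GenericRecord record : reader) {
            System.out.println(record.get("url") + " -> " + record.get("hits"));
        }
        reader.close();
    }
}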

One thing not likely to be solved in Hadoop 1.0 next year is the fact that the main node remains a single point of failure.

O'Malley said it's rare for a main node to actually fail - in 15 years, it's never happened once at Yahoo! However, O'Malley noted, it does take three hours for a main node to recover when it crashes, so the situation is not ideal.

O'Malley said it might take up to a year and a half before this problem is solved in Hadoop, but this would depend on how urgent it becomes for someone to actually submit a patch. ®
