Hadoop: Making Linux gobble big data

Growing penguins need petabytes to feast on

More kinds of data chewing

"As the data in Hadoop becomes more valuable, you will see other forms of computation moving to that data, not just MapReduce," said Collins. Which is funny, considering that the whole mantra of MapReduce was to move computation to data, not the other way around as data processing systems have been doing since the beginning of the computer era in more than six decades ago.

As the Hadoop stack has grown in complexity, the core use cases for the software have expanded, too. Now you can do batch reporting and more sophisticated data processing, and you can also use Hadoop to gather up log files and do real-time systems management. (This is, in fact, where many companies are cutting their teeth on Hadoop before they start using their customer data.) Companies are also using it for content serving and doing real-time aggregates and counters, and oddly enough, Hadoop is becoming a kind of storage controller. "As people use Hadoop for a long time, more of the data gets cold and it starts looking like storage," said Zedlewski.
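
The real-time aggregates and counters case typically leans on HBase's atomic counters rather than on batch MapReduce. A hedged sketch, assuming a running HBase cluster and a pre-created table called "metrics" with a column family "c" (both names are ours, for illustration):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PageHitCounter {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("metrics"))) {
      // Atomically bump the hit counter for one page. The increment is
      // done server-side, so many concurrent writers can hammer the same
      // cell without a read-modify-write race.
      long hits = table.incrementColumnValue(
          Bytes.toBytes("page:/index.html"),  // row key (illustrative)
          Bytes.toBytes("c"),                 // column family
          Bytes.toBytes("hits"),              // qualifier
          1L);                                // increment amount
      System.out.println("hits so far: " + hits);
    }
  }
}
```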

Looking ahead, Collins said that getting consistency in the Hadoop stack, regardless of who puts together the distro and sells support for it, was going to be a major effort, all brought under the auspices of the BigTop effort. Many components in the Hadoop stack shown above have different interfaces and release support levels, which makes it a bit of a nightmare to actually put together a distribution.

You still have to make compromises and choices, and that is not just bad for business customers who don't want to do Hadoop stack integration themselves; it is also bad for business for the Hadoop disties because it increases their support costs. There's also a lot of redundancy among the stack components, which only time will shake out. Meanwhile, HBase has cross-data centre replication, but the underlying HDFS does not. That needs to change, not only for the biggest Hadoop users but for any company that wants a hot-site backup of its Hadoop operations. HBase is also expected to get development frameworks to make it friendlier to developers. And because businesses are crazy about security, they want Hadoop to get a more granular security model with access control lists.
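
For a flavour of the granularity businesses are after: HDFS did eventually grow POSIX-style ACLs alongside its basic owner/group/other permissions. A sketch of the kind of call involved, using the HDFS FileSystem API, where the path and user name are hypothetical:

```java
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class GrantReadAcl {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/data/clickstream");  // hypothetical HDFS path

    // Grant user "alice" (hypothetical) read access on top of the basic
    // owner/group/other bits -- finer-grained than chmod alone allows.
    AclEntry entry = new AclEntry.Builder()
        .setScope(AclEntryScope.ACCESS)
        .setType(AclEntryType.USER)
        .setName("alice")
        .setPermission(FsAction.READ)
        .build();
    fs.modifyAclEntries(path, Collections.singletonList(entry));

    // Print the resulting ACL for the path.
    System.out.println(fs.getAclStatus(path));
  }
}
```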

The elephant is not exactly wearing a pinstripe suit and wingtips, but it is putting on a pair of khakis and a decent shirt. Unlike many of the Hadoop geeks presenting at the conference, in fact.

The other interesting trend Collins discussed is the underlying hardware. It will soon be common to have a Hadoop host with 40, 64, or 80 cores, and companies are looking at what happens with Hadoop clusters when they move to 10GE or 40GE networks. "One host is now more powerful than what a whole rack of servers was when Google got started," said Collins.

It is also common to have server nodes with 48TB or 60TB of capacity using fat SATA disks. "We even have people running entire Hadoop clusters with just flash," said Collins. Hadoop users are looking at how to make clusters multi-tenant and how server virtualization might fit in, both to accomplish this and to ease the underlying management of the servers. Companies are interested in low-power x86 processors to boost the node density of their clusters, they want scalable and fault-tolerant Hadoop name nodes, and they are even contemplating how to get MapReduce algorithms to work on GPU coprocessors.

This latter effort is being spearheaded by the oil and gas industry, which already has GPUs in their clusters, said Zedlewski, adding that "this is still a pretty bleeding edge use case". ®
