
Using Hadoop for data on Google's cloud? Google would rather you didn't

And it's got just the replacement for it: a shiny 'Google Cloud Storage Service'


Google wants to shift heavy users of its cloud services away from an open-source, community-developed filesystem and onto its own proprietary Colossus tech.

The upgrade came in a blog post on Tuesday, in which the web overlord announced that admins could now store Hadoop-destined data directly in its closed-source, Colossus-based "Google Cloud Storage Service", and threw mud at the traditional Hadoop Distributed File System (HDFS) plugin.

The service, we're told, provides a more efficient connector between Google's cloud storage and compute services, and represents another advance in the Chocolate Factory's rent-a-server infrastructure which competes with Amazon Web Services and Windows Azure.
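For the curious, the switch is mostly a matter of pointing Hadoop at gs:// paths rather than hdfs:// ones. The sketch below assumes the Cloud Storage connector JAR is on the job's classpath and uses a hypothetical bucket name; the fs.gs.impl property and connector class name are as commonly documented for the connector, but treat the exact wiring as an assumption rather than gospel.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: a stock Hadoop client listing objects in a Google Cloud Storage
// bucket via the gs:// scheme instead of an hdfs:// namenode.
// "my-analytics-bucket" is a hypothetical bucket this job's credentials can read.
public class GcsInsteadOfHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Register the connector's FileSystem implementation for gs:// URIs
        // (assumed property name; normally set once in core-site.xml).
        conf.set("fs.gs.impl",
                 "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem");

        FileSystem gcs = FileSystem.get(URI.create("gs://my-analytics-bucket/"), conf);
        for (FileStatus status : gcs.listStatus(new Path("gs://my-analytics-bucket/logs/"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
    }
}

Existing MapReduce jobs would, in principle, carry on unchanged: the input and output paths just swap their scheme from hdfs:// to gs://.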

Hadoop is an open-source data analysis platform based on ideas outlined in the Google File System and MapReduce papers, which came out of Google in the early 2000s.

Since Hadoop's genesis at Yahoo! in the mid-2000s, it has become a standard component of any data analyst's open-source toolkit, and its development is stewarded by companies including Cloudera and Hortonworks.

Google, though, would prefer it if users of its cloud opted for the closed-source Colossus-based Google Cloud Storage. To tempt them over to the system, it has listed some of the benefits of using Colossus over HDFS. These benefits, according to Google, include "no storage management overhead", "high data availability", and "quick start up."

Colossus has multiple master nodes, which gets around the single-point-of-failure problems that bedevilled early HDFS deployments and their lone NameNode. It also uses Reed-Solomon erasure codes for error correction, which, Google says, "achieve similar resilience to failures compared to replication, though with less storage overhead."
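The storage saving is simple arithmetic. The back-of-the-envelope sketch below uses an assumed Reed-Solomon layout of six data chunks plus three parity chunks (Google hasn't said what Colossus actually runs) and compares its raw-byte overhead with HDFS's classic three-way replication.

// Back-of-the-envelope comparison: raw bytes stored per byte of user data.
// The RS(6,3) layout is an assumption for illustration, not Colossus's real geometry.
// 3x replication survives the loss of two copies; RS(6,3) survives the loss of
// any three of its nine chunks, yet stores half as many raw bytes.
public class StorageOverhead {
    public static void main(String[] args) {
        double replicationFactor = 3.0;   // classic HDFS default: three full copies
        int dataChunks = 6;               // assumed RS layout: six data chunks...
        int parityChunks = 3;             // ...plus three parity chunks

        double replicationOverhead = replicationFactor;                              // 3.0x
        double erasureOverhead = (dataChunks + parityChunks) / (double) dataChunks;  // 1.5x

        System.out.printf("3x replication stores %.1f bytes per user byte%n", replicationOverhead);
        System.out.printf("RS(%d,%d) stores %.1f bytes per user byte%n",
                          dataChunks, parityChunks, erasureOverhead);
    }
}

The trade-off is that reads after a failure need a reconstruction step rather than simply fetching another replica, which is where Google's "similar resilience" claim earns its hedging.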

Developers should bear in mind that using the cloud storage service locks them deeper into Google's own idiosyncratic way of doing things and pushes them further away from the de facto standard filesystem of the open-source large-scale data community. ®
