
BusinessWeek novel turns Google's cloud into epic hero

Search giant invented science and the future


Comment In a rather desperate bid to attract wealthy technology advertisers, BusinessWeek lowered itself this month by publishing data center erotica.

The business publication issued an immense cover story titled "Google and the Wisdom of Clouds". The piece covers Google and IBM's creation of a cluster for use by students and researchers. The two companies announced the cluster way back in October, publicizing their efforts to nudge coders toward parallel programming techniques.

BusinessWeek's story, while colorful and sometimes informative, borders on the delusional.

At its core, the piece hangs Google's "cloud computing" approach on this single cluster. You're meant to understand that the cluster points to Google's future where the company may - or may not - give outsiders access to its data centers in much the same way that Sun Microsystems, Amazon.com, Salesforce and others do today. Way beyond that concept, however, you're told that Google has pioneered a new method of giving students and researchers extra horsepower - a feat that may lead to amazing discoveries and a general peace on Earth.

Have we gone too far? Judge for yourself.

In building this machine, Google, so famous for search, is poised to take on a new role in the computer industry. Not so many years ago scientists and researchers looked to national laboratories for the cutting-edge research on computing. Now, says Daniel Frye, vice-president of open systems development at IBM, "Google is doing the work that 10 years ago would have gone on in a national lab."

The story's author, Stephen Baker, has an annoying habit of flitting back and forth between the cluster and Google's grand cloud - blech - vision, conflating the two ideas. So, let's try to dodge that issue by separating out the relevant bits.

Google invents national labs. Oh wait

First off, Google and IBM have supplied a few parties with access to a "large cluster of several hundred computers that is planned to grow to more than 1,600 processors," according to the two companies.

So, we're talking about something that any university or corporate customer could buy from IBM, HP, Dell or Sun Microsystems with a few clicks on a web site. Universities and research labs have spent years building similar clusters on their own, and they can tap into far larger systems today.

Google and IBM then outfitted the hardware with Linux, the Xen hypervisor and Apache's Hadoop software, an open source take on the MapReduce and Google File System (GFS) code used internally by Google. Yahoo! is now the largest corporate backer of Hadoop. As mentioned, this software helps teach programmers how to spread their jobs across hundreds or thousands of machines.
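For the uninitiated, the MapReduce model at the heart of all this is simple enough to show in a few dozen lines. Below is a minimal sketch of the canonical word-count job written against Hadoop's public MapReduce API - the standard tutorial example, not Google's internal code, and assuming a stock Hadoop installation on the classpath. The mapper emits a count of one for every word it sees, the framework shuffles those counts together by word across however many machines are available, and the reducer adds them up.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: runs once per chunk of input, in parallel across the cluster.
    // Emits (word, 1) for every token in the line it is handed.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: the framework groups all counts for a given word,
    // wherever they were produced, and hands them to one reduce() call.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The point, and the reason students need to practise the style, is that nothing in that code says how many machines run it: the same jar behaves identically on one laptop or a 1,600-processor cluster, with Hadoop handling the splitting, shuffling and retries.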

Without question, Google is doing some pioneering work in the parallel software field. The suggestion, however, that students and scientists have access to something new as a result is ludicrous.

Yes, the national labs have led crucial computing efforts in the past. But how could IBM's Frye forget his own company's work, or that of, say, HP, Sun, Microsoft, Hitachi, DEC and Cray? The list goes on and on. There has always been a mix of public and private computer science work, and, in fact, much of that work has relied on open source software and open networking protocols.

Is Google doing work that may have taken place at a national lab? Of course. Have national labs and other vendors given up on this type of work too? Er, no. To portray Google and IBM as unique godsends here is just wrong.

What's even more hilarious is Baker's suggestion that Google's cluster is "changing the nature of computing and scientific research." Computer scientists and researchers have been the biggest users of shared clusters and have led much of the work around parallel programming. This is all very commonplace stuff to them.

Lastly, Baker fails to mention key words like "mainframe" and "time-sharing" in his cover story. It took Google CEO Eric Schmidt - in a separate piece - to remind the author that these concepts are decades old. But why bother pointing that out when you can make the need for more horsepower in academia seem like a problem that only Google can solve?

Many [students] were dying for cloud knowhow and computing power - especially for scientific research. In practically every field, scientists were grappling with vast piles of new data issuing from a host of sensors, analytic equipment, and ever-finer measuring tools. Patterns in these troves could point to new medicines and therapies, new forms of clean energy. They could help predict earthquakes. But most scientists lacked the machinery to store and sift through these digital El Dorados.

Who knew?

And now to the cloud.
