Original URL: http://www.theregister.co.uk/2008/02/05/ibm_bluegene_web/

IBM explores 67.1m-core computer for running entire internet

Ruby on Rails on Rails on Rails on Rails

By Ashlee Vance

Posted in Servers, 5th February 2008 23:37 GMT

Exclusive We'll hand it to IBM's researchers. They think big - really big. Like holy-crap-what-have-you-done big.

The Register has unearthed a research paper that shows IBM working on a computing system capable "of hosting the entire internet as an application." This mega system relies on a re-tooled version of IBM's Blue Gene supercomputers so loved by the high performance computing crowd. IBM's researchers have proposed tweaking the Blue Gene systems to run the software behind today's most popular web applications - Linux, Apache, MySQL and Ruby on Rails.

The IBM paper rightly points out that both large SMP (symmetric multi-processing) systems and clusters have their merits for massive computing tasks. Of late, however, most organizations looking to crunch through really big jobs have preferred clusters, which provide certain economic advantages. Customers can buy lots of general purpose hardware and networking components at a low cost and cobble the systems together to equal or surpass the performance of gigantic SMPs.

Sun Microsystems, Amazon.com, Google and Microsoft stand as just some of the companies using these clusters to offer software, processing power and storage to other businesses. Their customers tap into these larger systems and can "grow" their applications as needed by firing up more and more of the provided computing infrastructure.

But there are a few problems with this approach, including the amount of space and energy the clusters require. So, IBM wants to angle Blue Gene boxes at the web software jobs, believing it can run numerous applications on a single box at a lower cost than a cluster.

"We hypothesize that for a large class of web-scale workloads the Blue Gene/P platform is an order of magnitude more efficient to purchase and operate than the commodity clusters in use today," the IBM researchers wrote.

Under a project code-named 'Kittyhawk,' IBM has started running new types of applications on Blue Gene. For example, it has run the SpecJBB benchmark for testing Java performance and the LAMP (Linux, Apache, MySQL, PHP/Perl/Python) software stack, finding performance comparable to today's clusters.

Blue Innards

IBM's unique Blue Gene design has attracted a lot of attention from national labs and other major HPC customers. In fact, four of the 10 fastest supercomputers on the planet rely on the Blue Gene architecture, including the world's fastest machine: the Blue Gene/L at Lawrence Livermore National Laboratory.

The newer Blue Gene/P system combines hundreds of thousands of low-power processor cores in a single box. A typical node consists of four 850MHz PowerPC cores arranged in a system-on-a-chip model with built-in memory and interconnect controllers. You can take 32 of these nodes and pop them onto a card. Sixteen of those cards then slot into a midplane, and each server rack holds two midplanes, leaving you with 1,024 nodes and 2TB of memory per rack. In theory, you can connect up to 16,384 racks, providing up to 67.1m cores with 32PB of memory. That'll get some work done.

Each rack boasts IO bandwidth of 640Gb/s, which puts our theoretical system at 10.4Pb/s.
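The arithmetic behind those headline numbers is easy enough to check. The figures below come straight from the paper; the tally itself is ours, a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope tally of the Blue Gene/P figures quoted above.
CORES_PER_NODE = 4
NODES_PER_CARD = 32
CARDS_PER_MIDPLANE = 16
MIDPLANES_PER_RACK = 2
MAX_RACKS = 16_384

nodes_per_rack = NODES_PER_CARD * CARDS_PER_MIDPLANE * MIDPLANES_PER_RACK
total_cores = MAX_RACKS * nodes_per_rack * CORES_PER_NODE

MEM_PER_RACK_TB = 2
total_memory_pb = MAX_RACKS * MEM_PER_RACK_TB / 1024  # binary prefixes

IO_PER_RACK_GBPS = 640
total_io_pbps = MAX_RACKS * IO_PER_RACK_GBPS / 1e6  # decimal prefixes

print(nodes_per_rack)    # 1024 nodes per rack
print(total_cores)       # 67,108,864 - the "67.1m cores"
print(total_memory_pb)   # 32 PB
print(total_io_pbps)     # roughly 10.5 Pb/s of aggregate IO
```

The totals line up with the article's figures, give or take a rounding choice on the IO bandwidth.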

The architecture of Blue Gene gives IBM a so-called "hybrid" approach, according to the researchers, where they can get the best of both the SMP and cluster worlds.

"The key fact to note is that the nodes themselves can be viewed for the most part as general purpose computers, with processors, memory and external IO," they wrote. "The one major exception to this is that the cores have been extended with an enhanced floating point unit to enable super-computing workloads."

So, you're working with systems in a sense very similar to the individual x86 boxes that make up most clusters. The unique packaging of the Blue Gene systems, along with their low-power cores, allows IBM to create a computer more than two orders of magnitude more reliable than commodity boxes, which fail all the time.

To date, IBM's Blue Gene systems, which obviously have remarkable scale, have been aimed at running a single job well across the entire box.

But now we're on to flashier stuff.

To run, er, the entire internet, IBM looks to craft more flexible systems. To that end, the researchers have presented their case for splitting software jobs across a Blue Gene computer, putting in track-able administrative controls, adding sophisticated error checks and booting software over the network.

Interested parties can find the gory software details in the report (PDF), although we'll summarize by saying that IBM is making heavy use of Linux, a hypervisor microkernel, network-based management, software appliances and a quasi-stateless approach.

In sample jobs run on prototype systems, IBM reckons the machines perform rather well. From the paper:

We experimented with Web 2.0 applications that are typically constructed from a LAMP stack (that is Linux, Apache, MySQL and PHP). We package the PHP business logic and Apache webserver in a 20MB appliance. By separating the database from the rest of the application stack, the nodes remain stateless. It is interesting to note that once the cost has been paid to parallelize a workload, the performance of individual nodes becomes irrelevant compared to overall throughput and efficiency.
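The stateless split the researchers describe - business logic on the node, all state pushed out to a separate database tier - can be sketched in miniature. The names and the in-memory "database" below are our own illustration, not Kittyhawk code:

```python
# Sketch of a stateless application node: every request handler reads and
# writes through an external store, so any node can serve any request and
# nodes can be added, removed, or replaced without losing data.

class ExternalStore:
    """Stand-in for the separate database tier (hypothetical)."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

def handle_request(store, user, action, payload=None):
    """Business logic with no node-local state: identical on every node."""
    if action == "save":
        store.put(user, payload)
        return "ok"
    if action == "load":
        return store.get(user)
    return "unknown action"

# Two interchangeable "nodes" sharing one store: a save handled by node A
# is visible to a load handled by node B.
store = ExternalStore()
handle_request(store, "alice", "save", "draft-1")  # served by node A
result = handle_request(store, "alice", "load")    # served by node B
```

Because the handler keeps nothing between requests, scaling out is just a matter of booting more identical appliance images, which is exactly the property the researchers are exploiting.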

Since web programmers are implicitly forced to parallelize their programs through the use of stateless business and display logic, their workloads make a good fit for an efficient highly parallel machine like Blue Gene. It is also important for the survival of a web company that suddenly becomes popular to be able to quickly scale their capacity, something that is difficult to do with commodity hardware that can require weeks of integration effort to bring online an additional thousand nodes. In contrast, a Blue Gene rack of 1024 nodes is validated during manufacture as a single system.

Or, say, SPECjbb2005.

SPECjbb2005 is a Java benchmark that measures the number of business operations a machine can sustain. The benchmark has a multi-JVM mode to deal with non-scalable JVMs. We used this mode and were able to spread the load across 256 Blue Gene nodes by using a harness that transparently forwards the network and filesystem accesses made by each worker.

We were able to run the benchmark across the 256 nodes that were available to us with a per-node performance of 9565 Business Operations per second (BOPS), yielding a reported score of 2.4 million BOPS. It is important to note that the benchmark rules state a requirement of a single operating system image, so we are not able to submit our performance results at this point. However, our initial results show that Blue Gene/P provides a powerful generic platform to run complex workloads.
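For the curious, the reported score is simply the per-node figure multiplied out across the nodes - our arithmetic, not IBM's harness:

```python
# Aggregate SPECjbb2005 score from the per-node figure quoted above.
nodes = 256
bops_per_node = 9565
total_bops = nodes * bops_per_node
print(total_bops)  # 2,448,640 - reported as "2.4 million BOPS"
```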

Is the Future Blue?

All of this sounds great, but the reality of IBM's approach is that it relies on PowerPC chips. Sure, you can run Linux on the processors, but how many folks doing open source web work will write code for the Power architecture?

IBM does offer tools now for moving Linux/x86 code over to Power, although there are some performance trade-offs. And performance trade-offs don't go well with screaming scale.

In addition, PowerPC simply fights the momentum of the x86 market.

That said, IBM has taken a novel approach to the utility computing, mega data center problem with this research. It has also given competitors a scare by flashing serious intentions to go after the utility business with systems that require a ton of investment and skill to match.

You can't help but get the feeling that IBM and others are on the right track by exploring these hybrid models which place an emphasis on low-power chips and tight, SMP-like design where needed. Maybe we'll all look back at clusters and laugh in a few years. ®