IBM explores 67.1m-core computer for running entire internet

Ruby on Rails on Rails on Rails on Rails

To run, er, the entire internet, IBM looks to craft more flexible systems. To that end, the researchers have presented their case for splitting software jobs across a Blue Gene computer, putting in track-able administrative controls, adding sophisticated error checks and booting software over the network.

Interested parties can find the gory software details in the report (PDF), although we'll summarize by saying that IBM is making heavy use of Linux, a hypervisor microkernel, network-based management, software appliances and a quasi-stateless approach.

In sample jobs run on prototype systems, IBM reckons the machines perform pretty well.

We experimented with Web 2.0 applications that are typically constructed from a LAMP stack (that is Linux, Apache, MySQL and PHP). We package the PHP business logic and Apache webserver in a 20MB appliance. By separating the database from the rest of the application stack, the nodes remain stateless. It is interesting to note that once the cost has been paid to parallelize a workload, the performance of individual nodes becomes irrelevant compared to overall throughput and efficiency.

Since web programmers are implicitly forced to parallelize their programs through the use of stateless business and display logic, their workloads make a good fit for an efficient highly parallel machine like Blue Gene. It is also important for the survival of a web company that suddenly becomes popular to be able to quickly scale their capacity, something that is difficult to do with commodity hardware that can require weeks of integration effort to bring online an additional thousand nodes. In contrast, a Blue Gene rack of 1024 nodes is validated during manufacture as a single system.
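
To make the stateless-node idea in those excerpts concrete, here is a minimal sketch of our own (not code from IBM's report) of a web handler that keeps nothing on the node itself and pushes every scrap of state to a separate database host. The host name, credentials and table are illustrative assumptions; the point is that any node can answer any request, so bringing up the next thousand nodes is a matter of booting them, not integrating them.

    # Minimal stateless web tier: all state lives on an external MySQL node.
    # Hypothetical host/credentials/schema; requires "pip install pymysql".
    from wsgiref.simple_server import make_server
    import pymysql

    def app(environ, start_response):
        # Open a fresh connection per request and store nothing locally,
        # so this node can be netbooted, replaced or multiplied at will.
        conn = pymysql.connect(host="db.internal", user="app",
                               password="secret", database="webapp")
        try:
            with conn.cursor() as cur:
                cur.execute("UPDATE counters SET hits = hits + 1 WHERE id = 1")
                cur.execute("SELECT hits FROM counters WHERE id = 1")
                (hits,) = cur.fetchone()
            conn.commit()
        finally:
            conn.close()
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [f"hits so far: {hits}\n".encode()]

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()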

Or, say, SPECjbb2005.

SPECjbb2005 is a Java benchmark that measures the number of business operations a machine can sustain. The benchmark has a multi-JVM mode to deal with non-scalable JVMs. We used this mode and were able to spread the load across 256 Blue Gene nodes by using a harness that transparently forwards the network and filesystem accesses made by each worker.

We were able to run the benchmark across the 256 nodes that were available to us with a per-node performance of 9565 Business Operations per second (BOPS), yielding a reported score of 2.4 million BOPS. It is important to note that the benchmark rules state a requirement of a single operating system image, so we are not able to submit our performance results at this point. However, our initial results show that Blue Gene/P provides a powerful generic platform to run complex workloads.
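
The reported aggregate is simply the per-node figure multiplied across the rack. A quick sanity check of that arithmetic (ours, not IBM's harness):

    # 256 nodes, each sustaining 9565 Business Operations per second.
    nodes = 256
    bops_per_node = 9565
    total = nodes * bops_per_node
    print(f"{total:,} BOPS, roughly {total / 1e6:.1f} million")
    # -> 2,448,640 BOPS, roughly 2.4 million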

Is the Future Blue?

All of this sounds great, but the reality of IBM's approach is that it relies on PowerPC chips. Sure, you can run Linux on the processors, but how many folks doing open source web work will write code for the Power architecture?

IBM does offer tools now for moving Linux/x86 code over to Power, although there are some performance trade-offs. And performance trade-offs don't go well with screaming scale.

In addition, PowerPC simply fights the momentum of the x86 market.

That said, IBM has taken a novel approach to the utility computing and mega data center problem with this research. It has also given competitors a scare by flashing serious intentions to go after the utility business with systems that require a ton of investment and skill to match.

You can't help but get the feeling that IBM and others are on the right track by exploring these hybrid models which place an emphasis on low-power chips and tight, SMP-like design where needed. Maybe we'll all look back at clusters and laugh in a few years. ®
