Grid Computing: mainstream, or not?
Analyst firm takes issue with IBM claims
Opinion: IBM has announced that Grid Computing is now a mainstream technology. In a press release just issued, it cited three of its customers’ applications as evidence of this.
This claim of mainstream status is very bold given Grid Computing’s science and research heritage. One of the most widely reported Grid applications, for example, has been Seti@home, an initiative set up to make use of the idle time of home PCs to assist in the search for extraterrestrial intelligence. The idea was to create as much computing power as a commercial supercomputer for five per cent of the cost by stealing a bit of resource from a few hundred thousand PCs.
Interesting though it is to the X-Files brigade, Seti@home is hardly relevant to the job of the average overworked sysadmin trying to keep a load of demanding and ungrateful users happy while running a bunch of servers on a shoestring budget. That is the core of mainstream as far as most of us are concerned.
That’s why it was a bit of a letdown reading the details of IBM’s “mainstream” examples. When we look at the Sal. Oppenheim private banking institute in Germany, we see that it is using Grid to deal with compute-intensive simulations for optimising price and risk analysis. We then have the Institut Français du Pétrole (IFP) running simulations in the areas of exploration and reservoir engineering, drilling and production, and car engine combustion. And to finish off, we have the Italian National Agency for New Technologies, Energy and the Environment (ENEA), applying Grid to yet more compute-intensive research problems.
These examples are arguably a little closer to the rest of us than searching for little green men, but not by that much. This underlines IBM’s apparent belief that Grid is still something that relates only to research environments – albeit more “mainstream” ones.
In theory, though, there is no reason why Grid Computing cannot be brought to bear in the average computer room and data centre running the usual mix of boring old business applications. Treating physical IT assets such as servers and storage systems as a single resource pool that can be dipped into whenever an application needs something can potentially have significant benefits. No longer do application servers need to be sized for peak activity and sit there just ticking over for the rest of the time. No longer do applications hit a wall or grind to a snail’s pace because of an unexpected demand that we couldn’t react to in time because of budget or resource constraints.
And it’s in the area of benefits that the IBM examples become more interesting, as they illustrate some of these principles. IFP, for example, has reportedly achieved 70 per cent server utilisation against a generally accepted industry average of less than 40 per cent for servers in a typical business environment. ENEA has improved service levels to its users and saved considerably on maintenance costs.
But it all still sounds pretty far removed from where most IT departments are today, so how do we bring it back to the real world?
The key to doing something practical with Grid is not to focus purely on the nirvana of automatic allocation and deallocation of computing resources to different applications on demand – which is essentially what Grid is about. As discussed in a recent Quocirca study of Grid-related activity in the real European mainstream, much of the benefit can be unlocked by moving forward in smaller, more manageable steps using storage and server virtualisation technologies. In fact, the results of this study suggest that many organisations are already moving in the direction of Grid without necessarily realising it.
And IBM too, without necessarily admitting it, is already a big player in the Grid environment – but seems loath to bring this into its more mainstream OnDemand message, perhaps for competitive positioning reasons. That it has proven capabilities goes without saying. That it runs the risk of losing market share to the likes of Oracle and HP, with their more mainstream messaging, is worrying.
IBM’s claim of Grid’s arrival into the mainstream is clearly exaggerated when we look at the bigger picture of the entire marketplace. But it is true that the move to Grid and Utility architectures is probably inevitable for many organisations and, in many cases in the longer term, may even be the only way for all the sysadmins out there to hold things together.
This is even more reason why IBM should be thinking a little more about the genuine mainstream when it considers the messages it is sending to the market in this area.