Grid computing: a real-world solution?

It appears so

Analysis The problem with grid computing has traditionally been tying it down to a real-world context. The theory is great – getting lots of individual technical components working together as if they were one big resource – but it’s the wackier, conversation-stimulating applications that have received all of the attention.

Everyone donating a little of their PC’s power, when they are not using it, to help in the search for extraterrestrials is the kind of thing you can talk about in the pub, wine bar or over dinner. The same goes for the notion that the day will come when we no longer rely on computers dotted all over our homes and businesses, but simply consume processing power through a socket in the wall, as we do with electricity.

All interesting stuff, so it’s no wonder that people latch onto the scientific and utility computing aspects of grid.

Start telling your non-IT friends about average utilisation rates in your data centre or computer room, though, or the hassle you have to go through when the sales department wants to beef up the call centre system because it’s running like a dog, and they soon start yawning.

And this is one of the biggest challenges with grid – the perception of what it’s for. An emphasis on the interesting and unusual creates the impression of something very niche or futuristic. In reality, though, grid’s greatest potential impact in the immediate term lies in addressing the boring, tedious operational problems that IT departments struggle with daily: systems maintenance, coping with growth and shrinkage, squeezing more out of hardware so the budget can be spent elsewhere, and trying to keep users happy whatever they run and whenever they run it.

When we consider grid computing in this context, the term often used is “Enterprise Grid”, as opposed to “Scientific Grid”, “Utility Computing” or that very optimistic term “The Global Grid”, which is based on the notion that one day, all computers will be joined up and will work in harmony to solve the entire world’s computing problems for free.

Back in the here and now, Enterprise Grid is really just the next step in the evolution of computing architectures. Take concepts such as server virtualisation and clustering, add a degree of automation so that physical servers can be allocated to and de-allocated from different workloads with minimal or no manual intervention, and you have the processing power dimension of grid. Add similar automation to storage and databases, and you have the data dimension. With automatic or semi-automatic provisioning and de-provisioning of hardware and software assets in response to changing demand, there is less need for the PFY (Pimply Faced Youth) to run around rebuilding servers by hand, with all the risks and delays that go with that.
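To make that automation concrete, here is a minimal sketch in Python of the kind of rebalancing loop being described. All the names and demand figures are invented for illustration – no vendor’s actual provisioning API looks like this.

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    demand: int                        # servers this workload currently needs
    servers: list = field(default_factory=list)

def rebalance(workloads, free_pool):
    """De-provision surplus servers into the pool, then satisfy shortfalls from it."""
    for w in workloads:                # release surplus first, freeing capacity
        while len(w.servers) > w.demand:
            free_pool.append(w.servers.pop())
    for w in workloads:                # then provision from the shared pool
        while len(w.servers) < w.demand and free_pool:
            w.servers.append(free_pool.pop())

# Daytime: the call centre peaks while the overnight batch system idles.
pool = [f"server-{i}" for i in range(10)]
call_centre = Workload("call-centre", demand=7)
batch = Workload("overnight-batch", demand=2)
rebalance([call_centre, batch], pool)
print(call_centre.name, len(call_centre.servers))   # call-centre 7
print(batch.name, len(batch.servers))               # overnight-batch 2

# Overnight the picture reverses; the same loop reassigns the same hardware.
call_centre.demand, batch.demand = 2, 7
rebalance([call_centre, batch], pool)
```

The point is not the few lines of code, of course, but that the reassignment happens without anyone rebuilding a box.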

Such a capability can make the service delivered to users more responsive, as well as taking pain, overhead and cost out of IT operations. If you can move lots of small servers around quickly to where they are needed, there is also less need to oversize individual servers to cope with the peaks and troughs of normal use. To take a really simple example, if one application peaks while another troughs, and vice versa, a grid computing environment will simply switch resources between the two as appropriate.
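To put some invented numbers on that example: if application A needs eight servers at its 9am peak but only two overnight, and application B mirrors that pattern, sizing each statically for its own peak means buying 16 servers, whereas a grid that shifts boxes between the two gets by with ten.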

This is the theory, but can people out there in the real world of IT relate to it?

Well, increasingly, the answer is “Yes”. As part of a recent Quocirca study commissioned by Oracle to generate statistics for its recently publicised Grid Index, we measured an average (self-declared) level of grid knowledge in Europe of 4.7 on a scale of 0 to 10, where 0 = “completely unfamiliar” and 10 = “deep understanding”. In itself, this might not seem very impressive, but when you consider that nine months earlier the average level of familiarity we measured was just 2.2, it is clear that the number of IT professionals taking notice and getting educated on grid is growing rapidly. Furthermore, average knowledge levels for the virtualisation technologies that underpin any grid architecture were between 6 and 7, and growing at a similar rate.

Without getting too bogged down in statistics, though, one of the most valuable aspects of this kind of research is the way we can pull out interesting correlations. For example, appreciation of the operational and service-level benefits rose in step with familiarity, suggesting that the relevance of grid becomes clear as people begin to understand it – i.e. it is not all vendor hype. Another revealing observation was that server utilisation was significantly higher amongst early adopters of grid and virtualisation technologies, which supports the view that the theoretical efficiency gains in this area are real.

At a more strategic level, we discovered that commitment to grid computing and commitment to service-oriented architecture (SOA) go hand in hand. This should not be a surprise: the component-based software model usually associated with SOA gives those investing in grid more flexibility and finer control, while a grid environment helps those investing in SOA get the best out of component-based software architectures. Wherever you start, one naturally leads to the other.

But the research also highlighted a number of challenges, ranging from concerns over solution maturity, through skills availability, to the fluidity of standards – all of them calls to action for IT vendors to keep investing in R&D, education and collaboration, both amongst themselves and with standards bodies and special interest groups.

Nevertheless, bearing in mind that grid computing is enabled by an evolutionary set of technologies, not everything has to be in place at once for IT departments to start moving in that direction. Whether you buy from Oracle, IBM, HP, Dell or any other major infrastructure solutions vendor, it is probably time to start asking about virtualisation and grid options when you are next in the market for server hardware and/or software. They all have offerings of one kind or another in this space, and looking at these technologies in the context of a real-world project or bid is probably not a bad thing to do. You can always defer adoption if you don’t like what they come up with, but at least you will have received some free education on a rapidly developing part of the market.

In the meantime, more details of the research referred to in this article, which was based on interviews with 1,350 IT professionals worldwide, are available in a free Quocirca report entitled Grid Computing Update. An Oracle document discussing geographic variation in grid related activity based on the same study is available here (PDF).

Related stories

Grid computing meets data flow challenge
Dutch turn town into supercomputer
Globus Consortium takes grid computing to the office
