Hidden by virtualization: Grid computing

Something to it after all?

I read a good article from our pal Michael Feldman on Digipede, a ‘pure-play’ grid software company that focuses exclusively on Microsoft and their Windows/.NET products. It prompted some thoughts on my part about grid computing, and how it might play an increasingly large role in the future.

For the uninitiated, grid computing allows a single software job to be parceled up and sent out to a bunch of different nodes for completion. The master node in the grid divvies up the work, checks on progress, reallocates jobs if necessary, and assembles the final results. In a lot of ways, it’s like a really smart scheduler.
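The master/worker pattern described above can be sketched in a few lines. This is a toy illustration only, not any real grid product's API: the "nodes" are simulated with a thread pool, and the names (`sum_chunk`, `run_on_grid`) are invented for this example.

```python
# Toy grid: a master divvies one job into chunks, sends them to worker
# "nodes" (here, threads), checks on progress, and assembles the result.
from concurrent.futures import ThreadPoolExecutor, as_completed

def sum_chunk(chunk):
    """The work done on a single grid node."""
    return sum(chunk)

def run_on_grid(data, n_nodes=4):
    # The master divvies up the work into roughly equal chunks...
    size = max(1, len(data) // n_nodes)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    partials = []
    with ThreadPoolExecutor(max_workers=n_nodes) as grid:
        futures = [grid.submit(sum_chunk, c) for c in chunks]
        for f in as_completed(futures):   # ...checks on progress as nodes finish...
            partials.append(f.result())
    return sum(partials)                  # ...and assembles the final result.

print(run_on_grid(list(range(100))))      # → 4950
```

A real grid suite does the same thing across machines rather than threads, which is why the scheduling smarts matter so much more there.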

Back around the turn of the century, grid computing was supposed to be the next big thing. It was supposed to transform both HPC and corporate computing in deep and fundamental ways – in a good way (faster, cheaper, better, and so on). While grids are widely used in HPC, they really didn’t catch on in corporate computing; the grid concept seemed to drown in the wave of virtualization that was then just starting to break.

However, there is an emerging case for grid computing in the corporation. I think we are poised to see it coming around again – but probably not called ‘grid’. In my mind, the trend toward more analytics will require grids or grid-like capabilities. I believe that businesses are going to increase their use of analytics piecemeal – dipping their toes in the water with reasonably small projects rather than going whole hog for the, well, whole analytics hog.

While many in the vendor community believe (hope?) that companies will buy their turn-key analytics appliances or integrated bundles, I think that most customers will opt to use a combination of new, general-purpose systems and their old stuff. Because of this approach, virtualization is a key enabler – it will allow customers to run analytics workloads on systems hosting other stuff as well.

But while virtualization mechanisms do have schedulers and some workload management capabilities, they don’t have the functionality of a full-fledged grid suite. With grid, these big analytics jobs can be automatically parceled out to any or all systems on the grid – without disturbing existing workloads.

The head node is smart enough to manage the process and handle problems like failed systems along the way. It will even give you a polite ‘ding’ when your job is done, and the results are ready to view. (Ok, I’m making that part up, but I’m sure it could be scripted.)
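That fault handling can be sketched in miniature: a head-node loop that runs tasks, notices a failed node, and reallocates the task for another attempt. Everything here (`schedule_with_retries`, `flaky_node`) is invented for illustration, assuming a simple sequential scheduler rather than any actual grid suite.

```python
def schedule_with_retries(tasks, run, max_retries=3):
    """Head-node loop: run each task, re-queueing failures up to max_retries."""
    results = {}
    pending = list(tasks)
    retries = {t: 0 for t in tasks}
    while pending:
        task = pending.pop(0)
        try:
            results[task] = run(task)
        except RuntimeError:
            retries[task] += 1
            if retries[task] > max_retries:
                raise  # the job genuinely can't complete
            pending.append(task)  # reallocate the task for another attempt
    return results

# Simulated node that crashes the first time it sees each task.
seen = set()
def flaky_node(task):
    if task not in seen:
        seen.add(task)
        raise RuntimeError(f"node failed while running task {task}")
    return task * task

print(schedule_with_retries(range(5), flaky_node))
# → {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```

The polite ‘ding’ is left as an exercise, but as the author says, it could be scripted: add a notification call right before `return results`.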

Virtualization vendors would be smart to take a close look at grid functions and then find a way to add them to their virtualization suites. This takes them yet another step away from relying on the rapidly commoditizing hypervisor as a source of differentiation and profits, plus it gives them new benefits to tout. It would also put them in a position to pave the way for customers adding more HPC-like apps (à la analytics) rather than having to play catch-up. ®
