Autonomic Computing – the IBM blueprint

Broad Brush

IBM has been talking about autonomic computing for well over a year. This month it issued a 40-page blueprint (pdf). So what is it, why do we need it, how does it work, is it important, and has IBM got it right, asks Peter Abrahams of Bloor Research.

Autonomic computing is IBM's term for the ability of systems to be more self-managing. The term "autonomic" comes from the autonomic nervous system, which controls many organs and muscles in the human body. Usually, we are unaware of its workings because it functions in an involuntary, reflexive manner.

So what does it mean for an IT environment to be autonomic? It means that systems are:

  • Self-configuring, to increase IT responsiveness/agility
  • Self-healing, to improve business resiliency
  • Self-optimizing, to improve operational efficiency
  • Self-protecting, to help secure information and resources

Why do we need it? The computing systems we have developed over the last ten years have become complex meshes of interrelated applications and servers. Keeping them running smoothly is a time- and people-intensive activity that does not always succeed. But you ain't seen nothing yet! Loosely coupled web services, outsourcing of parts of the environment, and the implementation of larger and more complex applications all mean that manual management of these systems will become impossible. The answer is to give this large problem to the computer to fix.

How does it work? The solution covers elements at all levels, from the base hardware platforms, through the various software layers, to the business processes. Each element must include sensors that collect information about state and transitions, and effectors that can alter the element's configuration.
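
To make this concrete, the sketch below shows what a managed element's contract might look like in Java. The names (ManagedElement, readSensors, applyConfiguration) are illustrative assumptions, not APIs defined in the blueprint.

// Hypothetical contract for a managed element: sensors expose state,
// effectors accept configuration changes. Names are invented for
// illustration; the blueprint specifies the roles, not this API.
import java.util.Map;

interface ManagedElement {

    // Sensor: collect information about current state and transitions.
    Map<String, String> readSensors();

    // Effector: alter the element's configuration.
    void applyConfiguration(Map<String, String> settings);
}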

The autonomic management layer has four major components: a monitor that collects information from the sensors; an analyser that correlates and models the state of the systems and predicts and reports on issues; a planning function that takes those issues and develops solutions; and an execution element that puts the plan into effect through the effectors. The management layer is itself an element of the total system and must also be managed, so it has its own sensors and effectors. This structure enables hierarchies of management as well as peer-to-peer management functions.

Finally, the management layer includes a knowledge base that holds system topology, calendars, activity logs and policy information.
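
The four components and the knowledge base together form a control loop. The sketch below is a minimal, hypothetical rendering of that loop in Java: the class and method names, and the use of plain string maps for state and plans, are assumptions made for brevity, not anything the blueprint prescribes.

// Hypothetical autonomic manager implementing the loop described above:
// monitor -> analyse -> plan -> execute, backed by a knowledge base.
import java.util.List;
import java.util.Map;

class KnowledgeBase {
    // Would hold system topology, calendars, activity logs and policies.
    // Left empty here; a real implementation persists and queries these.
}

class AutonomicManager {

    private final ManagedElement element;            // see sketch above
    private final KnowledgeBase knowledge = new KnowledgeBase();

    AutonomicManager(ManagedElement element) {
        this.element = element;
    }

    // One pass of the control loop; in practice it runs continuously.
    void controlLoop() {
        Map<String, String> state = monitor();
        List<String> issues = analyse(state);
        Map<String, String> changes = plan(issues);
        execute(changes);
    }

    // Monitor: collect information from the element's sensors.
    private Map<String, String> monitor() {
        return element.readSensors();
    }

    // Analyser: correlate and model system state against the knowledge
    // base, then predict and report issues. Stubbed for illustration.
    private List<String> analyse(Map<String, String> state) {
        return List.of();
    }

    // Planner: turn reported issues into configuration changes,
    // guided by policy in the knowledge base. Stubbed for illustration.
    private Map<String, String> plan(List<String> issues) {
        return Map.of();
    }

    // Execution: put the plan into effect through the effectors.
    private void execute(Map<String, String> changes) {
        element.applyConfiguration(changes);
    }
}

Because the management layer is itself an element that must be managed, a fuller version would also have AutonomicManager implement ManagedElement, which is what permits the hierarchical and peer-to-peer arrangements described above.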

The blueprint also makes it clear that autonomic computing is a journey and defines basic, managed, predictive, adaptive and autonomic as the steps along the way.

Is it important? The problem is real, so a solution is important. IBM is a major player, and solving this problem is essential to its e-business on-demand strategy. E-business on-demand, we are told, is pivotal to IBM's future direction, so the blueprint is important and will have an impact on the market. IBM is working very closely on related standards, such as DMTF-CIM, IETF-SNMP, OASIS WS-S and WS-DM, SNIA BlueFin, GGF-OGSA and the Open Group's ARM, and will be using the blueprint to guide its input to those standards.

Has IBM got it right? IBM has a very wide view of the issues, as it provides hardware, software, services and, increasingly, outsourced e-business on-demand utility computing. The blueprint is similarly broad, and this is its strength as well as its weakness.

It is an excellent overall vision and gives a structure for understanding all the initiatives in this space from standards bodies and other vendors; but it is more difficult to understand than the more targeted solutions and messages from 'niche' players like CA, HP and Microsoft.

© IT-Analysis.com
