Autonomic Computing – the IBM blueprint

IBM has been talking about autonomic computing for well over a year. This month it issued a 40-page blueprint (pdf). So what is it, why do we need it, how does it work, is it important, and has IBM got it right? asks Peter Abrahams of Bloor Research.

Autonomic computing is IBM's term for the ability of systems to be more self-managing. The term "autonomic" comes from the autonomic nervous system, which controls many organs and muscles in the human body. Usually, we are unaware of its workings because it functions in an involuntary, reflexive manner.

So what does it mean for an IT environment to be autonomic? It means that systems are:

  • Self-configuring, to increase IT responsiveness/agility
  • Self-healing, to improve business resiliency
  • Self-optimizing, to improve operational efficiency
  • Self-protecting, to help secure information and resources

Why do we need it? The computing systems we have developed over the last ten years have become complex meshes of interrelated applications and servers. Keeping these running smoothly is a time- and people-intensive activity that does not always succeed. But you ain't seen nothing yet! Loosely coupled web services, outsourcing of parts of the environment and the implementation of larger and more complex applications all point to manual management of these systems becoming impossible. The answer is to give this large problem to the computer to fix.

How does it work? The solution covers elements at all levels, from the base hardware platforms, through the various software layers, to the business processes. Each element must include sensors that collect information about its state and transitions, and effectors that can alter the element's configuration.
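
As a rough illustration of that idea, the sketch below shows one way a managed element might expose a sensor and an effector. The interface and method names are invented for illustration; they are not taken from IBM's blueprint or any IBM toolkit.

  // A minimal sketch of a managed element (e.g. a server or application)
  // exposing a sensor for reading state and an effector for changing
  // configuration. Names are hypothetical.

  import java.util.Map;

  interface ManagedElement {
      /** Sensor: report current state and recent transitions as name/value pairs. */
      Map<String, String> readSensor();

      /** Effector: apply a configuration change to the element. */
      void applyEffector(Map<String, String> newSettings);
  }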

The autonomic management layer has four major components: the monitor, which collects information from the sensors; the analyser, which correlates and models the state of the systems and predicts and reports on issues; the planning function, which takes these issues and develops solutions; and the execution element, which puts the plan into effect through the effectors. The management layer is itself an element of the total system that must also be managed, so it has its own sensors and effectors. This structure enables hierarchies of management as well as peer-to-peer management functions.

Finally, the management layer includes a knowledge base that holds system topology, calendars, activity logs and policy information.
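
To make the monitor-analyse-plan-execute loop and the shared knowledge base more concrete, here is a hedged sketch of one pass of such a control loop. It reuses the hypothetical ManagedElement interface from the earlier sketch, and all class and method names here are likewise invented for illustration, not drawn from the blueprint.

  // Illustrative control loop: monitor, analyse, plan, execute, with a
  // shared knowledge base. All names are hypothetical.

  import java.util.List;
  import java.util.Map;

  /** Hypothetical knowledge base holding topology, calendars, activity logs and policies. */
  interface KnowledgeBase {
      List<String> findPolicyViolations(Map<String, String> state);
      Map<String, String> recommendSettings(List<String> issues);
      void logActivity(Map<String, String> state, List<String> issues, Map<String, String> plan);
  }

  class AutonomicManager {
      private final ManagedElement element;   // the resource being managed
      private final KnowledgeBase knowledge;   // topology, calendars, logs, policies

      AutonomicManager(ManagedElement element, KnowledgeBase knowledge) {
          this.element = element;
          this.knowledge = knowledge;
      }

      /** One pass of the control loop; in practice this would run continuously. */
      void runOnce() {
          Map<String, String> state = element.readSensor();   // monitor
          List<String> issues = analyse(state);                // analyse
          Map<String, String> plan = plan(issues);             // plan
          if (!plan.isEmpty()) {
              element.applyEffector(plan);                     // execute
          }
          knowledge.logActivity(state, issues, plan);          // record for later analysis
      }

      private List<String> analyse(Map<String, String> state) {
          // Correlate sensor readings against policies held in the knowledge base.
          return knowledge.findPolicyViolations(state);
      }

      private Map<String, String> plan(List<String> issues) {
          // Translate each issue into a proposed configuration change.
          return knowledge.recommendSettings(issues);
      }
  }

Because the manager itself exposes sensors and effectors in the blueprint's model, such loops can be stacked into hierarchies or arranged peer to peer, as described above.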

The blueprint also makes it clear that autonomic computing is a journey and defines basic, managed, predictive, adaptive and autonomic as the steps along the way.

Is it important? The problem is real, so a solution is important. IBM is a major player, and solving this problem is essential to its e-business on-demand strategy. E-business on-demand, we are told, is pivotal to IBM's future direction. So the blueprint is important, as it will have an impact on the market. IBM is working very closely on related standards such as DMTF CIM, IETF SNMP, OASIS WS-S and WS-DM, SNIA Bluefin, GGF OGSA and The Open Group's ARM, and will be using the blueprint to guide its input to these standards.

Has IBM got it right? IBM has a very wide view of the issues, as it provides hardware, software, services and, increasingly, outsourced e-business on-demand utility computing. The blueprint is similarly broad, and this is both its strength and its weakness.

It is an excellent overall vision and gives a structure for understanding all the initiatives in this space from standards bodies and other vendors, but it is harder to grasp than the more targeted solutions and messages from 'niche' players such as CA, HP and Microsoft.

© IT-Analysis.com

IBM's autonomic computing site
