What goes around...
It is quite normal for old concepts to be repackaged and reproduced as new ideas. For a long time, there has been a view that virtualisation is good because it provides a common platform for development. However, this is creating opportunities for others who can do clever things beneath that platform.
We all know the virtualisation story. There's a piece of technology buried somewhere within a business solution that has its own peculiar mannerisms and needs some specialised skills to make it work effectively. Those skills are always difficult to find so the industry contrives to build a software layer over the various different technologies and creates a common approach to handling all of them.
This soft approach to solutions has a great deal going for it and so it is becoming more and more prevalent. In some cases, we can even end up with new layers of abstraction that seek to unify different abstraction methods that have happened in the past.
Web services can be seen in this way. The new standards for creating loosely-coupled business components are being seen in some quarters as a way to resolve the conflicts that exist between COM, CORBA, J2EE and some of the proprietary EAI techniques.
For solutions developers, virtualisation is a real bonus. It allows heterogeneous environments to be handled using a single code base. The time to market for the solution is much reduced and the business benefits as a result.
The problem that arises with each level of abstraction is that, inevitably, a lowest common denominator is set. The solutions are developed to a set of functions that can be supported across all of the underlying technologies. They do not take into account all of the neat features that the OEM may have introduced to improve efficiency or to make the solution more scalable. As a result, applications do not perform quite as well as they could. It is the operational technicians who have to deal with the day-to-day consequences of this failure.
The obvious example is the way that application developers treat networks. A message has to go from point A to point B. The developer simply uses a standard piece of code out of the SDK that executes that function. Within the SDK, it is most likely that the message will simply be placed on a basic queue and use standard protocols to reach its destination. It cannot take advantage of better routing options, or of multicast support that could reduce the pressure on the network.
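The difference can be sketched in a few lines. This is an illustrative toy, not any real SDK: `unicast_fanout` and `multicast_send` are hypothetical names, and the "network" here is just a list recording transmissions, so the point is the traffic count rather than real sockets.

```python
def unicast_fanout(message, recipients, network):
    """Lowest-common-denominator SDK approach: one copy per recipient."""
    for r in recipients:
        network.append((r, message))  # one transmission for each recipient

def multicast_send(message, recipients, network):
    """Multicast-aware approach: a single transmission reaches the group."""
    network.append((tuple(recipients), message))  # one transmission in total

recipients = ["nodeA", "nodeB", "nodeC"]
traffic_generic, traffic_smart = [], []
unicast_fanout("price-update", recipients, traffic_generic)
multicast_send("price-update", recipients, traffic_smart)
# The generic layer puts three packets on the wire; multicast puts one.
```

With a thousand subscribers the generic layer sends a thousand copies, which is exactly the kind of pressure an abstraction-aware layer beneath the platform could remove.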
This is where an opportunity is arising for solutions that sit between the virtualisation layer and whatever technology is being abstracted. These solutions can improve the efficiency of the infrastructure simply by making better use of the underlying features.
The savings we make in reduced time to market are paid for in overall efficiency during the life of the application. When that cost becomes too high, we will revert to the old ideas. We will start to derive advantage from specific knowledge of technologies, and speed will be king once more. What goes around, comes around.
© ComputerWire. All rights reserved.