Original URL: https://www.theregister.com/2005/12/07/virtual_agile/

Go Virtual to become Agile

Getting just what you need from a virtual pot

By Martin Banks

Posted in On-Prem, 7th December 2005 17:17 GMT

The bigger the IT user, the more likely they are to suffer the slings and arrows of outrageous server under-utilisation. Most of the major research companies have, at some time or another, studied the use of server resources in a production environment and found it to be low. A typical server – you know the type of beast: dual Xeons, a gigabyte or two of memory and a reasonable RAID array – is usually only doing something productive for around 20 per cent of its life. The rest of the time, it sits there, idling away the hours, doing nothing.

This is a by-product of two factors: the way that servers have developed as stand-alone entities, and the buying patterns of users; if users have needed more resources, they have purchased more servers rather than exploiting the resources already standing around doing nothing. To be fair to users, finding ways of exploiting those existing resources has been no easy task, as no clear-cut technological solution to the problem has existed until now. That solution is virtualisation – the ability to "build" virtual servers as and when they are required out of an existing set of IT resources.

Virtualization II

The technology is based on the ability to partition a computer so that it can run more than one task. The basics are in fact old technology from the mainframe era, but they are now being applied far more widely. Intel's recent development of Virtualization Technology (VT), which builds in the ability to partition individual processors, means that the concepts of virtualisation can now be taken down to the level of the individual PC. In this case, it will be possible to partition each processor so that it runs multiple environments – a Windows application in one and a Linux application in another, for example. Each will run independently of the other, and any problems with one will not crash the other. Processors with VT will be available during 2006.
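As a rough illustration, on a Linux machine this hardware support shows up as a CPU feature flag – "vmx" for Intel's VT, "svm" for AMD's equivalent – and a few lines of Python (assuming a Linux-style /proc filesystem) are enough to check whether a given box advertises it:

    # Minimal sketch: look for the hardware virtualisation feature flag
    # ("vmx" on Intel, "svm" on AMD) in the kernel's CPU report.
    def has_hw_virtualisation(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
        return False

    print("Hardware virtualisation supported:", has_hw_virtualisation())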

It is at the server level that the most important benefits of virtualisation will be found, however. Those benefits are relatively simple to state: greater use of available resources, which in turn will mean a reduction in (or a greater return on) server investments over the long haul; greater flexibility and operational agility, since applications can be run when they are needed rather than waiting for additional resources to be purchased; and better system management and lower long-term running costs through the centralisation of more powerful server resources.

For many users (and the larger the user, the more likely this is to be the case), the words "more powerful server resources" will carry an implied threat of yet more investment, and in the short term that may be true. In the long haul, however, virtualisation has the potential to slow the rate of investment and generate a better return. This is because the best approach to virtualisation is to consolidate the servers – effectively replacing a plethora of dispersed, individual servers with more centralised datacentres built around racks of standard servers that lie at the heart of the corporate network infrastructure.

Datacentres give users the flexibility to change, adapt or grow their business processes in close to real time, in order to meet changing business needs. They can, subject to the provisions of an application’s licence requirements, install and run an application on just the slice of server resource it needs, at the time it is needed. That same hardware resource can then be used to run a different application once that specific task is completed. Most important of all, should a task become a high priority requiring the commitment of significant resources – a classic topical example being the workload created in processing and fulfilling orders generated by a Christmas marketing campaign – those resources can be made available without purchasing yet more servers.
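To make the idea concrete, the following Python sketch – purely illustrative, with hypothetical names rather than any vendor's API – models a shared pool of processors and memory from which an application borrows just the slice it needs, then hands it back for the next workload:

    # Toy model of a consolidated resource pool: allocate a slice of CPU
    # and memory to a task on demand, release it when the task completes.
    class ResourcePool:
        def __init__(self, cpus, memory_gb):
            self.free_cpus = cpus
            self.free_memory_gb = memory_gb

        def allocate(self, task, cpus, memory_gb):
            if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
                raise RuntimeError(f"Not enough free capacity for {task}")
            self.free_cpus -= cpus
            self.free_memory_gb -= memory_gb
            print(f"{task}: allocated {cpus} CPUs, {memory_gb} GB")

        def release(self, task, cpus, memory_gb):
            self.free_cpus += cpus
            self.free_memory_gb += memory_gb
            print(f"{task}: released {cpus} CPUs, {memory_gb} GB")

    pool = ResourcePool(cpus=16, memory_gb=64)
    pool.allocate("order-processing", cpus=8, memory_gb=32)  # Christmas peak
    pool.release("order-processing", cpus=8, memory_gb=32)   # peak over
    pool.allocate("reporting", cpus=4, memory_gb=16)         # capacity reused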

The flashing blade

The hardware for such datacentres is, for the moment, most commonly a 2- or 4-processor rack-mounted server, sometimes called a ‘brick’. The coming thing, however, is the Blade server, a thinner unit that shares resources such as power supplies, communications and even storage. These can be densely packed together and come with a racking system that provides the power supplies, connectivity and other required services. As well as processor Blades, other types are now becoming available for managing specific services, such as networks, attached storage and the like.

The key advantage is that it is a simple task to expand such an environment by adding more Blades. So there are now two possible approaches to scaling the system. One is what might be called the "permanent" approach, adding more hardware resources in the form of more Blade servers, which can be achieved without halting the datacentre’s operations. The other is the purely "virtual" approach, where the flexible nature of consolidated hardware and virtualisation technologies allows the available resources to be reassigned as and when required.

The other key component in advancing virtualisation is the necessary management software, which monitors and controls the operations of the individual servers, assigns workloads to them and delivers results to the appropriate recipient users or systems. In this way, the correct number of servers, together with the appropriate operating environment, can be made available at the time an application needs to run. When the task is completed, the application and operating system are uninstalled and the resource is made available to the next task. This approach has the potential not only to reduce the ongoing investment needed in hardware but also to cut the cost of managing the infrastructure, if only through the centralisation of resources into a limited number of locations.
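Again purely as a sketch – the function and server names below are hypothetical, not those of any real management suite – the lifecycle that such a management layer drives for each task can be expressed in a few lines of Python: claim a free virtual server, lay down the operating environment and application, run the workload, deliver the result, then tear everything down and return the capacity to the pool:

    # Illustrative lifecycle only: provision, run, deliver, tear down.
    def run_task(pool, task):
        server = pool.pop()                  # claim a free virtual server
        print(f"Provisioning {task['os']} + {task['app']} on {server}")
        result = task["workload"]()          # run the job, collect output
        print(f"Tearing down {server}, result delivered: {result}")
        pool.append(server)                  # capacity back in the pool

    free_servers = ["vm-01", "vm-02", "vm-03"]
    run_task(free_servers, {
        "os": "Linux",
        "app": "order-fulfilment",
        "workload": lambda: "orders processed and fulfilled",
    })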

Raising standards

There are two other factors of importance with virtualisation and scalable systems these days: standardisation and interoperability. The fundamental goal is that, as far as possible, anything should work with anything else. For example, as the three dominant operating systems – Windows, Unix and Linux – all run natively on the x86 architecture, it is not surprising that processors using it are the most common hardware platform. Though other processor types are used, particularly for proprietary Unix applications, they are in the minority. When it comes to management software, there are a number of contenders, including IBM’s Tivoli and Director suites, HP’s OpenView and Microsoft’s MOM, but even here the major players now all make a point of ensuring that their management systems interoperate with their rivals’.

So virtualisation has the potential to create operational infrastructures that really do allow users to build far more agile and flexible business processes that can be scaled to meet both occasional and permanent increases in demand, while reducing both investment and management costs.®