A brief history of virtualisation

Virtualisation is not a novelty. It's actually one of the last pieces of the design of 1960s computers to trickle down to the PC – and only by understanding where it came from and how it was and is used can you begin to see the shape of its future in its PC incarnation.

As described in our first article in this series, current PC virtualisation means either hardware-assisted (Hyper-V, Xen etc) or all-software (VMware) full-system virtualisation.

Full-system virtualisation means a full-fat server OS running multiple virtual machines, each of which is a complete emulated PC with an emulated chipset and emulated disk drives, running a complete full-fat server or client OS. What the mainstream – that is, Windows-using – world seems to have forgotten, if it ever knew at all, is that there are other ways to crack the virtualisation nut, with their own unique benefits.

Virtualisation got really big, really quickly on the PC in three stages. Firstly, VMware showed that it could be done, in defiance of Popek and Goldberg's virtualisation requirements, which the x86 of the day didn't meet.

Secondly, this caught on to the extent that Intel and AMD added hardware virtualisation to their processors. Thirdly came the rise of 64-bit machines with many CPU cores and threads and umpteen gigs of RAM: resources that existing 32-bit OSs and apps can't use effectively, but which virtualisation devours with relish.

PC virtualisation is not ready for the big time just yet

Currently, however, the PC's full-system virtualisation is just about the simplest, most primitive and inefficient kind. When you look at the fancy tools that VMware and Microsoft are creating to provision and manage VMs – and the large-scale rollouts that are starting to occur – it's easy to forget that this is not a mature technology. In fact, PC virtualisation is still in its youth, and the fact that it is starting to show a few hairs on its chin doesn't mean that it is ready for the big time just yet.

Before you can understand how far it has yet to go, though, you need to know a bit of the background. And there's more of it than you might expect.

Before the PC: IBM invents virtualisation

Of course, there is nothing new under the Sun. (Or should that be under the Oracle, these days?) The arrival of ubiquitous virtualisation on the PC could be seen as delivering one of the last pieces of the feature set introduced by IBM's System/360 computers of the 1960s.

An original member of the System/360 family announced in 1964, the Model 50 was the most powerful unit in the medium price range.

IBM System/360: Hot new tech from the 1960s

Launched in 1964, the S/360 was intended from the start to be a whole range of compatible computers, stretching from relatively small, inexpensive machines to large, high-capacity ones. The S/360 took a radical new approach: all would run the same software, so that programs could be moved from one machine to another without modification – a bold innovation at the time.

Some of the exotic new features of the S/360 might sound familiar: memory addressed in units of fixed-length bytes; a byte always being eight bits; words being 32 bits long. What’s more, the S/360 was the first successful platform to achieve compatibility across different processors using microcode, which again is now a standard feature of most computers.

One of the things that the S/360 didn’t do at first, though, was the then-new feature of time-sharing. IBM systems had traditionally taken a batch-oriented approach: operators submitted "jobs" which the machine scheduled itself to run, without user interaction, whenever enough free resources were available.

Time share

In the mid-1960s, though, interactive computing was becoming popular: people were sitting at terminals, typing commands and getting the response immediately, as opposed to a pile of printouts the next day. But back then, a single computer was too expensive to be dedicated to just one person, so ARPA (later DARPA) sponsored "Project MAC", one focus of which was building operating systems that would allow multiple people to use a single machine at once, via dumb terminals.

IBM wanted in on what might be a lucrative new market, so it set up the Cambridge Scientific Centre (CSC) to create a time-sharing version of the S/360. IBM designed a special dual-processor host for the job, the S/360-67, and a time-sharing OS for it, imaginatively named TSS. The snag was, it never worked satisfactorily.

One of the chief problems was that the S/360 didn't include some of the key features necessary for time-sharing, such as support for virtual memory and what was much later called a memory-management unit (MMU). For the PC, this has been no big deal since the Intel '386 appeared in 1985 – a good two decades later.

Mind you, it took until 1993 for Windows NT 3.1 to appear, the first edition of Microsoft's OS properly equipped to exploit these features. Users of SCO Xenix, among other Unices, had been happily multitasking on 386s for about five years by then. Soon after, so had intrepid users of Windows/386 2.1 and later Windows 3 in Enhanced Mode – if they were lucky and it didn't bluescreen on them, anyway.

Multics

But back to the 1960s. MIT, home of Project MAC, turned down IBM's flakey TSS/360, went with a 36-bit General Electric mainframe instead, and developed a time-sharing OS for it called Multics.

Multics memorabilia: a badge captioned "You never outgrow your need for MULTICS"

You might well never have heard of Multics – the last machine running it was shut down in 2000 – but you will have heard of the OS it inspired: Unix.

Unix was conceived as a sort of anti-Multics – "Uni" versus "multi", geddit? Unix was meant to be small and simple, as opposed to the large, complicated Multics. Consider the labyrinthine complexity of modern Unix and ponder what Multics must have been like.

Another famous offspring of Project MAC was the MIT AI Lab, from which sprang Richard Stallman, Emacs, the GNU Project and the Free Software movement. It all worked out in the end, but you might like to reflect for a moment on the rarity of 36-bit hardware or Multics systems today. Project MAC's legacy was not products or technology, but rather a pervasive influence over the future of computing.

When Project MAC went off in its own, non-IBM direction, it left IBM's CSC division with nothing to do. In the hope of survival, CSC decided to press on with a different approach.

It took some lessons from an earlier IBM virtualisation project, the M44/44X, based on the pre-S/360 IBM 7000 series mainframe. The M44/44X was an attempt to implement partial virtualisation.

This was conceptually comparable to the modern open-source Xen hypervisor. On x86 CPUs without hardware virtualisation support, Xen can't trap (ie, catch and safely handle) every instruction in the set, so it requires guest OSs to be modified so that they don't use the instructions Xen can't handle.

Today, this is called paravirtualisation: guests can only use a subset of the features of the host. Back in the early 1960s, IBM's M44 did much the same: it implemented what its developers called a "virtual machine," the 44X, which was just that critical bit simpler than the host.
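To make the distinction concrete, here is a rough sketch in C of how a paravirtualised guest differs from a fully virtualised one when it needs to do something privileged, such as switching page tables. The hypercall interface and operation code below are invented for illustration – they are not Xen's real ABI – but the shape of the idea is the same.

/* A minimal, purely illustrative sketch of paravirtualisation.
 * The hypercall interface and operation code are invented for this example;
 * they are not Xen's real ABI. */
#include <stdio.h>

/* Invented operation code and hypercall entry point, standing in for whatever
 * interface a real hypervisor would expose to a modified guest kernel. */
enum hv_op { HV_SET_PAGE_TABLE = 1 };

static long hypercall(enum hv_op op, unsigned long arg)
{
    /* In a real paravirtualised guest this would transfer control to the
     * hypervisor; here we just log the request. */
    printf("hypercall: op=%d arg=%#lx\n", op, arg);
    return 0;
}

/* Fully virtualised guest: issues the privileged instruction directly and
 * relies on the hypervisor trapping it - exactly what pre-VT-x x86 could not
 * guarantee for every sensitive instruction. (Not called from main(), since
 * it would fault outside ring 0; x86-only inline assembly.) */
static void set_page_table_full_virt(unsigned long new_root)
{
    __asm__ volatile("mov %0, %%cr3" : : "r"(new_root) : "memory");
}

/* Paravirtualised guest: the OS has been modified to ask the host to do the
 * privileged work, so it only ever uses the subset of behaviour the host
 * explicitly offers. */
static void set_page_table_paravirt(unsigned long new_root)
{
    hypercall(HV_SET_PAGE_TABLE, new_root);
}

int main(void)
{
    (void)set_page_table_full_virt;  /* referenced only to silence a warning */
    set_page_table_paravirt(0x1000); /* pretend to switch page tables */
    return 0;
}

The point is simply that the paravirtualised path never issues the privileged instruction at all: the guest has been modified to ask the host instead, which is much the same bargain the 44X struck with the M44.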

Cambridge skunkworks

Robert Creasy

Creasy: Close enough does not count

The M44/44X wasn't much cop. The project leader of CSC, Robert Creasy, observed that the M44/44X "was close enough to a virtual machine system to show that 'close enough' did not count".

But the bit of CSC working on the new project didn't have access to a fancy dual-processor S/360-67.

All they had was a simpler, single-processor S/360-40. So they built an MMU for it, then they built their OS upon the modified machine.

The result was CP-40, a hypervisor that could support up to 14 simultaneous VMs, each one a complete S/360 environment capable of running a complete S/360 OS.

However, at the time, the burden of one complete OS within another was very considerable, so CSC built its own special lighter-weight OS to run as a "client" – today, we'd say "guest" – inside CP-40. The Cambridge Monitor System – CMS – was a simple, single-user OS for interactive use.

Some of its features, although radically different from mainframe OSs of the time, seem obvious now – such as creating files simply by writing to them, and multiple disks that had short, easily-remembered names. (For comparison, think of MS-DOS and Windows' concept of drive letters.)

It was a success. Although it was only a proof-of-concept test system, it worked and worked well.

CSC was given the more powerful S/360-67 machines to play with and rewrote CP-40 to create CP-67, which supported a variable number of VMs, among other improvements. The combined system, known as CP/CMS, was much simpler and more efficient than the abortive all-in-one multiuser timesharing TSS/360.

The customers say yes!

To IBM’s great surprise, customers liked it, a lot. CMS was easy and accessible – for the time – and CP-67 meant that each person seemed to have their own personal S/360 to run it on. As CP/CMS had been what might today be called a "skunkworks" product, IBM even freely gave away the source code, so customers could tweak it. It quickly replaced TSS.

CP/CMS did so well that IBM had to backtrack on its already-planned successor range to the S/360, the S/370, and add an MMU as standard. By 1972, CP/CMS had been renamed VM/370 and it was a mainstream OS for the S/370. Nearly 40 years later, a modern version of the same OS, now called z/VM, is still sold and used today on System z mainframes.

The creation and management of VMs is a native facility of modern zSeries mainframes, although IBM now calls them "logical partitions" or LPARs. For example, the System z9 supports up to 60 LPARs on a single server.

Over the years, the technology has had lots of enhancements: the memory, processor and I/O assignments of LPARs can be changed on the fly, and LPAR-aware OSs and apps will adjust there and then. There’s also a very-high-speed virtual network link, HiperSockets, so that OSs in separate VMs on the same hardware can communicate far faster than they could on a physical network.

Because the hardware was designed with this in mind, performance inside an LPAR is pretty much identical to what it would be on the "bare metal" – so long as no other LPARs are sharing the same resources, of course.

Supported guest OSs include Linux, Solaris and a variety of specialist mainframe OSs such as z/OS, z/VSE and z/TPF. (AIX and i5/OS – what used to be OS/400, the OS of the AS/400 minicomputer – run in LPARs on IBM's POWER servers instead, of which more below.)

And, of course, z/VM if you want it. Run z/VM inside an LPAR and then you can subdivide it further, allowing a single mainframe to host thousands of OS images simultaneously. There's even a special budget version that only allows you to host Linux guests, for an entire farm of webservers inside a single cabinet.

Even though it has changed massively, you have to admit: 40 years is a pretty amazing lifetime for a piece of software by anyone's standards, even IBM's.

Guest appearance

All the main RISC server vendors support some form of VM. IBM's own POWER-based Unix RISC servers also support LPARs. On Oracle's SPARC servers, the technology is called Logical Domains, while Hewlett-Packard's Itanium boxes offer nPARs at the hardware level, vPARs at the software level – possibly inside an nPAR – and full HP Integrity Virtual Machines.

Despite this, though, only the mainframe has responded to one of the key lessons from the dawn of virtualisation: that while the VM should emulate the same hardware as the host, it’s far simpler and more efficient if the host and the guest are not the same OS. The roles are very different and what’s good for a host generally isn’t good for a guest.

Hosts need to manage the system, not apps – they handle storage, I/O and so on, and possibly user accounts, depending on the desired functionality. Since the host provides all this, the guest doesn't need to offer it – and if it does, functionality is duplicated and resources are wasted.

The vendors of big Unix servers have responded to this in a different way: by introducing a form of partial virtualisation within a monolithic operating system. ®

In the next part of this series, we’ll look at "operating-system-level virtualisation".
