Creasy: Close enough does not count
The M44/44X wasn't much cop. CSC's project leader, Robert Creasy, observed that it "was close enough to a virtual machine system to show that 'close enough' did not count".
But the bit of CSC working on the new project didn't have access to a fancy dual-processor S/360-67.
All they had was a simpler, single-processor S/360-40. So they built an MMU for it, then they built their OS upon the modified machine.
The result was CP-40, a hypervisor that could support up to 14 simultaneous VMs, each one a complete S/360 environment capable of running a complete S/360 OS.
However, at the time, the burden of one complete OS within another was very considerable, so CSC built its own special lighter-weight OS to run as a "client" – today, we'd say "guest" – inside CP-40. The Cambridge Monitor System – CMS – was a simple, single-user OS for interactive use.
Some of its features, although radically different from those of mainframe OSs of the time, seem obvious now – such as creating files simply by writing to them, and multiple disks with short, easily remembered names. (For comparison, think of MS-DOS and Windows' concept of drive letters.)
It was a success. Although it was only a proof-of-concept test system, it worked and worked well.
CSC was given the more powerful S/360-67 machines to play with and rewrote CP-40 to create CP-67, which supported a variable number of VMs and other improvements. The combined system, known as CP/CMS, was much simpler and more efficient than the abortive all-in-one multiuser timesharing TSS/360.
The customers say yes!
To IBM’s great surprise, customers liked it – a lot. CMS was easy and accessible – for the time – and CP-67 meant that each person seemed to have their own personal S/360 to run it on. As CP/CMS had been what might today be called a "skunkworks" product, IBM even freely gave away the source code, so customers could tweak it. It quickly replaced TSS.
CP/CMS did so well that IBM had to backtrack on its already-planned successor range to the S/360, the S/370, and add an MMU as standard. By 1972, CP/CMS had been renamed VM/370 and it was the mainstream OS for the S/370. Nearly 40 years later, a modern version of the same OS, now called z/VM, is still sold and used today on System Z mainframes.
The creation and management of VMs is a native facility of modern zSeries mainframes, although IBM now calls them "logical partitions" or LPARs. For example, the System z9 supports up to 60 LPARs on a single server.
Over the years, the technology has had lots of enhancements: the memory, processor and I/O assignments of LPARs can be changed on the fly, and LPAR-aware OSs and apps will adjust there and then. There’s also a very-high-speed virtual network link, HiperSockets, so that OSs in separate VMs on the same hardware can communicate far faster than they could on a physical network.
Because the hardware was designed with this in mind, performance inside an LPAR is pretty much identical to what it would be on the "bare metal" – so long as no other LPARs are sharing the same resources, of course.
Supported guest OSes include Linux, AIX, Solaris, i5/OS (what used to be OS/400, the OS of the AS/400 minicomputer) and a variety of specialist mainframe OSs such as z/OS, z/VSE and z/TPF.
And, of course, z/VM if you want it. Run z/VM inside an LPAR and then you can subdivide it further, allowing a single mainframe to host thousands of OS images simultaneously. There's even a special budget version that only allows you to host Linux guests, for an entire farm of webservers inside a single cabinet.
Even though it has changed massively, you have to admit: 40 years is a pretty amazing lifetime for a piece of software by anyone's standards, even IBM's.
All the main RISC server vendors support some form of VMs. IBM's own POWER-based Unix RISC servers also support LPARs. On Oracle's SPARC servers, the technology is called Logical Domains, while Hewlett-Packard's Itanium boxes offer partitioning at several levels: nPars at the hardware level, vPars at the software level, and HP Integrity Virtual Machines, which can themselves run inside a vPar.
Despite this, though, only the mainframe has responded to one of the key lessons from the dawn of virtualisation: that while the VM should emulate the same hardware as the host, it’s far simpler and more efficient if the host and the guest are not the same OS. The roles are very different and what’s good for a host generally isn’t good for a guest.
Hosts need to manage the system, not apps – they manage storage, I/O and so on, and possibly user accounts, depending on the desired functionality. Since the host provides all this, the guest doesn't need to offer it – and if it does, that functionality is duplicated and resources are wasted.
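The cost of that duplication is easy to sketch with a toy calculation. The memory figures below are purely hypothetical, chosen only to illustrate the principle that lightweight guests relying on the host – as CMS relied on CP – save resources compared with running full, self-contained OSs as guests:

```python
# Toy illustration (hypothetical figures, not from the article):
# memory cost of N guests that each duplicate the host's services,
# versus N lightweight guests that lean on the host instead.

FULL_OS_MB = 512      # assumed footprint of a full, self-contained guest OS
LIGHT_GUEST_MB = 64   # assumed footprint of a guest stripped of host duties
GUESTS = 14           # CP-40's limit of simultaneous VMs

full = GUESTS * FULL_OS_MB
light = GUESTS * LIGHT_GUEST_MB

print(f"Full guests:  {full} MB")
print(f"Light guests: {light} MB")
print(f"Saved:        {full - light} MB")
```

With these made-up numbers, fourteen full guests cost 7,168 MB against 896 MB for fourteen lightweight ones – the duplication tax grows linearly with every guest you add.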
The vendors of big Unix servers have responded to this in a different way: by introducing a form of partial virtualisation within a monolithic operating system. ®
In the next part of this series, we’ll look at "operating-system-level virtualisation".
Lies, the BSOD existed well before XP, in both the DOS and NT lineages.
IPL your own S/360 guest on VM... ahh happy days!
Thanks for this trip down memory lane!
My first job as a graduate programmer in 1990 was writing code for IBM's NetView network management product, and to test our code we needed our "own" NetView system.
So – fire up some JCL to IPL an OS/360 guest on top of VM. Job done.
So 20 years later, when I send the guys in my teams on VMware training courses and they come back all fired up on virtualisation nirvana, I have to chuckle and remember that there is nothing new under the sun :-)
Re: The HARDWARE is the problem
Er, I think you've just specified a mainframe, but one crippled with the x86 baggage.....