A brief history of virtualisation

Virtualisation is not a novelty. It's actually one of the last pieces of the design of 1960s computers to trickle down to the PC – and only by understanding where it came from and how it was and is used can you begin to see the shape of its future in its PC incarnation.

As described in our first article in this series, current PC virtualisation means either hardware-assisted (Hyper-V, Xen etc) or all-software (VMware) full-system virtualisation.

Full-system virtualisation means a full-fat server OS running multiple virtual machines, each a complete emulated PC with emulated chipset and emulated disk drives, running a complete full-fat server or client OS. What the mainstream – that is, Windows-using – world seems to have forgotten, if it ever knew at all, is that there are other ways to crack the virtualisation nut, with their own unique benefits.

Virtualisation got really big, really quickly on the PC in three stages. Firstly, VMware showed that it could be done, in defiance of Popek and Goldberg's formal requirements for a virtualisable processor – requirements the x86 didn't meet.

Secondly, this caught on to the extent that Intel and AMD added hardware virtualisation to their processors. Thirdly, the rise of multi-core 64-bit machines, with many CPU cores and threads and umpteen gigs of RAM: resources that existing 32-bit OSs and apps can't use effectively, but which virtualisation devours with relish.

PC virtualisation is not ready for the big time just yet

Currently, however, the PC's full-system virtualisation is just about the simplest, most primitive and inefficient kind. When you look at the fancy tools that VMware and Microsoft are creating to provision and manage VMs – and the large-scale rollouts that are starting to occur – it's easy to forget that this is not a mature technology. In fact, PC virtualisation is still in its youth, and the fact that it is starting to show a few hairs on its chin doesn't mean that it is ready for the big time just yet.

Before you can understand how far it has yet to go, though, you need to know a bit of the background. And there's more of it than you might expect.

Before the PC: IBM invents virtualisation

Of course, there is nothing new under the Sun. (Or should that be under the Oracle, these days?) The arrival of ubiquitous virtualisation on the PC could be seen to deliver one of the last pieces of the set of features delivered by IBM’s System/360 computers of the 1960s.

An original member of the System/360 family announced in 1964, the Model 50 was the most powerful unit in the medium price range.

IBM System/360: Hot new tech from the 1960s

Launched in 1964, the S/360 was intended from the start to be a whole range of compatible computers, stretching from relatively small, inexpensive machines to large, high-capacity ones. The S/360 took a radical new approach: all would run the same software, so that programs could be moved from one machine to another without modification – a bold innovation at the time.

Some of the exotic new features of the S/360 might sound familiar: memory addressed in units of fixed-length bytes; a byte always being eight bits; words being 32 bits long. What’s more, the S/360 was the first successful platform to achieve compatibility across different processors using microcode, which again is now a standard feature of most computers.

One of the things that the S/360 didn’t do at first, though, was the then-new feature of time-sharing. IBM systems had traditionally taken a batch-oriented approach: operators submitted "jobs" which the machine scheduled itself to run, without user interaction, whenever enough free resources were available.

Time share

In the mid-1960s, though, interactive computing was becoming popular: people were sitting at terminals, typing commands and getting the response immediately, as opposed to a pile of printouts the next day. But back then, a single computer was too expensive to be dedicated to just one person, so DARPA sponsored "Project MAC," one focus of which was building operating systems that would allow multiple people to use a single machine at once, via dumb terminals.

IBM wanted in on what might be a lucrative new market, so it set up the Cambridge Scientific Centre (CSC) to create a time-sharing version of the S/360. IBM designed a special dual-processor host for the job, the S/360-67, and CSC built a time-sharing OS for it, imaginatively named TSS. The snag was that it never worked satisfactorily.

One of the chief problems was that the S/360 didn't include some of the key features necessary for time-sharing, such as support for virtual memory and what was much later called a memory-management unit (MMU). For the PC, this has been no big deal since the Intel '386 appeared in 1985 – a good two decades later.

Mind you, it took until 1993 for Windows NT 3.1 to appear, the first edition of Microsoft's OS properly equipped to exploit these features. Users of SCO Xenix, among other Unices, had been happily multitasking with 386s for about five years by then. Soon after, so had intrepid users of Windows/386 2.1 and later Windows 3 in Enhanced Mode – if they were lucky and it didn't bluescreen on them, anyway.

But back to the 1960s. MIT, home of Project MAC, turned down IBM's flaky TSS/360 and went with a 36-bit General Electric mainframe instead, on which it developed a time-sharing OS called Multics.

Multics memorabilia: a badge captioned "You never outgrow your need for MULTICS"

You might well never have heard of Multics – the last machine running it was shut down in 2000 – but you will have heard of the OS it inspired: Unix.

Unix was conceived as a sort of anti-Multics – "Uni" versus "Multi", geddit? Unix was meant to be small and simple, as opposed to the large, complicated Multics. Consider the labyrinthine complexity of modern Unix and ponder what Multics must have been like.

Another famous offspring of Project MAC was the MIT AI Lab, from which sprang Richard Stallman, Emacs, the GNU Project and the Free Software movement. It all worked out in the end, but you might like to reflect for a moment on the rarity of 36-bit hardware or Multics systems today. Project MAC's legacy was not products or technology, but rather a pervasive influence over the future of computing.

When Project MAC went off in its own, non-IBM direction, it left IBM's CSC division with nothing to do. Hoping to survive, CSC decided to press on with a different approach.

It took some lessons from an earlier IBM virtualisation project, the M44/44X, based on the pre-S/360 IBM 7044 mainframe – hence the name. The M44/44X was an attempt to implement partial virtualisation.

This was conceptually comparable to the modern open-source Xen hypervisor. On x86 CPUs without hardware virtualisation, Xen can't trap (ie, catch and safely emulate) every privileged instruction, so it requires guest OSs to be modified so that they don't use the instructions Xen can't handle.

Today, this is called paravirtualisation: guests can only use a subset of the features of the host. Back in the early 1960s, IBM's M44 did much the same: it implemented what its developers called a "virtual machine," the 44X, which was just that critical bit simpler than the host.
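The distinction can be sketched in a few lines of toy code. This is a simulation only – the instruction names and functions below are illustrative, not real Xen interfaces or a complete list of the x86's problem instructions:

```python
# Toy model of full virtualisation vs paravirtualisation.
# On pre-VT x86, a handful of privileged instructions do not fault when run
# in user mode -- they silently misbehave -- so a hypervisor cannot catch and
# emulate them. A paravirtualised guest is modified at source level to replace
# them with explicit calls into the hypervisor ("hypercalls").

TRAPPABLE = {"int", "hlt"}    # instructions that fault, so can be emulated
SILENT = {"popf", "sgdt"}     # instructions that fail without trapping

def run_unmodified_guest(instr: str) -> str:
    """Full virtualisation relies on every privileged instruction trapping."""
    if instr in TRAPPABLE:
        return f"trap -> hypervisor emulates {instr}"
    if instr in SILENT:
        return f"{instr} ran with wrong semantics -- guest state corrupted"
    return f"{instr} ran natively"

def run_paravirt_guest(instr: str) -> str:
    """A paravirtualised guest never issues the problem instructions:
    its kernel source is modified to use hypercalls instead."""
    if instr in SILENT:
        return f"hypercall({instr}) -> hypervisor performs it safely"
    return run_unmodified_guest(instr)
```

The price of this safety is that the guest OS must be ported to the hypervisor's restricted interface – exactly the trade-off IBM's M44/44X made four decades earlier.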
