Original URL: https://www.theregister.com/2008/12/16/the_year_in_oses/

The Year in Operating Systems: No battle of big ideas

Small change for 2009

By Timothy Prickett Morgan

Posted in OSes, 16th December 2008 18:47 GMT

In a mature IT market, it becomes hard to make any significant changes in hardware architecture or software design without upsetting the installed base of legacy users.

This, of course, makes the evolution of a product somewhat troublesome. Change must fit within the strict confines of compatibility, ensuring both hardware and software vendors do something useful without upsetting the entire apple cart in the data center - or on our desks and in our laps.

To be sure, this is a lot less exciting than having a totally new thing come along, as proprietary minis did in the late 1970s, commercialized Unix did in the mid-1980s, and a decent Windows operating system for desktops and Linux for supercomputers and then regular servers did in the mid-1990s.

These kinds of tectonic shifts are very difficult to imagine in operating systems these days, thanks to the internet where no one particular machine or its operating system is the center of gravity for users and developers.

That is not to say that there isn't a lot of underlying infrastructure in operating systems that can be - and must be - improved. Just to take one example, the advent and mainstreaming of virtual machine hypervisors for Linux and Windows boxes in recent years is all about gaining efficiencies in the data center.

Hypervisors allow for sophisticated, flexible, and efficient distributed computing by cramming many virtual machines and their workloads onto a single physical server. They don't, though, change the nature of computing all that much.
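To make that consolidation point concrete, here is a minimal sketch of how a management tool might inventory the guests packed onto one physical host. It assumes the libvirt Python bindings and a local KVM or Xen hypervisor; the connection URI and the output format are illustrative, not anything a particular vendor ships.

    # Minimal sketch: list the running guests on one host and report how much
    # of the host's CPU and memory each one has been handed, via libvirt.
    import libvirt

    conn = libvirt.open("qemu:///system")  # local hypervisor connection (assumed URI)

    for dom_id in conn.listDomainsID():    # IDs of the running guests
        dom = conn.lookupByID(dom_id)
        state, max_mem_kb, mem_kb, vcpus, cpu_time_ns = dom.info()
        print(f"{dom.name()}: {vcpus} vCPUs, {mem_kb // 1024} MB RAM")

    conn.close()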

Similarly, network connectivity for servers, desktops, laptops, and other devices is a key attribute of any operating system these days. A lot of work has gone into making wireless and other network connectivity easier for personal devices, and server operating systems have been tweaked with improved networking stacks to take advantage of the fastest network gear the industry can deliver.

If there is one prevailing thing that all kinds of end users desire, whether they are in the data center, in cubicles, or at home, it is an operating system that is easier to use. And operating system makers - be they commercial entities or open source software projects - are all trying to deliver that with better user interfaces, more graphical tools, and automation wherever possible.

Think of how much easier it is to link to a wireless network in Linux today than it was only a few years ago, just as an example.

Given where we are in terms of market maturity and the work that remains to be done, it's worth looking back at 2008 to measure what really changed in the world of operating systems. Also, it pays to look ahead at what vendors have lined up for us in the coming twelve months.

Windows: virtually finished

One of the key launches this year was the Longhorn edition of Windows for servers, now known as Windows Server 2008. This shipped in February after years of delays, not to mention after much of the guts that were supposed to be in Longhorn had been rather roughly removed by Microsoft's managers.

Windows Longhorn was first conceived in 2001 as a minor update to the Windows XP client operating system, a stop-gap that was supposed to ship in 2002. Late in 2002, as the company struggled to get Windows Server 2003 (then known as Windows .NET Server 2003) out the door, Microsoft decided it would develop a server version of Longhorn as well, an interim release pencilled in for perhaps 2005.

By the summer of 2004, Longhorn got pushed to 2006, and to keep to that development schedule, Microsoft said it would have to cut its Windows File System (WinFS) out of the operating system. As we now know, even cutting WinFS didn't help, and by 2005 Longhorn was pushed to 2007.

Next came difficulties with the "Viridian" hypervisor Microsoft was creating for Windows, which we now know as Hyper-V, and that pushed Longhorn from the end of 2007 to early 2008. And then Hyper-V didn't get released to manufacturing until July - and that was without key features, such as live migration of virtual machines and System Center Virtual Machine Manager, the tool that, as the name suggests, manages virtual machines and their resource allocation.

Had Longhorn Server come out as conceived, with a sophisticated, embedded, and relational data storage file system as well as a virtualization hypervisor, it would be easy to argue that Windows Server 2008 was the most important server operating system launch this year.

It is ironic, though, that IBM's System/38 minicomputer - launched 30 years ago - had a native, embedded relational data store for all files and data, an approach that was further perfected in the 1988 launch of the AS/400.

IBM, being forward thinking, ripped this database out of the operating system and propped it up atop the OS/2 High Performance file system in a reworking of OS/400 in 1995 - basically moving in the reverse direction of what Microsoft was attempting. That made the AS/400 more complex to use, but better able to deal with ASCII files than its EBCDIC database could manage.
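As an aside on that EBCDIC point, a tiny illustration - in Python, purely for demonstration and nothing to do with OS/400 itself - shows how far apart the two encodings sit, using cp037, one common EBCDIC code page:

    # The same text in ASCII and in EBCDIC (code page 037): the byte values
    # don't line up, which is why an EBCDIC-native database needs translation
    # in the middle before it can cope with ASCII files.
    text = "HELLO"
    print(text.encode("ascii").hex())   # 48454c4c4f
    print(text.encode("cp037").hex())   # c8c5d3d3d6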

Maybe next time Microsoft can set up a WinFS development lab in Minnesota and hire some ex-IBMers who actually know how to do this right?

And of course, Windows Vista Service Pack 1 came out at the same time as Longhorn, with a kernel that had been merged with the server variant - simplifying Microsoft's software development efforts, but giving users some headaches because of bugs in SP1.

Vista itself was pretty annoying in that it really required more computing resources than Microsoft said it needed, and it was no surprise at all that plenty of users were willing to stay with Windows XP. Or maybe even go totally nuts and contemplate using Linux - particularly on a new machine, like a netbook. SP2 for Vista, which is in beta now, is probably not going to change how many people feel about Vista.

March of the penguins

Speaking of Linux, the operating system must be maturing, because its development cycle is starting to have the predictable cadence of a Unix platform from the 1990s. While there are dozens of different versions of Linux, the three that mattered most for servers were still largely under the control of Red Hat, Novell, and Canonical - namely Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu.

We can argue whether or not Linux mattered all that much on desktops this year. I think netbooks proved there is a place for Linux where users can see it, and there were certainly plenty of Linux-based phones and other hand-held devices - where you don't really see Linux at all - that were eagerly snapped up.

In the case of Red Hat and Novell, there are development versions of the code, managed by the respective Fedora and openSUSE projects. This code is grabbed, hardened, and tested for application compatibility and then distributed as RHEL or SLES. So what did they deliver?

RHEL 5.2 shipped in May, and the beta of RHEL 5.3 was released at the end of October, possibly for shipment sometime around January 2009. RHEL 5.2, based on the Linux 2.6.18 kernel, doubled the core count supported in a server to 64 cores and pushed main memory support up to 512GB in a single image.

The Xen hypervisor inside RHEL 5.2 was updated to be able to see the NUMA-style motherboard clustering used in big iron servers, which allows a virtual machine to span more than a single processor socket. Red Hat had added support for Advanced Micro Devices' quad-core Opterons, as well as Intel's still-future quad-core processors codenamed Nehalem (due maybe in late March 2009), way back in RHEL 4.6.

Interestingly, Red Hat's implementation of the Xen hypervisor embedded inside RHEL ran on Intel's Itanium processors as well as x64 iron. The company is in the midst of shifting from Xen to KVM for virtualization. The latest Fedora 10 delivered KVM as the default hypervisor, with Xen as an option. It won't be until RHEL 6 is launched later in 2009, though, that KVM is the default on the commercial versions of Red Hat's Linux.

RHEL 5.3 will support up to 126 processor cores in the hypervisor (not 128, unless the release notes have a typo) and up to 1TB of main memory. The update will also sport "extended support" for Intel's six-core Dunnington family of Xeons and the future Nehalems, plus a technology preview of the ext4 file system, iSCSI boot, and 32-bit paravirtualized operating system guests on 64-bit x64 hosts.

Novell delivered SP2 for its SLES 10 server and its SLED 10 desktop variants in May, and as in the past, the company tried to get the jump on Red Hat in terms of the level of the Xen hypervisor it embedded in its Linux variant.

Thanks to its partnership with Microsoft, Novell was also bragging that SLES 10 SP2 was the only Xen-based hypervisor with support for Windows Server 2003 and Windows Server 2008 as a guest. And now that Hyper-V is available, Novell also claimed it was the only implementation of Xen that offered "bi-directional compatibility" between Linux and Windows for guest operating systems. Presumably that meant live migration back and forth between those two platforms of those guests.

Novell has been mum about endorsing KVM as a hypervisor, but since KVM has been mainstreamed into the Linux kernel, such an endorsement is not a big deal technically. Microsoft had a partnership with XenSource (now a part of Citrix Systems) to create Hyper-V and ensure its compatibility with Xen, and it is not clear what might be involved to get SLES and Hyper-V to be compatible with KVM.

Clearly, if KVM takes off, all of the work that Novell and Microsoft have done to be Xen friendly will have gone largely to waste.

The march continues

Only two weeks ago, Novell's hybrid NetWare-Linux operating system, Open Enterprise Server, was updated with the SP2 patch. Looking ahead, SLES 11 is expected sometime in the first half of 2009, presumably based largely on the openSUSE update rumored to be coming out later this week.

Novell has not said much about SLES 11, except that it will use the Linux 2.6.27 kernel and the Xen 3.3 hypervisor. SLES 11 is expected to have the OpenAIS cluster communication protocol for server and storage clustering as an alternative to the Oracle Cluster File System 2 already in SLES 10. Also planned is the OpenFabrics Enterprise Distribution (OFED) software stack, which will provide open-source drivers for Ethernet and InfiniBand networks that implement the Remote Direct Memory Access (RDMA) protocol for more efficient communication between machines.

Furthermore, SLES 11 will have support for the Distributed Replicated Block Device (DRBD). This is like RAID 1 mirroring for storage devices at the network abstraction level, instead of at the array level down inside the server or storage system.

Canonical's Ubuntu was updated in late October with the 8.10 release, or Intrepid Ibex. Ubuntu is aimed at developers and other Linux enthusiasts, as well as end users looking for an alternative to Windows, so this release focused on a problem area for Linux: WiFi and 3G network connectivity.

Ubuntu 8.10 was based on the Linux 2.6.27 kernel and was one of Canonical's standard development releases, not a long-term support release. Canonical spins desktop and server editions of Ubuntu, and the server edition of 8.10 recommended using KVM for what the company calls "single host server virtualization."

For data center-class virtualization, Canonical recommended using VMware's ESX Server or Citrix Systems' XenServer, which have snapshotting, live migration, and other high availability features that KVM does not yet have.

Canonical has been mum about what it will do to support Hyper-V thus far, other than to say that it is not currently in the plans.

Ubuntu 9.04, the Jaunty Jackalope release, is expected around April 2009 and the first alpha release came out at the end of November. Boot and overall system performance improvements are at the top of the list of features developers are putting together for this release.

Interestingly, Ubuntu 9.04's desktop variant is expected to run on ARM-based netbooks and to have a bunch of power management features to keep networks and processors from burning so much juice.

Unix break out, or bottled up?

Unix is essentially - and embarrassingly - dead on the desktop, so Unix operating system makers have the luxury of only having to worry about servers. Of course, if IBM and Hewlett-Packard had ported their respective AIX and HP-UX Unixes to x64 iron and then kept pace with x64 and related graphics enhancements, their workstation businesses might not have died off.

Then again, Sun Microsystems still claims to be a workstation vendor and has re-embraced x64 processors, yet it doesn't really have much of a workstation biz. This is one of those cases where Windows and Linux just seem to win.

IBM's AIX saw very little development action this year, with version 6.1 having been delivered in November 2007 - a few months after the initial Power6-based servers hit the market. AIX 6.1 featured tweaks to take advantage of the Power6 iron, including the new decimal and AltiVec math units on the chip, and also had a substantially reworked hypervisor, now called PowerVM. It has had many names.

In September this year, IBM created an Enterprise Edition of AIX 6.1, which included what used to be an add-on to provide workload partitions (WPARs) in addition to logical partitions (LPARs). WPARs are akin to virtual private servers, which have a shared kernel and file system but which look like separate AIX instances as far as system admins and applications are concerned. WPARs are similar to Sun's containers for Solaris. LPARs are akin to virtual machine partitions, and they run whole AIX instances with their own kernels and file systems.

AIX 6.1 Enterprise Edition also included a tool called Workload Partition Manager, which allows workloads to be live migrated around AIX boxes on a network, and a bunch of Tivoli provisioning tools. Basically, if you get Enterprise Edition, the pricing works out that IBM is giving away the Tivoli tools for free.

IBM has been mum about future AIX development, except to admit that AIX 6.2 and AIX 7 are coming down the pike.

The main changes that HP made this year have to do with packaging as well. In April, HP created four different packages of HP-UX 11i v3 concurrent with the first update of that operating system, which was initially launched (and late, I might add) in November 2007.

Now there is a base edition, a virtual server edition, a high availability edition, and a data center edition that includes the whole shebang. Basically, HP stripped out its nPar and vPar virtualization into a distinct edition and made it possible for customers to do high availability clustering without having to take everything in the stack.

The updates to HP-UX 11i v3 this year also allowed PA-RISC and Itanium machines to host earlier 11i v2 instances inside vPar partitions. Up until now, vPars had to have all operating systems at the same level on the box. HP-UX 11i v3 already supports the forthcoming quad-core Itanium processors codenamed Tukwila, so HP won't have to add that support in a future update.

(Open)Solaris

The other big Unix, of course, is Sun's Solaris and OpenSolaris pairing. And the big news for Sun was, of course, getting the first and then a second release of OpenSolaris out the door based on the code created and maintained by the OpenSolaris community.

Unlike IBM and HP, Sun has open sourced its Solaris Unix and is trying to emulate the development and support style of the Linux community. To be specific, OpenSolaris is most like Canonical's Ubuntu in that customers can buy tech support for the development as well as the commercial releases. Canonical, incidentally, makes no distinction at all, and it is likely Sun will just have something called OpenSolaris 2009 at some point and be done with it.

Like the Linux distros, Sun is trying to make it easier for companies to spin their stacks of the operating system, middleware, and application software and then distribute these across the servers in their networks.

Sun's business plan is to make Solaris as easy, open, and modern - meaning updated on a six-month cycle - as a development Linux with the full support you can get from a commercial Unix or Linux distro.

While Sun distributed millions of freebie copies of Solaris 10 and now OpenSolaris, it remains to be seen whether the change from a closed-source, fee-based operating system to an open-source one paid for through support contracts will pan out. It is not clear whether the change will not only protect the $1 billion-plus in Solaris licensing and support revenue, but also extend it and - here's the kicker - extend it profitably.

On the technology front, Solaris (whether it is open or compiled) is still well-regarded, and Sun tried to make lots of different kinds of hay out of the pairing of Solaris with its Zettabyte File System (ZFS).

ZFS has been given root file system status in Solaris now - an indication that Sun believes ZFS is absolutely ready for prime time - and Sun plus a handful of other storage makers spent their time creating solutions that pair Solaris and ZFS to make storage devices. This business, though, does not yet contribute enough in sales to make up for declining prices on midrange and large Unix iron and a shift by some of Sun's customers away from Sparc to x64 processors from Intel and AMD.

Given its customer base, its technology portfolio, and the nature of the systems market today, it is hard to imagine Sun coming up with a better strategy than it did.

The kinds of things Sun is doing with its operating system - supporting x64 iron enthusiastically, using a Linux-style development, distro, and support model, and such - are the correct things. However, Sun needed to do these things a decade ago as the Linux genie was coming out of the bottle. Linux doesn't fit into that bottle any more, and the real question you have to ask is this: will Linux stuff Solaris into that bottle?

Duck and cover

There are, of course, other operating systems in corporate computing that still matter, but they do not just move at a glacial pace. They seem to be frozen in time.

IBM's mainframe operating systems - z/OS, VSE, and VM - as well as its midrange proprietary operating system - i 6.1, formerly known as i5/OS and before that, OS/400 - still drive billions of dollars in hardware sales for the company, and in the case of the mainframe operating systems, drive billions of dollars in annual licensing fees for those operating systems and their related middleware stacks.

These are great businesses for IBM to support, delivering sales and profits, but innovation comes at a slow pace. In fact, most of the innovation that these IBM operating systems have seen over recent years was to tweak them to run on more scalable hardware (more processor sockets and cores), to support 64-bit memory (which the AS/400 got back in 1995, the mainframe in 2001), or to be propped up on hypervisors that allowed alternative operating systems to run beside them on the same iron.

Ditto for HP's OpenVMS and NonStop platforms, which have been ported from their respective Alpha and MIPS processors to HP's Integrity servers.

But actual change deep inside the operating system, aside from new hardware and scalability enhancements, is not really coming at a fast pace. When you are running legacy applications on legacy operating systems, this is what customers want: as little change as possible.

This was probably something Microsoft began to realize this year, as it tried to move beyond Windows XP to Windows Vista and started looking ahead to Windows Vista's successor: Windows 7. ®