When is an operating system not an operating system?

Challenging traditional ideas

Reader workshop It used to be the case that the role of the operating system (OS) was pretty well defined as a layer of software responsible for controlling the use of and access to physical machine assets such as CPU, memory, disk, network, and so on.

As the industry has evolved, however, so too has the role of the OS. Today, for example, when you install an OS, a whole range of high-level features, functions and tools often come along with it, from enhanced security and access, through various management and admin tools, to full-blown application and web serving.

While this much more comprehensive and coherent approach to delivering platform capability has made many aspects of the lives of IT professionals much easier, the gradual 'raising of the water line' in terms of what's included in the OS creates some interesting discussions when we move into the world of virtualization.

There are a number of dimensions to this.

Firstly, there is the question of efficiency. One of the big advantages of virtualization is being able to run multiple workloads on the same box, with each supported by an appropriately configured software stack running in a discrete virtual machine (VM). If each VM is required to run a general-purpose OS, even though it is essentially single-purpose in nature, that arguably represents unnecessary complexity and overhead that needs to be resourced and managed.

Using 'leaner' versions of operating systems, which is now a possibility with both Linux and Windows Server, for example, supports the notion of building simpler and more efficient stacks when the job at hand is very specific. The counter-argument to this, however, is that consistency has its advantages, and that implementing too many OS variants creates a different set of complexities and management issues. Provided unused functionality is not consuming an excessive amount of resource, perhaps it's better to live with it.

That is, of course, not the whole picture: unnecessary services sitting there idling can also increase the operating system's attack surface, so there is clearly a balance to be struck between stripping the OS down and simply configuring it carefully.
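To make that trade-off concrete, a quick inventory of what a guest is actually running is often a useful first step. The short sketch below is purely illustrative (the helper name running_services is ours): it assumes a systemd-based Linux guest and simply lists the services that are live, which gives a rough sense of how much of a general-purpose OS the workload really exercises, and how much is just sitting there as potential attack surface.

    # Illustrative sketch only: list the services currently running in a
    # systemd-based Linux guest, as a rough proxy for how much of the OS
    # the workload actually needs. Assumes 'systemctl' is available and
    # the script has permission to query it.
    import subprocess

    def running_services():
        # '--no-legend' drops the header/footer so each line is one unit.
        out = subprocess.run(
            ["systemctl", "list-units", "--type=service",
             "--state=running", "--no-legend"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line.split()[0] for line in out.splitlines() if line.strip()]

    if __name__ == "__main__":
        services = running_services()
        print(f"{len(services)} services running")
        for name in sorted(services):
            print(" ", name)

Anything in that list the workload does not need is a candidate either for removal in a leaner image or, at the very least, for being disabled.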

The efficiency argument also comes into play when considering the way in which hypervisors are implemented. Intuitively, running a stand-alone hypervisor on 'bare metal', i.e. directly on the hardware, would seem to be the best option from a performance perspective. Some argue, however, that there is little or no practical difference in performance between this and having the hypervisor sitting on top of (or embedded within) a host operating system.

But again we need to consider the management dimension. Bare-metal hypervisors represent independent entities in the infrastructure that need to be managed as such, which is why some recommend dedicated management tools for the virtualized environment. Hosted hypervisors, by contrast, can often be managed via the operating system upon (or within) which they sit, allowing at least a basic level of management through the tools and processes already in use, with further capability coming from extending rather than duplicating management solutions.
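As a simple illustration of that point, consider KVM on Linux, a hosted arrangement in which guests are exposed through libvirt and can therefore be inspected with ordinary host-side tooling. The sketch below is a minimal example, assuming the libvirt Python bindings are installed and a local qemu:///system daemon is running; it does nothing more than list the defined guests and their state from the host OS.

    # Minimal sketch: enumerate KVM guests from the host operating system
    # via libvirt, illustrating that a hosted hypervisor can be reached
    # with the same OS-level tooling used for everything else.
    # Assumes the libvirt Python bindings and a local qemu:///system daemon.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name():30} {state}")
    finally:
        conn.close()

A dedicated virtualization management suite obviously goes far beyond this, but the point stands: with a hosted hypervisor, basic visibility comes almost for free from the host OS.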

Unfortunately, there are no black-and-white answers to any of the above discussions, and the choices people make often come down to context, familiarity and philosophy.

If you are a smaller IT shop manned predominantly by multi-functional staff, then the embedded or hosted route might make sense because it is relatively straightforward and more likely to fit with what you are doing already. If you are lucky enough to have a lot of specialist resource, as is typical of larger enterprise IT environments, and areas of your infrastructure that are totally virtualized, then a finely tuned bare-metal approach with dedicated management tools might be more appropriate.

Even this is a generalisation though, as options can be mixed and matched, sometimes easily, sometimes less so, based on specific need.

With this in mind, we would be interested in what you, the readers, think on this topic. What, in your experience, are the pros and cons of bare-metal versus hosted hypervisors? What are the performance and management implications, for example? And have you developed a philosophy that is being used as the basis for your virtualization investments and initiatives?

Coming back to where we started, perhaps you even regard the bare-metal hypervisor as the operating system of the future? And to throw one last idea into the mix, do you see a role for so-called 'application virtualization', whereby applications are captured in a container that plugs directly onto a hypervisor?

Let us know what you think in the comments section below.
