Virtualisation, the Linux way

De facto standard

IBM, Hewlett-Packard and Sun Microsystems, among others, are creating an imperative. Their infrastructure initiatives, entitled respectively On Demand, Adaptive Enterprise and N1, are all quite similar and all aimed at the idea of virtualising the hardware layer, writes Robin Bloor of Bloor Research.

The primary reason for wanting to virtualise hardware is this: in the last five years or so companies have been buying servers in an ad hoc manner, tending to deploy them on a one-server-per-application basis.

Consequently, they have assembled server farms which turn out to have an average hardware utilisation of about 20 per cent. This is, of course, a waste of money and, in the long run, a management headache. However, there are other imperatives, particularly the idea of being able to provide infrastructure as a service, dynamically: you pay for what you use and you get what you need when you need it.
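The arithmetic behind consolidation is straightforward. A hypothetical sketch (all figures illustrative, assuming equally sized servers and evenly spreadable load):

```python
import math

def consolidated_server_count(servers, avg_utilisation, target_utilisation):
    """Estimate how many servers are needed after consolidation, given the
    current average utilisation and a target utilisation ceiling."""
    total_load = servers * avg_utilisation      # aggregate work, in 'whole server' units
    return math.ceil(total_load / target_utilisation)

# A farm of 100 servers idling at 20 per cent utilisation, consolidated
# onto virtualised hardware run at a 70 per cent ceiling:
print(consolidated_server_count(100, 0.20, 0.70))  # -> 29
```

Even on these rough assumptions, roughly 70 of the 100 boxes are surplus, which is the money IBM, HP and Sun are pointing at.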

So companies, especially large companies, are very receptive to the idea of a corporate computing resource that is both managed and efficient - which is what IBM, HP and Sun are talking about. However, if you talk the talk you are also going to have to walk the walk, and right now what can be delivered doesn't amount to wall-to-wall virtualisation - or anything like it.

So the question is: how is it ever going to be delivered, given legacy systems, existing server farms and the enormous difficulty involved in relocating applications in a heterogeneous network?

Blade technology, grid computing, automatic provisioning, SANs, NAS and so forth will play a part in this, but for it to work, and work well, it will require a standard OS - and there is only one candidate - Linux.

The easiest way to see the need for a standard OS is to consider why and how TCP/IP became a standard. It didn't happen because it was the best option or because it was purpose designed to run a world-wide network with hundreds of millions of nodes (it wasn't). It happened because it was the only reasonable choice at the time. The same is now true of Linux as regards hardware virtualisation. Irrespective of its other qualities, it is the only one that fits the bill.

It qualifies because it spans so many platforms - from small devices up to IBM's zSeries mainframe. It also qualifies because, like TCP/IP, it doesn't actually belong to anyone. It runs on most chips and is rapidly becoming the developer platform of choice. So the idea is starting to emerge that you virtualise storage by the use of SANs and NAS and you virtualise server hardware by the use of Linux - thus making it feasible to switch applications from one server to another automatically, and quickly. Within this capability you can cater for failover and make highly efficient use of resources.
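What "switching applications from one server to another" amounts to, at its simplest, is a health check followed by a restart on a healthy node - something that only works cleanly when every node runs the same OS image. A minimal sketch, in which the node names, port and functions are all hypothetical (real cluster managers of the Linux-HA variety do this far more robustly):

```python
import socket

NODES = ["app-node-1", "app-node-2"]    # hypothetical identical Linux hosts
SERVICE_PORT = 8080                     # hypothetical application port

def node_is_healthy(host, port, timeout=2.0):
    """Crude health check: can we open a TCP connection to the service?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_active_node(nodes):
    """Fail over to the first node that answers. Identical Linux images on
    every node are what make this switch feasible at all."""
    for host in nodes:
        if node_is_healthy(host, SERVICE_PORT):
            return host
    return None  # no healthy node: in real life, raise the alarm here
```

The point of the sketch is not the loop itself but the precondition: without a standard OS underneath, "first node that answers" is not a node the application can actually move to.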

This doesn't solve all the problems of virtualisation - and there are many, including legacy hardware that will never run Linux and legacy applications that will never run on Linux. But this doesn't actually matter. In the short run they'll be excluded from virtualisation and in the long run they will cease to exist.

The momentum is building and Linux is set to become the standard OS for hardware virtualisation in large networks. Other OSes may eventually have to impersonate the characteristics of Linux or move aside.

© IT-Analysis.com
