Virtualisation, the Linux way

De facto standard

IBM, Hewlett-Packard and Sun Microsystems, among others, are creating an imperative. Their infrastructure initiatives - entitled On Demand, Adaptive Enterprise and N1 respectively - are all quite similar and aimed at the idea of virtualising the hardware layer, writes Robin Bloor of Bloor Research.

The primary reason for wanting to virtualise hardware is this: in the last five years or so companies have been buying servers in an ad hoc manner, tending to deploy them on a one-server-per-application basis.

Consequently, they have assembled server farms which turn out to have an average hardware utilisation of about 20 per cent. This is, of course, a waste of money and, in the long run, a management headache. However, there are other imperatives too, particularly the idea of being able to provide infrastructure as a service - dynamically, so that you pay for what you use and you get what you need when you need it.
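To put that utilisation figure in perspective, here is a back-of-the-envelope calculation. The 20 per cent comes from the paragraph above; the estate size and the consolidation target are illustrative assumptions, not figures from the vendors:

# Back-of-the-envelope consolidation arithmetic. The 20 per cent figure comes
# from the article; the estate size and 70 per cent target are assumptions.
servers = 50               # an ad hoc, one-server-per-application estate
utilisation = 0.20         # average hardware utilisation cited above
target = 0.70              # a headroom-conscious post-consolidation target

work = servers * utilisation   # total useful load, in "whole server" units
needed = work / target         # machines required to carry it at the target
print(f"{servers} servers at {utilisation:.0%} do {work:.0f} servers' worth of work")
print(f"at {target:.0%} utilisation, roughly {needed:.0f} machines would suffice")

On those assumptions, 50 machines are doing 10 machines' worth of work, and about 14 would carry the load with headroom to spare - which is the money the vendors are pointing at.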

So companies, especially large companies, are very receptive to the idea of a corporate computing resource that is both managed and efficient - which is what IBM, HP and Sun are talking about. However, if you talk the talk you are also going to have to walk the walk, and right now what can be delivered doesn't amount to wall-to-wall virtualisation - or anything like it.

So the question is: how is it ever going to be delivered, given legacy systems, existing server farms and the enormous difficulty involved in relocating applications in a heterogeneous network?

Blade technology, grid computing, automatic provisioning, SANs, NAS and so forth will all play a part in this, but for it to work, and work well, it will require a standard OS - and there is only one candidate: Linux.

The easiest way to see the need for a standard OS is to consider why and how TCP/IP became a standard. It didn't happen because it was the best option or because it was purpose-designed to run a worldwide network with hundreds of millions of nodes (it wasn't). It happened because it was the only reasonable choice at the time. The same is now true of Linux as regards hardware virtualisation. Irrespective of its other qualities, it is the only one that fits the bill.

It qualifies because it spans so many platforms - from small devices up to IBM's zSeries mainframe. It also qualifies because, like TCP/IP, it doesn't actually belong to anyone. It runs on most chips and is rapidly becoming the developer platform of choice. So the idea is starting to emerge that you virtualise storage by means of SANs and NAS, and you virtualise server hardware by means of Linux - thus making it feasible to switch applications from one server to another automatically, and quickly. With this capability you can cater for failover and make highly efficient use of resources.
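As a rough illustration of that failover idea - not any vendor's actual mechanism - the following Python sketch polls a pool of interchangeable Linux hosts and picks a healthy home for an application when its current host stops answering. The host names and the port-probe liveness test are assumptions made for the example:

# Hypothetical sketch of application failover across interchangeable Linux
# hosts. Host names are made up; "liveness" is approximated by probing a
# TCP port, and the actual work of migrating the application is omitted.
import socket

HOSTS = ["linux-a.example.com", "linux-b.example.com", "linux-c.example.com"]

def host_is_up(host, port=22, timeout=2.0):
    """Crude liveness test: can we open a TCP connection to the host?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def place_application(current_host):
    """Keep the app where it is if healthy; otherwise pick any live host."""
    if host_is_up(current_host):
        return current_host
    for candidate in HOSTS:
        if candidate != current_host and host_is_up(candidate):
            return candidate  # relocation target; real migration not shown
    raise RuntimeError("no healthy host available in the pool")

if __name__ == "__main__":
    print("application should run on:", place_application(HOSTS[0]))

The point of the sketch is the precondition, not the loop: the candidate hosts are interchangeable only because they all present the same OS - which is the argument for a standard one.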

This doesn't solve all the problems of virtualisation - and there are many, including legacy hardware that will never run Linux and legacy applications that will never run on Linux. But this doesn't actually matter. In the short run they'll be excluded from virtualisation, and in the long run they'll cease to exist.

The momentum is building and Linux is set to become the standard OS for hardware virtualisation in large networks. Other OSes may eventually have to mimic the characteristics of Linux or move aside.

© IT-Analysis.com

Related stories

McDATA sets out its virtual stall
StorageTek signs FalconStor for data pooling
Sun repositions first plank in N1 strategy
IBM overhauls Tivoli
Virtualization sells better as something else, HP says
