Will we stop talking about virtualisation?
Enough of the V-word, already
Lab Well, it’s been fun, but we’re starting to draw this virtualisation lab series to a close. Over the next couple of weeks we’ll be wrapping things up, tying things down and otherwise leaving things neatly parcelled.
While it is of course a free society, and you don’t have to pay any attention to the stream of editorial on this subject, some of you may be bored to the back teeth of hearing about virtualisation. In this article we consider that very question – when will we finally stop talking about it?
To find the answer we need to dig a little beneath the surface. Virtualisation has come to the fore largely through the seemingly insatiable demand for x86 server consolidation, but as we know from this lab and elsewhere, its appeal is broader than that. Storage virtualisation has a longer heritage as a technology, for example, and all the signs are that server virtualisation and storage virtualisation need to work in tandem to get the best out of each, although we know that understanding of the latter lags well behind.
Meanwhile we have desktop virtualisation, which is more of an umbrella term for a range of technologies – application and graphics streaming, client-side hypervisors and so on. Server virtualisation options are also moving beyond the hypervisor model, and no doubt we shall see application streaming becoming an option on servers as well as desktops, for example.
Virtualisation has already emerged in other areas of IT. Virtual machines can be run on mobile devices, and application server software (such as that from Oracle/BEA) also offers virtual environments for its own workloads. Another area of potential is the embedded systems space.
In other words, virtualisation is only going to become more prevalent and more complicated, until it touches every area of IT. There will be more options for more scenarios, across and between more platforms. From that point of view it will continue to be a topic of some interest. But at the same time, if virtualisation really does become part of absolutely everything, it could become so commonplace that it barely merits a mention as a separate entity.
“Oh, but hang on,” says a little voice at the back of my head, taking me right back to Winkel and Prosser’s Art of Digital Design. “IT isn’t real anyway, it’s all about electronic signals, right?” The little voice has a point – indeed, IT has long been about how good we are at abstracting computational tasks and data movements from the physical hardware required to do the job. Mainframes got in early with virtualisation of course, and virtual memory has been a necessity ever since Bill Gates didn’t say “640K ought to be enough for anybody.” Oh, and when was the last time anyone directly accessed bits in a storage system anyway?
So, if IT has always been about abstraction, it makes sense that even as we do more virtualisation, we’re going to be talking about it less. Abstraction is a means to an end – it only makes sense to package things in a way that supports the information and services to be delivered. A philosophical point perhaps, but one which gives us the fundamentals of cohesion and coupling that should still be the mainstay of good software development practice. It also provides the basis for best practice around service-oriented architecture and business service management.
Ultimately, virtualisation gives us the opportunity to think about what IT does, in terms of workloads, information and service delivery, without having to spend as much time worrying about what IT is in hardware terms. We know from workshop feedback that these are early days, and it is premature to ignore the very real demands of hardware in terms of RAM or network bandwidth, for example.
Perhaps however the reason we will stop talking about virtualisation will ultimately be because there are more interesting things to discuss. If you have any thoughts on just what those might be (so we can kick off the next bandwagon here and now), they’d be very welcome. ®
How about programming for scalability (aka "Can we stop using Apache yet")?
Virtualisation is all well and good, but I can't quite shake the feeling that sometimes, the very fact that you *can* consolidate multiple "servers" to run on the same piece of tin is because those pieces of tin weren't being pushed to the limit to start with. If that's because you just aren't asking much of those boxes, that's fine. But if it's because the software on those boxes can't take advantage of the hardware available to it, you need better software.
Why, for example, are most of us still running webserver software in which the overhead for an open-but-idle connection isn't completely, utterly trivial? From the Apache 2.2 docs for the "event" MPM: "However, Apache traditionally keeps an entire child process/thread waiting for data from the client, which brings its own disadvantages. To solve this problem, this MPM uses a dedicated thread to handle both the Listening sockets, and all sockets that are in a Keep Alive state." Sounds fine, but the event MPM is still considered "experimental".
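To make the contrast concrete, here’s a rough sketch (in Python, using the standard selectors module) of the single-thread, event-driven model the Apache docs describe: one thread watches the listening socket and every idle keep-alive socket, so an idle connection costs a table entry rather than a parked child process or thread. It’s a toy illustration, not Apache’s actual implementation:

import selectors
import socket

# One thread, one event loop: the listener and all client sockets
# (including idle keep-alive ones) are registered with a single
# selector, and work only happens when a socket is actually ready.
sel = selectors.DefaultSelector()

def accept(listener):
    conn, _addr = listener.accept()
    conn.setblocking(False)
    # An idle keep-alive connection is just one more selector entry,
    # not a dedicated process or thread sat waiting on it.
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if not data:  # client closed the connection
        sel.unregister(conn)
        conn.close()
        return
    # Canned response; the connection stays open for the next request.
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
                 b"Connection: keep-alive\r\n\r\nok")

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))
listener.listen(128)
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, accept)

while True:
    for key, _mask in sel.select():
        key.data(key.fileobj)  # dispatch to accept() or handle()

Apache’s event MPM is far more involved, of course – worker threads still do the actual request processing – but the point stands: keeping a connection alive shouldn’t cost a whole process or thread.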
network virtualization too
Network virtualization is starting to take off as well. Companies like Juniper and Extreme have had layer 3 virtual routers in their gear for many years, and I recently noticed that Force10 added similar functionality last year. Brocade added virtual functionality to its new ADX load balancers last year, and I’m told F5 has something similar (yet undocumented?) in its latest bleeding-edge code.
Then there is technology like HP VirtualConnect, which turns a 10GbE port into four flexible virtual NICs (I was talking with a friend from Broadcom today, and he believes it is based on their technology, which provides four layer 2 functions per 10GbE port; I did confirm that the Flex10 NICs are Broadcom).
Network virtualization still has a ways to go before it is as dynamic or flexible as storage virtualization or server virtualization, but it’ll get there eventually.
my required title...
I couldn't agree more. We should be pushing the limits of the processing we have available.
If you want to use virtualization to manage testing or host legacy systems, that's one thing, but if anyone thinks it is a "solution" for anything new, they aren't looking at the problem the right way. Sure, I can dig a subway system with an army of people with shovels, but it is MUCH faster/better/cheaper to use a tunnel boring machine. Get it? Good. Apply that thinking to software.