
Part 3: Docker vs hypervisor in tech tussle SMACKDOWN

We see you by the virty containers, VMware

Comment If you're willing to start from scratch, give up high availability and the ability to run multiple operating systems on a single server, and accept all the other tradeoffs, then Docker really can't be beaten. You are going to cram more workloads into a given piece of hardware with Docker than with a hypervisor, full stop.

From the perspective of a cloud provider – or an enterprise large enough to run like one – that's perfectly okay. Many of the workloads they run don't need a lot of the fancier hypervisor-based goodies anyway. They use in-application clustering, or applications that have been recoded for public cloud computing.

You're not vMotioning around AWS, and given that it has taken VMware until about the middle of 2015 to get a production version of lock-step fault tolerance with more than one vCPU out the door, don't be expecting that on a non-VMware public cloud provider any time soon. (You can also bet VMware is going to charge a pretty penny for it.)

It's in the tech

Hypervisors are marvels of advanced infrastructure with tools, techniques and capabilities that containers like Docker may never be able to match. How, exactly, do you move a containerised workload between servers with dramatically different kernel versions or different hardware without adding a hypervisor-like layer of abstraction?

How will containers scale over time as the existing generation of servers lives side by side with the next and workloads are transitioned? There are a lot of unanswered questions, and it will be years before we're sure how containers will fit into the overall technology puzzle.

The flip side is that Docker is in a lot of ways more responsive to customer needs than most hypervisor vendors out there. Not only are the core developers still passionate about "their baby", but the community that has sprung up around it is evangelical, bordering on the religious. There is no end of energy for all things Docker, and those with the energy are among the brightest minds we've ever produced.

Docker is easy to use, though not because of management tools, APIs, documentation or slick marketing. Docker is easy to use because the sorts of workloads you want to run on Docker come as a push-button-simple deployment from an "app-store" like interface.

Some of these "app packages" are officially supported. Most are community driven. But they are easy to find, easy to use and easy to repackage into a template, taking you from a single testbed to a cloud of thousands of instances.
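For the curious, here is a minimal sketch of that workflow using the Docker SDK for Python (docker-py); the image name, port mapping and template tag are ours, chosen for illustration, not anything Docker mandates:

import docker

client = docker.from_env()  # talk to the local Docker daemon

# The "app store" step: pull an officially supported image from the public registry.
client.images.pull("nginx", tag="latest")

# Run it, mapping container port 80 to host port 8080 (values are illustrative).
web = client.containers.run(
    "nginx:latest",
    detach=True,
    name="demo-web",
    ports={"80/tcp": 8080},
)

# Repackage the result as a template image that can be rolled out to
# as many hosts as you care to point a Docker daemon at.
web.commit(repository="example/web-template", tag="v1")

None of that needed a management console, which is rather the point.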

Hypervisor vendors can be reasonably viewed as being obsessed with providing the most flexible and resilient infrastructure possible. By contrast, Docker is the evolution of an obsession with not wanting to manage or maintain infrastructure, but instead to get on with the business of actually using that infrastructure to do something useful.

Containers will ultimately be used in tandem with traditional hypervisors and even public cloud services, forming a new(ish) basic service offering to the masses and allowing us to do something we've done for some time that little bit better.

And versus or

Make no mistake, however: like the public cloud before them, containers are an "and", not an "or" technology. Hypervisors were an "or" technology. Wherever hypervisors went, they massively displaced bare-metal computers and rendered them endangered, if not extinct. Containers are not – I repeat, emphatically not – that disruptive.

Docker may be snapped up by a tech titan. Given VMware's interest in the company, it may well find a home inside VMware. Similarly, containers may ultimately be (or, in my experience, rather frequently are) deployed inside hypervisors.

This lets administrators use the hypervisor underneath to achieve the sorts of underlying infrastructure wizardry at which hypervisors have excelled, while putting some or all workloads that require a similar environment into a single VM as a means of boosting efficiency. A compromise, if you will.
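A rough sketch of that compromise, again in Python and again with placeholder names (the TCP endpoint and workload names below are hypothetical): the hypervisor supplies the VM and its safety nets, and the Docker client simply talks to a daemon running inside that guest.

import docker

# Connect to a Docker daemon exposed inside a guest VM.
# (An unauthenticated TCP socket is used here purely for brevity.)
vm = docker.DockerClient(base_url="tcp://guest-vm.example.local:2375")

# Pack workloads that want the same environment into the one VM; the
# hypervisor underneath still handles HA, snapshots and live migration.
for name in ("web", "worker", "cache"):
    vm.containers.run("alpine:3.12", ["sleep", "3600"], name=name, detach=True)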

Containers may end up deployed on bare-metal servers for certain classes of workloads while others live on hypervisors for decades to come. Right now, there's just no way to know exactly how it will play out. *

Personally, I expect it will be different for each organisation. Containers offer us the ability to make choices about risk versus efficiency. Hypervisors help us deal with risk: the risk of cramming so many workloads into a single system, the risk of a system failure, or the need for fully fault-tolerant capabilities.

Hypervisors help us deal with the realities of heterogeneous environments and the fact that not everything is a web-based workload, or ready to be recoded for "scale".

30-year-old software still runs in many of the world's organisations today, and there's absolutely no reason to expect the same won't be true 30 years from now. Hypervisors give us the ability to deal with that in a way that containers never will. ®

* If history is any guide, VMware probably has a skunkworks operation working on building a Docker competitor directly into the hypervisor right now. VMware doesn't do "partnership" in these areas for particularly long before it either buys up a relevant company or rolls its own version. These spats can get particularly nasty.

Microsoft's rapid acceptance of Docker has muddied the crystal ball a bit, and made the pressure on VMware all the more intense. ®

Want more?

Read parts 1, 2 and the final instalment of this four-part series:

Part 1

Part 2

Part 4
