Red Hat and dotCloud team up on time-saving Linux container tech

First they conquered the seven seas, now containers be forcin' VMs to walk the plank

Red Hat is working with startup dotCloud to co-develop new Linux container technology to make it easier to migrate applications from one cloud to another.

The partnership was announced by the companies on Thursday, and will see dotCloud's open source "Docker" Linux-container technology get enhancements to work with Red Hat's OpenShift platform-as-a-service technology, as well as the removal of Docker's dependency on the Advanced Multi Layered Unification Filesystem (AuFS).

In addition, Docker will be packaged for the Fedora project, broadening the technologies it can work with. Eventually, the tech will optionally support some Red Hat-specific Linux components such as SELinux and libvirt-lxc.

"We've had over 20 people from Red Hat contributing to code for our upcoming release," Ben Golub, chief of dotCloud, tells El Reg. "This [collaboration] enables Docker to work out of the box with all of the Red Hat family of Linux, which is clearly one of the most important platforms for us to support."

Docker is a containerization technology that builds on lxc (Linux Containers), a set of tools for the container features in the Linux kernel, and makes that technology easier for developers to use.

It uses lxc's techniques for namespace isolation and control groups, and builds on this with technologies for bundling applications and app dependencies so they can be flung from machine to machine.
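Those kernel primitives are easy to peek at on any reasonably modern Linux host with /proc mounted (a quick illustrative look, not anything Docker-specific):

```shell
# The namespaces a process belongs to (pid, net, mnt, ipc, uts
# and so on) are exposed under /proc -- these are what lxc, and
# by extension Docker, use for isolation:
ls /proc/self/ns

# The control-group subsystems the kernel offers for resource
# limiting (cpu, memory, and friends) are listed here:
cat /proc/cgroups
```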

Lxc is geared toward the creation of lightweight fast-boot servers that don't need much RAM, and Docker adds to this to let it work for the deployment of full applications. It comes with a build tool that lets developers assemble a container from their source code while using popular tools such as Maven, Chef, Puppet, Salt, and so on.
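As a sketch of what that build step looks like in practice, here is a hypothetical Dockerfile; the base image, package, and paths are illustrative examples, not taken from the article:

```dockerfile
# Hypothetical example: bundle a small app and its dependencies
# into a portable container image. Names and paths are illustrative.
FROM ubuntu:12.04
# Install the app's runtime dependency
RUN apt-get update && apt-get install -y python
# Copy the application source into the image
ADD . /opt/myapp
# Port the app listens on
EXPOSE 8000
# Command run when the container starts
CMD ["python", "/opt/myapp/server.py"]
```

Building and running the container is then a matter of pointing `docker build` and `docker run` at that file.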

Just as virtualization lets you gin up multiple virtual machines on a single chunk of hardware, containerization lets you virtualize the operating system atop the Linux kernel and then have apps running on that. The strength of this is that each app will be running on exactly the same version of the underlying Linux OS, which frees up resources and enhances predictability.

"Here what you're trying to do is say 'Here is a tenant of the operating system', and this tenant can have multiple apps and processes," Xen-hypervisor pioneer and current CTO of security startup Bromium, Simon Crosby, tells El Reg. This lets "multiple independent tenants have tenant notions of isolation," he says.

This approach beats VMs in terms of resource utilization: the single OS copy is shared across all the apps running on it, whereas each virtual machine carries its own complete operating system, which adds baggage.

"In the case of lxc containerization, the goal is to provide a more memory-efficient way to deliver multi-tenancy," Crosby says. "What it gives you is a far more efficient mechanism if all your tenants want to use the same version of the OS."

This means that admins will need to shut down containerized apps when they upgrade the underlying kernel, though kernel upgrades happen infrequently enough that a bit of downtime may be an acceptable trade-off.

It also means Docker isn't distribution-specific. "We just rely on the kernel," Golub says. "As a result, you can upgrade to the latest version of Fedora or Ubuntu and soon RHEL without having to change your containers."
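Golub's point can be illustrated with a command-line sketch (illustrative only: it requires a running Docker daemon, and the image names are examples):

```shell
# The same container images run unchanged whether the host is
# Fedora or Ubuntu, because only the kernel is shared:
docker run ubuntu cat /etc/lsb-release
docker run fedora cat /etc/redhat-release
# Each command gets its own userland, on whatever distribution
# the host machine happens to be running.
```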

Depending on exactly the same kernel does mean that if a problem occurs it could affect all of your apps, and the same is true of a security vulnerability. However, the work done by the Linux community and Docker's use of underlying technologies like SELinux should protect admins against this.

"I think containers are much more lightweight and have much lower overhead than virtual machines. A container basically provides isolation but runs everything inside the host's operating system so it's much lighter weight," Golub says. "While virtual machines are a great tech, they're not particularly good for iterative development. Not particularly good for migrating across clouds."

The one criticism leveled by the community against Docker is that it is repackaging a bunch of underlying Linux services, and gaining lots of attention for work done by the wider community.

Golub feels this is a bit unfair as the company has made technologies like lxc "significantly easier to use," while working to ease the upgrade process and spawning the Docker registry.

"The most important thing is we've standardized the way containers can be used, so it's now very easy to make them portable across different systems," he says.
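The registry workflow he refers to boils down to a pair of commands (illustrative only: it needs a Docker daemon and access to a registry, and the repository name is a made-up example):

```shell
docker push myrepo/myapp   # publish the image to a registry
docker pull myrepo/myapp   # fetch it on any other Docker host
```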

With the Red Hat collaboration, developers now have another major distribution on which to use the technology, and with OpenShift compatibility it is coming to the cloud as well. ®
