Red Hat and dotCloud team up on time-saving Linux container tech

First they conquered the seven seas, now containers be forcin' VMs to walk the plank

Red Hat is working with startup dotCloud to co-develop new Linux container technology to make it easier to migrate applications from one cloud to another.

The partnership was announced by the companies on Thursday, and will see dotCloud's open source "Docker" Linux-container technology get enhancements to work with Red Hat's OpenShift platform-as-a-service technology, as well as the removal of Docker's dependency on the Advanced Multi Layered Unification Filesystem (AuFS).

In addition, Docker will be packaged for the Fedora project, broadening the technologies it can work with. Eventually, the tech will optionally support some Red Hat-specific Linux components such as SELinux and libvirt-lxc.

"We've had over 20 people from Red Hat contributing to code for our upcoming release," Ben Golub, chief of dotCloud, tells El Reg. "This [collaboration] enables Docker to work out of the box with all of the Red Hat family of Linux, which is clearly one of the most important platforms for us to support."

Docker is a containerization technology that builds on LXC (Linux Containers), a userspace interface to the Linux kernel's containment features, and simplifies access to the technology for developers.

It uses LXC's techniques for namespace isolation and control groups, and builds on this with technologies for bundling applications and their dependencies so they can be flung from machine to machine.

LXC is geared toward the creation of lightweight fast-boot servers that don't need much RAM, and Docker adds to this to make it work for the deployment of full applications. It comes with a build tool that lets developers assemble a container from their source code while using popular tools such as Maven, Chef, Puppet, Salt, and so on.
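The article doesn't show what such a build description looks like, but a minimal sketch might resemble the following Dockerfile. The base image, package, and paths here are hypothetical, chosen only to illustrate the idea of bundling an app with its dependencies:

```dockerfile
# Illustrative Dockerfile (names are hypothetical, not from the article).
# Start from a base OS image shared across hosts.
FROM fedora

# Install the app's runtime dependencies inside the image.
RUN yum install -y python

# Copy the application source into the image.
ADD . /opt/myapp

# Command run when a container is started from this image.
CMD ["python", "/opt/myapp/app.py"]
```

Because the dependencies are baked into the image rather than installed on each host, the resulting container can be moved between machines that share a compatible kernel.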

Just as virtualization lets you gin up multiple virtual machines on a single chunk of hardware, containerization lets you virtualize the operating system atop the Linux kernel and then have apps running on that. The strength of this is that each app will be running on exactly the same version of the underlying Linux OS, which frees up resources and enhances predictability.

"Here what you're trying to do is say 'Here is a tenant of the operating system', and this tenant can have multiple apps and processes," Xen-hypervisor pioneer and current CTO of security startup Bromium, Simon Crosby, tells El Reg. This lets "multiple independent tenants have tenant notions of isolation," he says.

This approach beats VMs in terms of resource utilization, as the OS copy is shared across all apps running on it, whereas virtual machines come with the abstraction of separating each OS onto each VM, which adds baggage.

"In the case of lxc containerization, the goal is to provide a more memory-efficient way to deliver multi-tenancy," Crosby says. "What it gives you is a far more efficient mechanism if all your tenants want to use the same version of the OS."

This means that admins will need to shut down containerized apps when they upgrade the underlying kernel, though kernel upgrades happen infrequently enough that the downtime may be tolerable.

It also means Docker isn't distribution-specific. "We just rely on the kernel," Golub says. "As a result, you can upgrade to the latest version of Fedora or Ubuntu and soon RHEL without having to change your containers."

Sharing exactly the same kernel does mean that if a problem occurs it could affect all of your apps, and the same is true of a security vulnerability. However, the work done by the Linux community and Docker's use of underlying technologies like SELinux should protect admins against this.

"I think containers are much more lightweight and have much lower overhead than virtual machines. A container basically provides isolation but runs everything inside the host's operating system, so it's much lighter weight," Golub says. "While virtual machines are a great tech, they're not particularly good for iterative development. Not particularly good for migrating across clouds."

The one criticism leveled by the community against Docker is that it is repackaging a bunch of underlying Linux services, and gaining lots of attention for work done by the wider community.

Golub feels this is a bit unfair as the company has made technologies like lxc "significantly easier to use," while working to ease the upgrade process and spawning the Docker registry.

"The most important thing is we've standardized the way containers can be used, so it's now very easy to make them portable across different systems," he says.
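The portability workflow Golub describes could be sketched as the following shell session. The image and repository names are hypothetical, and this assumes a Docker daemon and access to a registry:

```shell
# Build an image from a Dockerfile in the current directory
# (the "myuser/myapp" name is illustrative).
docker build -t myuser/myapp .

# Push the image to a registry so other hosts can fetch it.
docker push myuser/myapp

# On any other Linux host running Docker, regardless of distribution:
docker pull myuser/myapp
docker run myuser/myapp
```

The point of the standardization is that the same image runs unchanged on any host whose kernel supports the container primitives, which is what makes cloud-to-cloud migration feasible.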

With the Red Hat collaboration, developers now have another major distribution on which to use the technology, and with OpenShift compatibility it's coming to the cloud as well. ®

