Google: 'EVERYTHING at Google runs in a container'

Ad giant lifts curtain on how it uses virtualization's successor TWO BILLION TIMES A WEEK


Google is now running "everything" in its mammoth cloud on top of a potential open source successor to virtualization, paving the way for other companies to do the same.

Should VMware be worried? Probably not, but the tech pioneered by Google is making inroads into a certain class of technically sophisticated companies.

That tech is called Linux containerization, and it is the latest in a long line of innovations meant to make it easier to package up applications and sling them around data centers. It's not a new approach – see Solaris Zones, BSD Jails, Parallels, and so on – but Google has managed to popularize it enough that a small cottage industry is forming around it.

Google's involvement in the tech is significant because of the mind-boggling scale at which the search and ad giant operates, which in turn benefits the tech by stress-testing it.

"Everything at Google runs in a container," Joe Beda, a senior staff software engineer at Google, explained in some slides shown at the Gluecon conference this week. "We start over two billion containers per week."

Two billion containers a week [Two BEEELLION!?—Ed.] means that for every second of every minute of every hour of every day, Google is firing up on average some 3,300 containers. It's probably started over 40,000 since you began reading this article.
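The back-of-the-envelope arithmetic behind that figure is easy to check:

```python
# Sanity-check the article's arithmetic: two billion container starts
# per week works out to roughly 3,300 per second.
containers_per_week = 2_000_000_000
seconds_per_week = 7 * 24 * 60 * 60  # 604,800

per_second = containers_per_week / seconds_per_week
print(round(per_second))  # → 3307, i.e. "some 3,300" a second
```

At that rate, a dozen seconds of reading time covers the "over 40,000" claim.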

The company is able to do this because of how the tech works: Linux containerization is a way of sharing parts of a single operating system among multiple isolated applications, as opposed to virtualization which will support multiple apps with their own OS on top of a single hypervisor.

This means that where it can take minutes to spin up a virtual machine, a container can start in seconds, because there's no guest operating system to boot.

This is beneficial for massive distributed applications with lots of discrete parts that need to be summoned, run, and then killed in short order. It's also much more efficient from a CPU utilization perspective, which matters if you're an IT-focused organization like Google.

The main tradeoff with the approach is in security, because if someone can break out of a container and modify the underlying Linux OS, they can own all other containers on the system, whereas virtualization prevents this type of contamination.


Google began its journey into containerization in the mid-2000s when some of its engineers contributed a technology named cgroups ("control groups") to the Linux kernel. This technology "provides a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour."
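In practice, cgroups are driven through a virtual filesystem: you create a directory, write resource limits into its control files, and write process IDs into its `tasks` file. The sketch below (an illustration, not anything from the article; paths follow the standard cgroup v1 filesystem layout) builds the writes one would perform by hand:

```python
# Illustrative sketch of the cgroup v1 filesystem interface. The
# function computes the (path, value) pairs an admin would write to
# place processes into a new CPU cgroup -- it does not touch the
# real /sys/fs/cgroup tree, which requires root.

def cgroup_writes(name, cpu_shares, pids):
    """Return (path, value) pairs that would create a CPU cgroup
    called `name` with relative weight `cpu_shares` and move the
    given PIDs into it."""
    base = "/sys/fs/cgroup/cpu/" + name
    writes = [(base + "/cpu.shares", str(cpu_shares))]
    # Writing a PID to the tasks file joins that process to the group;
    # its future children inherit the group automatically -- the
    # "hierarchical" behaviour the kernel documentation describes.
    writes += [(base + "/tasks", str(pid)) for pid in pids]
    return writes

for path, value in cgroup_writes("webapp", 512, [1234, 1235]):
    print(path, "<-", value)
```

Container runtimes such as LXC, Docker, and lmctfy automate exactly this kind of bookkeeping, layered with namespaces for isolation.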

In the same way Google's publication of the MapReduce and GFS papers let Yahoo! engineers create the open source data analysis framework Hadoop, the addition of cgroups led to further innovations by other companies.

The kernel feature cgroups became a crucial component of LXC (LinuX Containers), which combines cgroups with kernel namespaces to make it easy to create containers and wire them up to the rest of your infrastructure. While useful, LXC still required sophisticated users.

That is, until startup Docker came along.

"The technology was not accessible/useful for developers, containers were not portable between different environments, and there was no ecosystem or set of standard containers," explains Docker's chief executive Ben Golub in an email to The Register.

"Docker's initial innovation was to address these issues," he writes, "by providing standard APIs that made containers easy to use, creating a way for the community to collaborate around libraries of containers, working to make the same container portable across all environments, and encouraging an ecosystem of tools."

Docker's approach has been remarkably successful and has led to partnerships with companies like Red Hat, the arrival of Docker containers on Amazon's cloud, and integration with open source data analysis project Hadoop.

Google's take on containerization is slightly different, as it places more emphasis on performance and less on ease of use. To try to help developers understand the difference, Google has developed its own alternative to LXC named, charmingly, lmctfy, short for Let Me Contain That For You.

Google describes lmctfy as "the open source version of Google's container stack, which provides Linux application containers. These containers allow for the isolation of resources used by multiple applications running on a single machine. This gives the applications the impression of running exclusively on a machine. The applications may be container-aware and thus be able to create and manage their own subcontainers," the company explains on its GitHub page.

"The project aims to provide the container abstraction through a high-level API built around user intent," Google writes. "The containers created are themselves container-aware within the hierarchy and can be delegated to be managed by other user agents."

For its part, Docker isn't threatened by lmctfy, and plans to run it as an optional execution engine within the Docker software.

"Docker initially provided the APIs and standardization on top of LXC tools," Golub told us. "With the release of 0.9, we added the ability to have swappable execution environments, so that we can now put docker around LXC, libvirt, libcontainer, systemd/nspawn, and other lower-level container formats. Work is happening to support LMCTFY as an execution engine under Docker (Google's open source format), and some interesting work is even being done to wrap Docker around Zones, Jails, and Parallels."

Google has also offered an olive branch to Docker by adding in support for Docker containers on its "Google Cloud Platform" in recognition of the enthusiasm with which the tech has been adopted.

With Docker's inaugural conference a few weeks away and Google's Eric Brewer slated to give a keynote, we'll be sure to bring you more information about how these two approaches to the same technology develop, intertwine, and better the lives of developers.

Do you use containerization in your business, and if so, what does it do for you? ®
