Keeping a grip on Docker

Lost containers tell no tales. Time to worry

Containers are becoming the de facto way of spinning up new services and applications. Many run on cloud servers, which are themselves virtual machines running on bare metal... well, somewhere in the world.

For many developers, containers are a way to create hermetically sealed application services.

But once started, containers can be forgotten by the very individuals who made them or the organisations that run their host server estates.

And that’s a problem: do you know how many containers your company's cloud is running? Even if you’re not in the cloud, your local Linux boxes could be teeming with unmanaged applications!
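
Getting a first, rough answer need not be hard. As a minimal sketch, assuming shell access to each Docker host, the following counts what the daemon knows about, running or not:

    # How many containers are running right now?
    docker ps -q | wc -l

    # And how many exist in total, including stopped ones?
    docker ps -aq | wc -l

    # A quick survey of what they are and what state they are in
    docker ps -a --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'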

The problems start with the development cycle. Early on, developers may be playing with application services, getting used to the technology and firing up a few containers on a development sandbox – the idea being that the development machine will get blown away at some point and replaced.

But then the application is never finished, or development takes a turn in a different direction and, bingo, the containers carry on running, consuming small but measurable amounts of company resources, be it money, energy (which is actually money) or just bandwidth (money). It all adds up, but may be too small to notice until the final bill rolls in. Then it’s too late!
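
Containers abandoned in this way are at least easy to spot by age. A small sketch along the same lines (a report, not a remedy):

    # Anything that has been 'Up 6 months' and that nobody
    # recognises deserves a closer look
    docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.RunningFor}}'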

The problems don’t stop once the application is up and running. Developer churn and sloppy documentation can leave no one quite sure what all those containers are actually doing. You can try turning the mysterious ones off, but what happens to the app if they don’t restart? And then there is the problem of access rights. Any good cloud solution will let you create a new user to attach to the base VM, but then you’ve got to get into the containers (which shouldn’t be a problem) and work out what they are doing (which most certainly could be).
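
Docker itself will answer some of those questions once you are on the host. A minimal sketch (the container name mystery-container is a stand-in, not from any real estate):

    # Which image is it from, and what command is it running?
    docker inspect --format '{{.Config.Image}} {{.Config.Cmd}}' mystery-container

    # What is it doing right now, and what has it been saying?
    docker top mystery-container
    docker logs --tail 50 mystery-container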

If you're very lucky, the containers will have reasonable names, but if the developer just treated them as cattle then good luck working out what CC-APP-W61 is doing without a thorough search of the creation script and the application code. Some of your containers will be using the base VM file system for storage of configuration files and application data. Are you monitoring the file system for storage space problems? Can it be gracefully expanded as you need it?
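
Both problems are cheaper to avoid at creation time than to untangle later. A sketch, with all names, paths and images invented for illustration:

    # A meaningful name and an explicit config mount tell the next
    # person what this is and where it writes
    docker run -d --name billing-report-worker \
      -v /srv/billing/config:/etc/app \
      internal-registry/billing-worker

    # Watch the space those host paths consume
    df -h /srv/billing

    # -s adds each container's writable-layer size to the listing
    docker ps -as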

No doubt some of the app’s containers will be long-lived, and if the service is simple enough it may never need to change. But if the creation script selects the latest version of the application’s host software (perhaps a web server or database) for the container, there’s a chance the app will break when the container is rebuilt from that script after a new version arrives.
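
Pinning is the usual compromise. As an illustrative sketch (image and version numbers are made up), compare a creation script that floats on whatever is newest with one that pins what was actually tested:

    # Floating: every rebuild silently picks up whatever 'latest' means today
    cat > Dockerfile <<'EOF'
    FROM nginx:latest
    EOF

    # Pinned: rebuilds are reproducible, but patching becomes your job
    cat > Dockerfile <<'EOF'
    FROM nginx:1.25.3
    EOF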

Of course, if you pin a specific version for the container instead, it will be open to any security problems lurking unpatched in that version’s code. The bottom line is: do you know what version of application software is running in each of the containers in your application?
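
That question can be answered by a script rather than by guesswork. A minimal sketch, assuming shell access to the host:

    # Report each running container against the image and tag it came from;
    # a bare 'latest' in this output tells you nothing, which is the point
    for c in $(docker ps -q); do
      docker inspect --format '{{.Name}} -> {{.Config.Image}}' "$c"
    done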

Most of these problems can be alleviated by proper documentation and source code control. However, containers are built to be easy to create and, as such, your developers and DevOps staff are going to create lots of them. That is why you are going to need a good way to group them into manageable units that can be looked after easily.
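
With Docker alone, labels are the lightest-weight way to impose that grouping. A sketch in which the project and owner keys are merely a convention we are assuming, not a Docker standard:

    # Stamp ownership onto a container when it is created...
    docker run -d --name billing-db \
      --label project=billing --label owner=payments-team \
      postgres

    # ...then slice the estate by those labels later
    docker ps --filter label=project=billing
    docker ps --filter label=owner=payments-team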

Google has been using containers for over 10 years and has learned a thing or two about managing the problem. It has drawn on its experience with internal projects such as Borg and Omega to create Kubernetes, which automates the deployment and scaling of containerised applications. Other tools are also available to manage your container-based applications – tools such as Mesos, Swarm and Fleet.
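
With Kubernetes, for example, the grouping is declarative: you describe the state you want and the system converges on it. A minimal sketch of a Deployment manifest, with the name, image and replica count all illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: billing-worker
      labels:
        app: billing
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: billing
      template:
        metadata:
          labels:
            app: billing
        spec:
          containers:
          - name: worker
            image: internal-registry/billing-worker:1.4.2
    EOF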

What's in there?

If you’re deploying a large-scale container-based application, you will certainly need something to manage it, if only to get a good idea of what’s running and where. But here’s the thing: building a container-based application means you must be able to trust your developers and DevOps people. There could be a lot of little applications running in those containers, started up from lots of little creation scripts. What if one of those containers is a “dark container”, one running a small amount of code you never intended to run on your servers? What if something has slipped through your careful code reviews?
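
No single command will find a dark container for you, but simple cross-checks narrow the search. A hedged sketch (approved-containers.txt is a hypothetical inventory file you would maintain yourself):

    # Files added (A), changed (C) or deleted (D) since the container started;
    # drift in a container that should be static is worth investigating
    docker diff mystery-container

    # Compare what is actually running against your approved inventory
    docker ps --format '{{.Names}}' | sort > running.txt
    diff approved-containers.txt running.txt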

It could be a single line in the Dockerfile that injects just enough malicious code (or a small application) to cause your company a nightmare down the line. Perhaps it will take out your services, flood the network, or quietly harvest usernames, passwords and service credentials, or do any other manner of bad things. We’ve become used to putting code through penetration testing, but do you do the same with containers? Do you know exactly what each of the apt-get install statements in the container creation scripts does?
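
Docker keeps enough build metadata to make such an audit practical. A minimal sketch (the image name is illustrative, and the dpkg query assumes a Debian-based image):

    # Every instruction that built the image, RUN apt-get lines included
    docker history --no-trunc internal-registry/billing-worker:1.4.2

    # And exactly which packages ended up inside it
    docker run --rm internal-registry/billing-worker:1.4.2 dpkg -l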

Containers are no doubt a good thing. They have opened up whole new ways of building applications, of doing more with less, spending less time on firing up servers and generally enabling new ways of working.

They are not, however, immune from all the ills that plague the industry, and if you don’t keep a careful eye on all stages of application development, they may turn dark and hurt you. ®
