2017 – the year of containers! It wasn't? Oops. Maybe next year
Immature tech still has a bunch of growing up to do
2017 was a big year for containers. One of the biggest container events came from the Linux Foundation, and it was – by its own admission – one of the most boring.
The Foundation’s Open Container Initiative (OCI) finally dropped two specifications that standardise how containers operate at a low level. Chris Aniszczyk, vice president of developer relations at the Foundation and the OCI’s executive director, likens the initiative’s work to the W3C’s standardisation of HTML. It’s mundane stuff, but you need to do it before you can sensibly do anything else.
The OCI came along to help standardise containers as an emerging technology. Docker had dominated the container landscape, but CoreOS had its own engine. OCI, which Docker co-founded along with the likes of Microsoft, aimed to standardise the way that container engines worked, removing some of the confusion from the lower levels of the technology stack.
That was in 2015, but it wasn’t until July 2017 – a year later than planned – that the first specifications emerged under OCI. There are two. The image specification merges ideas from appc, the CoreOS container image format, with Docker’s own. It defines how a container image is packaged: the filesystem layers and the configuration that tell a container engine what software to assemble before launch. The other part is the runtime specification. This comes into play after a container’s components have been unpacked, and describes how the container should be executed on a supporting platform.
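For a flavour of what the runtime specification actually standardises, here is a pared-down sketch of the config.json file it defines. The fields shown (ociVersion, root.path, process.args) are genuine spec fields, but this is illustrative only – a real configuration carries far more detail, covering mounts, namespaces and resource limits.

```json
{
  "ociVersion": "1.0.0",
  "root": { "path": "rootfs" },
  "process": {
    "cwd": "/",
    "args": ["sh"],
    "env": ["PATH=/usr/bin:/bin"]
  }
}
```

Any compliant runtime – runc is the reference implementation – can take a bundle containing a file like this alongside a rootfs directory and run it, which is precisely the interoperability the OCI is after.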
There’s also an important intellectual property part to the OCI that shouldn’t be overlooked, says Aniszczyk. In many of these industry initiatives, members sign patent non-aggression agreements.
“OCI has one of those where all our members agree to forego any container-related patents associated with the spec so they can't sue each other,” he says.
“The industry was in danger of creating a VHS/Betamax situation,” warns Martin Percival, principal solutions architect at Red Hat. “If you as a user want to get all the coolness out of each of those features then you’re a bit stuffed really, because you have to go to each one in turn and work out which is best for any given situation. That’s no way to run any kind of enterprise software.”
Speeding enterprise adoption
That’s important, because containers are still at the point where the enterprise is waiting to embrace them. The Cloud Foundry Foundation talked to 504 users across the globe, ranging from developers through to line-of-business managers. It found that 25 per cent were using containers, compared to 22 per cent last year, while 42 per cent were evaluating them, up from 31 per cent. So interest is growing, but it’s slow-going, and that’s bound to be at least partly down to the technology’s relative immaturity.
Cementing the technology at the lower levels of the stack paves the way for building the kinds of upper-layer services that enterprises need to make all this work in the real world. One example of this is management and orchestration, which Jay Lyman, principal analyst for cloud management and containers at 451 Research, puts at the top of the shopping list for 2018.
Kubernetes gains traction
This is where the Cloud Native Computing Foundation (CNCF) comes in. A sub-foundation of the Linux Foundation, it focuses on cloud-native software stacks. If the OCI is the cement slab foundation for container-based computing, then the CNCF is the wooden frame that goes on top of it to build the rooms – the value-added stuff that will make containers and microservices more palatable for the enterprise.
CNCF is the home for Kubernetes, Google’s open-sourced container management and orchestration system. The technology handles what Percival calls the "-ilities" – scalability, configurability, reliability – so that you can manage tens of thousands of containers at once. He believes that pressure from the Kubernetes team was part of OCI’s move forward.
“There’s a clear need for people to move on from just spending their time defining containers and get some real value out of how the things were run,” he says.
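To see what that orchestration buys you in practice, consider a minimal Kubernetes manifest. This is an illustrative sketch, not something from the projects discussed here – the names (web, nginx) are hypothetical – but it shows the declarative style: you state how many copies of a container you want, and Kubernetes keeps that many running, restarting or rescheduling them as needed.

```yaml
# Illustrative only: ask Kubernetes to keep three replicas of an nginx
# container running, wherever in the cluster they fit.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13
```

Scaling to handle more load is then a one-line change to the replicas field – which is the kind of “-ility” Percival is talking about.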
The CNCF and Kubernetes gained a lot of traction in 2017. Microsoft made support for Kubernetes generally available in its Azure Container Service in February 2017, and Kubernetes 1.9 supports Windows containers in its alpha release. Microsoft joined the CNCF this year, and so did AWS, which has announced its Elastic Container Service for Kubernetes in preview.
“The other thing is that the CNCF just last month released a list of 32 certified Kubernetes distributions,” says Lyman, adding that certification is going to be a big part of any Kubernetes support moving forward. Amazon won’t make its Kubernetes support generally available until it’s CNCF certified, he points out.
“Without naming names, the CNCF and Kubernetes supporters would be the winners,” says Lyman, articulating who "won" containers in 2017. “The losers might be those that focused on containers inside virtual machines.”
Unhooking from VMs
So what happens next? The needle in 2018 will move towards containers running separately from VMs, or entirely in place of them, according to Lyman. “The more VM-oriented the vendors, the less forward-looking they are.” This is partly what makes Kata Containers, another project that punctuated the end of 2017, interesting.
The project merges Intel’s Clear Containers and Hyper’s runV, creating containers that run in lightweight VMs of their own. The idea is to create a trade-off between two execution choices, according to Jonathan Bryce, executive director of the OpenStack Foundation, which manages Kata.
The first choice is to run all your containers in a single kernel per server, which maximises efficiency but gives you a low level of isolation per workload. Alternatively, you can run each container in a full VM, which isolates it but creates a high resource overhead.
Kata uses a lightweight VM with a trimmed down kernel that Bryce says provides the same level of security separation as a regular VM.
“We’ll end up with two paths still, but they’ll be much closer than the existing paths of running everything in one kernel or everything in a full virtual machine,” Bryce says.
It’s still effectively running containers in VMs, though. Will truly bare metal operations become more prevalent? Aniszczyk thinks so, pointing out that Twitter did it when he was working there. He cites benefits including performance, and escaping what he calls the “VMware tax”.
“If you look at some of the projects coming out of CNCF and with OCI making containers more standardized, it’s actually easier to run infrastructure on bare metal,” he says. “I think you’ll see more bare metal usage of containers down the road because it’s easier to set up than in the past vs setting up a very complicated OpenStack environment.”
There was a clear reason for Amazon to announce bare metal as a service at re:Invent earlier this month, he adds. “We know Amazon listens to their customers.”
So, 2018 promises more of the same for OCI and CNCF as in 2017. The CNCF is targeting some other areas that will help make containers more digestible for enterprises. One of these is Prometheus, its monitoring project. Other notables include Fluentd for logging and Notary for security.
The OCI may have published two specs but it hasn’t completely spent its wad yet. 2018 will probably see it drop a final spec: distribution.
“Right now if you go to fetch and share containers, there’s a whole set of different registries out there,” says Aniszczyk. “Their APIs are all subtly different and may not support all features of distributing containers.”
The group will take Docker’s registry format – which Aniszczyk calls a de facto standard – and formalise it with the blessing of an OCI specification.
Maybe these formalized specs will help enterprises move their container initiatives along even further in the coming year. ®