Under Ocata's covers: Resource management, scheduling, containers

OpenStack's Jonathan Bryce and Mark Collier explain the best bits of the new release

Interview Last week, OpenStack took the covers off its Ocata release. Today, The Register spoke to OpenStack Foundation executive director Jonathan Bryce and COO Mark Collier about three key aspects of the release – Cells v2, the Placement API and resource scheduler, and OpenStack's expanding container support.

Cells v2 brings a new architecture for resource management. Bryce explained that until Ocata, the Nova compute module funnelled resource management calls to a single API point. That arrangement was starting to run into scalability issues.

“The prior generation had a concept of a single API endpoint, single entry point to the compute cloud. If you ended up scaling to thousands of physical servers, that one API entry point had to talk to all those machines,” Bryce told us.

Network latency alone becomes a problem in that model, so the OpenStack developers have spent a year breaking the single endpoint into more manageable chunks.

“Now, you can have smaller chunks of hundreds of servers that each operate in a 'cell', and they are rolled up together into an aggregation API.”

To the end user, the environment still looks like an “endless pool of resources”, Bryce said, but the data centre operator can manage things in “a more scalable, more sound way”.

Cells v2 addresses the scalability of two key aspects of OpenStack: the database and the message queue. If, for example, a 1,000-host deployment is broken into two cells, the 500 hosts in each cell share a smaller database and message queue – and generate less traffic.

As well as scalability, there's a resilience benefit, since losing the database or message queue in one cell doesn't affect another.
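The cell split described above can be sketched in a few lines. This is a minimal, hypothetical model – `Cell` and `AggregationAPI` are illustrative names, not real Nova classes – showing how each cell keeps its own database and message queue while a top-level API presents one pool of resources:

```python
# Hypothetical sketch of the Cells v2 idea: each cell owns a local
# database and message queue, and a top-level API aggregates them.

class Cell:
    def __init__(self, name, hosts):
        self.name = name
        self.hosts = hosts
        self.db = {}       # stands in for the cell-local database
        self.queue = []    # stands in for the cell-local message queue

    def boot_instance(self, instance_id, host):
        # Boot traffic stays inside the cell's own queue and database.
        self.queue.append(("boot", instance_id, host))
        self.db[instance_id] = host

class AggregationAPI:
    def __init__(self, cells):
        self.cells = cells

    def schedule(self, instance_id):
        # Pick the least-loaded cell, then a host inside it.
        cell = min(self.cells, key=lambda c: len(c.db))
        host = cell.hosts[len(cell.db) % len(cell.hosts)]
        cell.boot_instance(instance_id, host)
        return cell.name, host

    def list_instances(self):
        # Fan out across cells, so users still see one "endless pool".
        return {i: (c.name, h) for c in self.cells for i, h in c.db.items()}

# A 1,000-host deployment split into two 500-host cells:
cells = [Cell("cell1", ["h%d" % i for i in range(500)]),
         Cell("cell2", ["h%d" % i for i in range(500, 1000)])]
api = AggregationAPI(cells)
print(api.schedule("vm-1"))   # ('cell1', 'h0')
print(api.schedule("vm-2"))   # ('cell2', 'h500')
```

Losing one `Cell` object here would leave the other cell's database and queue untouched – the resilience property Bryce describes.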

As explained in the video below, Cells v2 is also a response to increasing complexity: if you're using bare metal in one environment and virtual machines in another, they can be grouped so each cell is homogeneous.

[YouTube video]

Placement API and Resource scheduler

Resource scheduling – the software that takes a virtual machine provisioning request and finds a server to take the request – is the other capability Bryce highlighted as a major part of the release.

In Ocata, “the Placement API became the default for how Nova scheduling is managed,” he said.

One reason for the revision, he said, is that private clouds have broadened their scope. “Once, private clouds were focussed on automating virtual machines.

“Now, private clouds have a mix of virtual machines, containers, and bare metal, running Web applications, mobile apps, network function virtualisation (NFV) for telecommunications, and enterprise software like SAP.”

Those workloads have different requirements, so the resource scheduler can't just treat the hosts as identical – it needs to understand a workload's requirements and choose the right pools of hosts.

“For example, if you're trying to run phone calls or mobile data, you'll have specific networking gear for the network function virtualisation,” he said.

That's where the Placement API comes in: it allows an admin to “intelligently request specific attributes when you provision a workload.” Phone calls can run on a particular server configuration, machine learning lands on a server with GPUs, and Web apps can deploy to generic servers.
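The attribute-driven matching Bryce describes can be sketched as a simple filter over resource providers. This is an illustrative model only – the provider names, trait names (`HAS_GPU`, `SRIOV_NIC`) and the `allocation_candidates` function are made up for the example, not the Placement API's real schema:

```python
# Illustrative placement-style scheduling: providers advertise resource
# inventories and traits; a request asks for quantities plus required
# traits, and only matching providers come back as candidates.

providers = [
    {"name": "gpu-node-1", "traits": {"HAS_GPU"},
     "inventory": {"VCPU": 32, "MEMORY_MB": 131072}},
    {"name": "nfv-node-1", "traits": {"SRIOV_NIC"},
     "inventory": {"VCPU": 16, "MEMORY_MB": 65536}},
    {"name": "generic-1", "traits": set(),
     "inventory": {"VCPU": 8, "MEMORY_MB": 32768}},
]

def allocation_candidates(resources, required=()):
    """Providers with enough of each resource and all required traits."""
    return [p["name"] for p in providers
            if set(required) <= p["traits"]
            and all(p["inventory"].get(rc, 0) >= amount
                    for rc, amount in resources.items())]

# A machine-learning workload lands only on the GPU-equipped host:
print(allocation_candidates({"VCPU": 4, "MEMORY_MB": 8192},
                            required=["HAS_GPU"]))   # ['gpu-node-1']

# A generic web app can deploy anywhere with capacity:
print(allocation_candidates({"VCPU": 2, "MEMORY_MB": 4096}))
```

The point of the sketch is the shape of the request – quantities plus required attributes – rather than the scheduler having to treat every host as identical.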

There are other benefits to be had out of giving the scheduler a richer description of resources.

Before the Placement API, Bryce said, “if you wanted to do anything complex, you had to get into the code.

“Placement opens that up. And it's not just about virtualisation – it includes placement information for virtual machines, bare metal servers, virtual storage, and networking.”

That also improves sysadmins' visibility over their environments, he added – and that makes the API useful even in smaller cloud environments, since “you care about where your inventory is, and what's running where”.

Making containers easier

Container-based deployment and management tools have been a hotbed of activity, with OpenStack saying Zun (container management), Kolla (deployment tool) and Kuryr (container networking) are all seeing strong growth in contributor numbers.

“Over the last year or so, we've started to see a lot of connections between Kubernetes, Docker, and OpenStack overall,” he said.

“That was a big theme in Newton, and we saw progress in Ocata.”

Instead of treating containerisation and virtualisation as separate technologies, he said, they're coming together as tools that are used in combination.

Using OpenStack to manage compute, storage, security and multi-tenancy, and then exposing that upwards to an environment like Kubernetes, is highly scalable, he said.

Better control over container environments also helps companies whose systems are subject to security and regulatory requirements, he said.

“It brings those containers into the enterprise networks in a way that works with existing workflows that networking teams are used to”.

Mark Collier outlined other work to watch out for in the upcoming Pike release cycle.

The Kolla lifecycle management environment, he said, will help small and medium-sized companies save money by moving their workloads off hyperscale clouds into private environments. ®
