Successful DevOps? You'll need some new numbers for that

How do you know your feature flags and canary launches worked?


Dark launches, feature flags and canary launches: They sound like something from science fiction or some new computer game franchise bearing the name of Tom Clancy.

In fact, they are the face of DevOps: processes that enable projects to run successfully.

And their presence is set to be felt by a good many, as numerous industry surveys attest.

With DevOps on the rise, then, the question becomes one of not just how to implement DevOps but also how to measure the success of that implementation.

Before I get to the measurement, what about how to roll out DevOps? That brings us back to that Tom Clancy trio.

Let’s start with dark launches. This is a technique to which a new generation of enterprises has turned, and it is relatively commonplace at startups and giants like Facebook alike.

It’s the practice of releasing new features to a particular section of users to test how the software behaves under production conditions. Key to this process is that the software is released without the new features being exposed in the UI.

Canary releases (a close relative of dark launches) and feature flags (or feature toggles) work by building conditional “switches” into the code using Boolean logic, so different users see different code with different features. The principle is the same as with dark launches: companies can get an idea of how the implementation is handled without a full production rollout.
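A feature flag can be as simple as a conditional keyed off a user identifier. The sketch below is purely illustrative (a hypothetical in-memory flag store; real deployments typically use a flag service or configuration system) and shows a Boolean switch routing 10 per cent of users to a new code path:

```python
# Hypothetical in-memory flag store; real systems use a flag service or config.
FEATURE_FLAGS = {
    "new_checkout": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name: str, user_id: int) -> bool:
    """Return True if the flag is on for this user.

    A stable function of the user ID (here a simple modulo) keeps
    each user's experience consistent across requests, unlike
    random sampling on every call.
    """
    flag = FEATURE_FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    return (user_id % 100) < flag["rollout_percent"]

def checkout(user_id: int) -> str:
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"   # the dark-launched code path
    return "old checkout flow"       # everyone else stays on the old path
```

Turning the rollout percentage up gradually is, in effect, a canary release; setting it to zero is an instant rollback without a redeploy.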

Donnie Berkholz, 451 research director, reckons that with processes like these in place you can’t simply re-employ old metrics such as mean time between failures (MTBF) and mean time to repair (MTTR).

Berkholz told The Reg: “Many folks have been discussing a shift from MTBF to MTTR, but very few of them are discussing how this approach coupled with techniques like microservices, dark launches, feature flags, and rolling deployments can lower the overall impact of any issues.”
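For reference, the two metrics Berkholz mentions are straightforward to compute from an incident log. A minimal illustration, assuming a hypothetical list of failure and recovery timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical incident log: (failure_time, recovery_time) pairs.
incidents = [
    (datetime(2018, 1, 1, 9, 0),  datetime(2018, 1, 1, 9, 30)),
    (datetime(2018, 1, 8, 14, 0), datetime(2018, 1, 8, 14, 10)),
    (datetime(2018, 1, 20, 3, 0), datetime(2018, 1, 20, 4, 0)),
]

# MTTR: average time from failure to recovery.
mttr = sum((up - down for down, up in incidents), timedelta()) / len(incidents)

# MTBF: average operating time between recovering from one failure
# and the start of the next.
gaps = [incidents[i + 1][0] - incidents[i][1] for i in range(len(incidents) - 1)]
mtbf = sum(gaps, timedelta()) / len(gaps)

print(f"MTTR: {mttr}")
print(f"MTBF: {mtbf}")
```

Berkholz’s point is that with feature flags and rolling deployments, a “failure” may affect only a sliver of users, so these averages alone no longer capture impact.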

The key issue is not the measurement methods themselves but how the product is being implemented, and in these circumstances it’s important to get the business objectives right.

Kief Morris, cloud practice lead at Thoughtworks and author of Infrastructure as Code, is well aware of some of the difficulties in measuring success. The whole process should begin with establishing the aim of a DevOps project; too often, this is left vague.

Above all, there should be clear indications as to how success is measured. “Companies should be asking: ‘What is it that we need to achieve?’ For example, is it getting new features out quickly? Or fixing bugs?” He says that companies shouldn’t lose sight of the ultimate aim of any project, however. “The metrics can be over-powering. The map is not the territory.”

With that in mind, there are some traditional metrics to contemplate, some of them more business orientated. “You should be looking at things such as revenues and sales.”

Berkholz endorses this business-based approach. “It's critical with any technology to go back to your desired business outcomes, focusing on the benefits and their implications rather than features,” he says.

It’s also important not to be too guided by the DevOps vendors. There are plenty of tools available to help the process but, again, there’s a need to keep focused on what’s really required.

“Being automated is not an end in itself, it's the means to increased agility at lower risk. Vendors have a tendency to approach things inside-out, selling whatever they've got as what enterprises need. Instead, enterprises need to ensure they're building their own list of requirements and outcomes, then finding the right vendor, set of vendors, or systems integrator to create what they need,” says Berkholz.

One area where users could create a metric that's a bit more useful is in having an overall view of deployment, a broader business-based one - not solely the province of the techies.

“One metric I'd love to see more often is speed from feature request to deployment,” says Berkholz.

“This spans from help desk and service management on the feedback end or the line of business and business analysts through product management, development, security, QA, and ops. This is the kind of metric you typically don't encounter outside of environments where broad-scale DevOps and continuous delivery are the norm.”
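The metric Berkholz describes can be approximated from issue-tracker data. A rough sketch, assuming a hypothetical export of request and deployment dates:

```python
from datetime import datetime
from statistics import median

# Hypothetical tracker export: when each feature was requested vs deployed.
features = [
    {"id": "FEAT-101", "requested": datetime(2018, 3, 1),  "deployed": datetime(2018, 3, 15)},
    {"id": "FEAT-102", "requested": datetime(2018, 3, 5),  "deployed": datetime(2018, 4, 2)},
    {"id": "FEAT-103", "requested": datetime(2018, 3, 10), "deployed": datetime(2018, 3, 20)},
]

# Lead time in days, request to deployment, for each feature.
lead_times = [(f["deployed"] - f["requested"]).days for f in features]

# The median is less skewed than the mean by the occasional feature
# that languishes in a backlog for months.
print(f"median lead time: {median(lead_times)} days")
```

Because the clock starts at the request rather than at the first commit, this number spans the business and technical sides of the house, which is precisely why it is rare outside mature DevOps shops.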

This is a view endorsed by Morris, who says companies should take an overall view of metrics. “How long does it take to go from someone having an idea to getting feedback on whether that idea has worked?” he asked.

With different processes in place, is there a danger that projects can get too focused on the metrics: does the measurement over-ride the process itself? Yes, according to Morris.

“Metrics are an approximation – if you treat them as a goal in themselves you get unintended consequences. For example, writing scripts that are malleable just to get those test results,” he said.

But you dispense with metrics at your peril. “Metrics are critical,” states Berkholz. “If you aren't measuring, you have no idea whether you've improved, gotten worse, or stayed the same. Without knowing that, every single change you make is no more than a guessing game built around a HiPPO approach [highest paid person's opinion].

“We're starting to see a rise in truly data-driven businesses, and ensuring that we measure outcomes rather than CPU and memory is what will help IT collaborate with the rest of the business.”

But the crucial point of DevOps metrics is this: they should not be the preserve of the technical departments alone. All DevOps projects should be driven by business needs, meaning there must be better communication between all sides to implement the projects properly. ®
