The Devils of DevOps stick it to YOU

It's not going to save you: You still need data to stop a fail

DevOps can solve anything, can’t it? Well, no. In fact, if you don’t implement DevOps correctly, you’ll find that not only do you carry over the problems of the old world, but you give birth to a few brand new ones, too.

Application Performance Management (APM) firm Dynatrace has consulted a bunch of industry nerds and experts, including DevOps icon Gene Kim, co-author of The Phoenix Project, to identify the risks and problems hidden in the details of DevOps.

First, the good news. Every DevOps ebook likes to start off with the feel-good factor, so positive statistics often work best. DevOps stats from PuppetLabs reports dated 2013 and 2014 claim that DevOps, once implemented, can result in 30 times more frequent deployments (yes, we can believe that), twice the change success rate (not that sexy, but believable), teams that are two and a half times more likely to exceed profitability goals (OK, we get it) and (wait for it) 8,000 times faster lead times (really? Seems a bit rich).

But it’s not all bright lights, fizzy drinks and party cake, according to author Kim and his Dynatrace pals.

One of the major activities inherent to the world of DevOps is exchange: connection and the ability to pass code and other information from one place to another. This key benefit is also a key point of failure. Too many hand-offs between different working groups or individuals are a problem. The number of passes between teams has a negative impact on team communication, alignment and automation. All of this, it appears, means that lead times on projects increase.

Further, if you have lots of work bouncing back and forth between teams, then there’s a growing risk that teams will act independently of each other unless a concerted effort is made to set shared goals that they are measured against.

There’s an added complication as people organize into teams founded on processes. In the past, developers have struggled to reproduce test or production failures, often with nothing to go on but an ambiguous log message from the production environment. The problem here stems from the fact that developers have no clue how to fix the issue to hand because they lack vital factual and contextual data about a failure. That problem doesn’t go away just because you’re now "doing" DevOps. In fact, thanks to immutable lifecycles and microservices, it becomes more acute: more teams working on parts of, or complete, applications or services demands a more unified view of performance data across those teams.
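
What that contextual data looks like at the code level is nothing exotic. Here’s a minimal sketch using Python’s standard logging module – the service name, release tag and request ID fields are our own illustration, not anything Dynatrace prescribes – showing how a log line can carry the facts a developer needs to reproduce a failure:

```python
import logging

# Every record carries context (service, release, request id) so a
# production failure isn't just an ambiguous one-liner. The field
# names here are illustrative, not a prescribed schema.
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(message)s "
           "[service=%(service)s release=%(release)s request_id=%(request_id)s]",
    level=logging.INFO,
)
log = logging.getLogger("checkout")

def charge(order_id: str, amount_cents: int) -> None:
    ctx = {"service": "checkout", "release": "1.4.2", "request_id": order_id}
    try:
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        log.info("charged order %s for %d cents", order_id, amount_cents, extra=ctx)
    except ValueError:
        # exc_info=True attaches the stack trace -- the vital factual
        # data that's usually missing from the ambiguous log message.
        log.error("charge failed for order %s", order_id, exc_info=True, extra=ctx)

charge("ord-123", -50)
```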

Also, there’s pressure on resources. More projects and teams working on the same software or service increase the pressure on resources in both dev and test. Yes, you might have one hundred VM instances on AWS, but what happens when project number 101 is mandated, or you take on a new dev and test team of five people? It’s back to the AWS contract re-negotiation board.

According to Dynatrace, what it calls “key resources” can easily be overused. All too often, they are sucked up for ad-hoc tasks. “Shield this unplanned work, increase flow and reduce work in progress,” Dynatrace said.

Testing is a hidden issue. The fact that testing happens too late in the software development lifecycle is not news. But now, in a DevOps world, we can state it in expanded form: automated testing happens too late in the DevOps-centric software development lifecycle, where code and/or configuration changes are made at what is supposed to be a higher cadence. There’s a need, therefore, for some kind of automated testing discipline to be created so problems are not found, and solved, late in the lifecycle.
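
What might that discipline look like in practice? A minimal sketch in Python – the config fields, limits and file name are invented for illustration – is a fast validation test that runs on every code or configuration change, so a bad change fails at commit time rather than deep into the lifecycle:

```python
# smoke_test.py -- a sketch of "testing early": a fast check wired
# into the pipeline so a bad change fails at commit time, not in
# production. The config format and limits are invented.

def validate_config(cfg: dict) -> list[str]:
    """Return a list of problems; an empty list means the change is safe."""
    problems = []
    if cfg.get("replicas", 0) < 1:
        problems.append("replicas must be at least 1")
    if cfg.get("timeout_seconds", 0) > 30:
        problems.append("timeout_seconds above 30 risks cascading stalls")
    return problems

def test_known_good_config_passes():
    assert validate_config({"replicas": 3, "timeout_seconds": 5}) == []

def test_zero_replicas_is_caught():
    assert "replicas must be at least 1" in validate_config({"replicas": 0})

if __name__ == "__main__":
    # Runnable without pytest too: `python smoke_test.py`
    test_known_good_config_passes()
    test_zero_replicas_is_caught()
    print("smoke tests passed")
```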

Another problem with DevOps is that it controls the flow of work and places it in methodical order, ready to be actioned by appropriate engineers. Except that’s not always what happens. According to Dynatrace: “Work sits in-queue as opposed to being actively in-work [so that] your work teams are not building quality in at the beginning by implementing automated testing and deployment strategies. Work isn’t being completed on time and quality suffers.”

The answer is to cut work up into smaller packages and make quality their number-one priority.
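
There’s a one-line queueing-theory justification for smaller packages: Little’s Law, which says average lead time equals average work in progress divided by average throughput. A back-of-the-envelope sketch (the numbers are invented):

```python
# Little's Law: average lead time = average work in progress / throughput.
# The numbers below are invented, purely to show the effect of batch size.

def lead_time_days(wip_items: float, throughput_per_day: float) -> float:
    return wip_items / throughput_per_day

# Same team, same throughput (4 items/day), different amounts of WIP.
print(lead_time_days(wip_items=40, throughput_per_day=4))  # 10.0 days
print(lead_time_days(wip_items=8, throughput_per_day=4))   # 2.0 days
```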

New IT initiatives under the label of DevOps are dramatically increasing the number of possible vectors for attack. This has produced an increase in the number of cryptographic keys and digital certificates in enterprises.
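
One concrete consequence of that sprawl is that nobody knows which certificate expires when. As an illustrative sketch – the host list is ours, and a real estate would inventory far more endpoints – Python’s standard library is enough to pull an expiry date out of each endpoint’s TLS certificate:

```python
import socket
import ssl
import time

# Illustrative inventory -- in practice this would be the (much
# longer) list of endpoints the enterprise has accumulated.
HOSTS = ["example.com", "www.python.org"]

def cert_days_left(host: str, port: int = 443) -> float:
    """Return days until the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expiry = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expiry - time.time()) / 86400

for host in HOSTS:
    print(f"{host}: certificate expires in {cert_days_left(host):.0f} days")
```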

Kevin Bocek, chief security strategist at Venafi, told The Reg: “Gartner reports that by 2017, 75 per cent of businesses will have strategic initiatives that cause IT to run in two separate groups: one that continues to support long-term, existing apps that require stability and another that delivers fast IT and supports DevOps teams that are focused on innovation and business-impacting projects.”

So what’s the way to exorcise such devils? Gene Kim is the DevOps guru and author of The Phoenix Project – this generation’s equivalent of an earlier era’s cultural touchstone on marketing and other businesses "doing" community, The Cluetrain Manifesto. His answer: “Improving daily work is even more important than doing daily work,” Kim writes in his book.

Cheesy as this kind of statement might be, it reminds us that DevOps is no cure-all and that its very existence can screw a few things up. ®
