Original URL: http://www.theregister.co.uk/2013/09/05/continuous_delivery/

Continuous delivery: What works (and what doesn't)

Software on the assembly line

By Adrian Bridgwater

Posted in Developer, 5th September 2013 10:30 GMT

The notion that we might just as well automate everything is common in the perpetually dynamic world of continuous delivery.

What does continuous delivery (of software applications) actually entail, and is it a practical solution all the time?

It is effectively like refuelling your car while driving. Or perhaps a better analogy would be a factory assembly line that produces cream cakes. While new raw materials are pumped into hoppers and tanks feeding the manufacturing process, the assembly line keeps moving to produce more cakes.

Everything is continuous unless quality control or management decides to stop the production line because of errors, sub-standard products or for maintenance.

Let them eat cake

It is the same for software as for cream cakes: the assembly line (the programming team) works constantly to feed raw materials (blood, sweat and code) into the production plant (revision control system) so that the cake mix (automated integration phases) can keep producing new end product.

Cake tasting by batch (static analysis) is joined by higher-level quality control (unit testing) and the finished product is delivered to the cake shop (the pre-production or production environment) continuously.

This all works fine in principle, unless of course the consumer really wanted bacon sandwiches in the first place.

It is even worse if the consumer has a gluten allergy that precludes eating cakes anyway. In other words, the user is more important than the process, however efficient it is.

Even if we accept all the above, what processes should still be manual and why? Moreover, how do we mitigate risk throughout the lifecycle of a continuous delivery project?

Websecurify, a vulnerability scanning company, explains that during the test phases of a continuous delivery project we should see automated security testing tollgates employed to identify vulnerabilities.

“If a critical vulnerability is identified, the process is stopped and feedback delivered to the development team. The pipeline cannot complete before the critical issues are remediated, therefore ensuring better security,” says the firm.

Websecurify further details the difference between static (white-box) automated security testing, which works on the application source code, and dynamic (black-box) analysers, which perform real-time tests simulating an actual attack.
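
To make the tollgate idea concrete, here is a minimal Python sketch of how such a gate might sit in a pipeline. It assumes a hypothetical command-line scanner (called simply scanner below – not a real Websecurify invocation) that prints one finding per line, with critical ones labelled as such:

    import subprocess
    import sys

    def security_gate(target_url):
        """Stop the pipeline if the scan reports any critical vulnerability."""
        # 'scanner' is a hypothetical stand-in for the team's real tool;
        # assume it prints one finding per line, e.g. "critical: SQL injection".
        result = subprocess.run(
            ["scanner", "--target", target_url],
            capture_output=True, text=True,
        )
        findings = [line for line in result.stdout.splitlines() if line.strip()]
        critical = [f for f in findings if f.lower().startswith("critical")]
        if critical:
            # Feedback goes to the development team and the stage fails, so
            # the pipeline cannot complete until the issues are remediated.
            print("Critical vulnerabilities found:", *critical, sep="\n  ")
            sys.exit(1)
        print("Security gate passed (%d non-critical findings)" % len(findings))

    if __name__ == "__main__":
        security_gate("https://staging.example.com")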

Security attacks are not the only continuous delivery risk and vulnerability: we need to examine application robustness and carry out stringent debugging procedures from front to back. But pure-play security is not a bad place to start.

So is continuous delivery ever a mistake? Why keep pushing releases into production if the software is not effective and efficient in the eyes of the users?

Too much too young

Is there a risk of too much too fast for customers – and shouldn’t users be in charge anyway?

ThoughtWorks chief scientist Martin Fowler has commented that a team is in a state of continuous delivery when it prioritises keeping the software deployable over and above working on new features.

This is a state where push-button deployments of any version of the software project can be channelled to any environment (or platform) on demand. This goes some way to putting users in charge, but not the whole way by any means.
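
As a rough illustration of what push-button means in practice, a deployment need be no more than a thin wrapper that takes a version and an environment. In this Python sketch, fetch-artifact and release are hypothetical stand-ins for whatever artifact store and release tooling a team actually runs:

    import argparse
    import subprocess

    def main():
        parser = argparse.ArgumentParser(description="push-button deploy")
        parser.add_argument("version")       # e.g. "1.4.2" - any built version
        parser.add_argument("environment",
                            choices=["test", "staging", "production"])
        args = parser.parse_args()
        # fetch-artifact and release are hypothetical helper scripts standing
        # in for the team's artifact store and release tooling.
        subprocess.run(["fetch-artifact", args.version], check=True)
        subprocess.run(["release", args.version, "--to", args.environment],
                       check=True)

    if __name__ == "__main__":
        main()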

Fowler and his team remind us that in terms of programming activity, continuous delivery (a term coined by ThoughtWorks) comes down to a process of building executables and running automated tests on those executables to detect problems.

From that point we can push the executables into production (or at least pre-production or production-like) environments as they start to work. The theory is that this allows the team to build incremental extensions to the way software works on the basis of what the users have requested in the first place.
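
A toy rendering of that build-test-deploy sequence, sketched in Python (the Make targets dist, check and deploy-staging are placeholders, not prescribed names), might look like this:

    import subprocess
    import sys

    # Illustrative commands only; a real pipeline would call the team's own
    # build tool, test runner and deployment scripts.
    STAGES = [
        ("build",  ["make", "dist"]),            # build the executables once
        ("test",   ["make", "check"]),           # run automated tests on them
        ("deploy", ["make", "deploy-staging"]),  # push to a production-like env
    ]

    def run_pipeline():
        for name, command in STAGES:
            print("--- stage: %s ---" % name)
            if subprocess.run(command).returncode != 0:
                # A failing stage stops the line, just as quality control
                # halts the cake factory's conveyor belt.
                sys.exit("stage '%s' failed; pipeline stopped" % name)
        print("all stages passed; this build is deployable")

    if __name__ == "__main__":
        run_pipeline()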

Connecting users back to the continuous delivery chain via a process of diligent requirements gathering is key to making sure that it is not a case of too much too fast for customers.

This is when continuous delivery Nirvana is achieved: it is not just a question of continuous delivery, but also one of continuous requirements analysis and user satisfaction. This particular Nirvana is not easily gained without concentrated meditative effort.

As continuous delivery practice lead for Europe at ThoughtWorks, Kief Morris echoes that sentiment.

"The most important places to have manual steps in the delivery process are reviewing feedback and data about how people are using the software, deciding what changes to implement, and implementing the changes,” he says.

"The purpose of automation in continuous delivery is to allow the team to focus their attention on what to build, rather than spending their time pushing bits onto servers."

Featured on Facebook

So who does continuous delivery and where does it work well? Facebook is said to deploy at least a couple of times a day under its “ship early and ship often” mantra; it therefore delivers continuously to facilitate this dynamism.

With a web-facing front end and a cloud-located back end, this is fine for Facebook. Users wouldn’t notice a new delivery any more than they would a page refresh. Outside the web application the process is slightly more involved.

“Continuous delivery is sometimes confused with continuous deployment. But continuous deployment means that every change goes through the pipeline and automatically gets put into production, resulting in many production deployments every day,” says Fowler.

“Continuous delivery just means that you are able to do frequent deployments but may choose not to do it, usually due to businesses preferring a slower rate of deployment. In order to do continuous deployment you must be doing continuous delivery.”

The trouble with continuous delivery, thus explained, is that it works beautifully on paper, but the applied science may be somewhat less precise. Not everything is as easily automated as might be hoped.

For example, building an integration or staging test platform that closely emulates a real production environment is probably not a trivial task. Databases have to be built and populated, software licences installed and so on.
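
As a self-contained illustration of the database step alone, this Python sketch builds and seeds a throwaway SQLite database with fixture data; a real staging build would do the same against a production-grade DBMS:

    import sqlite3

    def build_staging_db(path="staging.db"):
        """Create a schema and load anonymised fixture rows so that staging
        behaves like production. SQLite keeps the sketch self-contained; a
        real pipeline would target the production-grade DBMS."""
        conn = sqlite3.connect(path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS customers ("
            "id INTEGER PRIMARY KEY, name TEXT, tier TEXT)"
        )
        conn.executemany(
            "INSERT INTO customers (name, tier) VALUES (?, ?)",
            [("Fixture Customer A", "gold"), ("Fixture Customer B", "basic")],
        )
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        build_staging_db()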

Mark Warren, European marketing director at Perforce, warns that the tests may take longer to execute than the cycle time between deliveries from development. Imagine cream cakes backing up from the production plant because the conveyor belt through quality control is running at only a tenth of the speed – it will get messy quickly.

“There are scenarios where automation is desirable, such as a front-end social network platform where the cost of continuous updates may be low and impact of failure cheap,” says Warren.

“However for a back-end payment processing system that needs the same level of availability as the national electricity grid, changes must be deployed less frequently. In highly secure environments, such as payment handling, there may need to be an air-gap between development and production systems.

“Someone will need to walk the floppy disk across the room. Where manual steps are required, the process and recording of progress has to fill the gap.”

Repeat after me

In some respects the most common continuous delivery pipelines are not all that different from traditional waterfall processes. In Continuous Delivery, the de facto Bible on the topic by Jez Humble and David Farley, you will find images of the lifecycle that include familiar phases such as “user acceptance testing”.

The difference is that there is a lot more automation so the process can execute more rapidly and can be repeated at will.

From Warren’s perspective inside Perforce, a company marked out for its distributed versioning service, automation depends on a couple of key capabilities: tooling that enables fast, predictable and cross-platform scripting, and a single repository or “single source of truth”.

“The scripting angle is pretty well covered with tools like Puppet, Chef and a few commercial offerings. The single-source-of-truth repository is effectively the version management system (the production plant) that software developers are used to, but now there are additional demands on its performance,” says Warren.

Notably, this version management system needs to be able to version everything – not just source code but also the “entire build, test and deploy environment job, and possibly the built executables too”.

Warren concludes: “If multiple version-management tools are in use, the complexity involved in ensuring a consistent and whole deployment is hugely increased. If it can’t handle, say, large binary files as well as small JavaScript source, then there is no guarantee as to what is being deployed at any point.”
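
One way to picture “versioning everything” is a manifest that fingerprints source, environment definitions and deployment scripts together, so that any release can be traced back to one exact state. The file names in this Python sketch are purely illustrative:

    import hashlib
    import json
    import pathlib

    # File names are illustrative: the point is that source, environment
    # definitions and deployment scripts are fingerprinted together, so any
    # deployment can be traced back to one exact, reproducible state.
    TRACKED = ["src/app.py", "deploy/playbook.yml", "env/packages.lock"]

    def manifest():
        return {
            name: hashlib.sha256(pathlib.Path(name).read_bytes()).hexdigest()
            for name in TRACKED
        }

    if __name__ == "__main__":
        print(json.dumps(manifest(), indent=2))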

Where do we stand in 2013? We can say with some certainty that the spread of cloud, mobile and web-facing applications into every corner dictates an increased need for continuously delivered rapid application development and deployment.

So why isn’t continuous delivery more prevalent across the common vernacular of tech?

It will be, trust us on that one. ®