Let's talk about continuous delivery
Skilful driver required: avoid bumps in the road
Continuous delivery is a process in which software development teams focus on keeping their software deployable at all times – building, testing and releasing in short, repeatable cycles – over and above any imperative to pile on new features.
As a technical discipline this is fine in principle, especially if we know where we want to head with the project in hand.
But the navigation and drive for continuous delivery need some sort of auto-GPS or steering mechanism to keep the wheels aligned.
If continuous delivery sees developers feed code into a versioning service, then we need a management tool to direct us from the start. Our steering mechanism and engine controls here come from a change and configuration management system, without which we may be expending energy for nothing.
Software application developers, and also the non-technical members of the team, need to be able to examine their workflow constantly. If that sounds a bit woolly, we mean that we need to be able to control which software changes are being mainlined into the production engine at any moment.
This control will allow us to merge changes into the production stream without the project coming off the rails. Think of it as a car driver wanting to perform a gear shift from second to third and not a block shift from second to fourth, or even worse a lunge from second to fifth gear that could result in a total loss of momentum.
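In Git terms, that controlled gear shift might look like merging one small, reviewable branch at a time into the mainline rather than dumping weeks of divergent work in one go. A minimal sketch in a throwaway repository (the branch, file and commit names here are hypothetical):

```shell
# Throwaway repository standing in for the production mainline
git init demo
git -C demo config user.email "dev@example.com"
git -C demo config user.name "Dev"
echo "v1" > demo/app.txt
git -C demo add app.txt
git -C demo commit -m "Initial release"

# Develop one small change on its own branch...
git -C demo checkout -b feature/gear-shift
echo "v2" > demo/app.txt
git -C demo commit -am "Small, reviewable change"

# ...then merge it back to the mainline as a single, traceable step
git -C demo checkout -
git -C demo merge --no-ff feature/gear-shift -m "Merge feature/gear-shift"
git -C demo log --oneline
```

The `--no-ff` flag forces a merge commit, so each gear change leaves a visible mark in the history rather than disappearing into a fast-forward.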
It gets more complicated when we realise that there is more than one person at the wheel and they all need to be reading the same roadmap. Both the development and operations functions will need full disclosure of all project information and must be able to access it from a common system.
The importance of this will become clearer when we move past development and into user acceptance testing and full-blown production.
It is really hard to do continuous delivery without a good configuration management system. This is because the programming team need to automate build, test and deploy tools and scripts multiple times. They also need to synchronise releases and know how to do a rollback if necessary.
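Rollback is one place where the versioning service earns its keep. In Git, for example, a bad release can be undone with a revert, which records the undo as a new, auditable commit instead of rewriting history (file and release names below are illustrative):

```shell
# Throwaway repo standing in for the release history
git init rel
git -C rel config user.email "ops@example.com"
git -C rel config user.name "Ops"
echo "good" > rel/service.cfg
git -C rel add service.cfg
git -C rel commit -m "Release 1.0"
echo "bad" > rel/service.cfg
git -C rel commit -am "Release 1.1"

# Roll back the latest release: the undo itself becomes a commit,
# so the audit trail shows both the mistake and the correction
git -C rel revert --no-edit HEAD
cat rel/service.cfg
```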
These people are going to need expertise in multiple software tools, which requires a deep layer of intelligent code management to exist at some level or other.
Ok, in fairness, this is more of a task for the DevOps unit than for a pure developer unit, but the programmers may still have to create the documentation, processes or scripts to enable rollback and the like.
The challenge is being able to audit a system to show what was live at any point. We will also have to check whether our configuration management system can handle all the different asset types involved – not just source code, but also images, video, binaries, databases and so on.
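One common way of making "what was live when" auditable in a Git-based setup is to tag every deployment, so any past production state can be reconstructed on demand. A sketch (the tag name and date are hypothetical):

```shell
git init audit
git -C audit config user.email "dev@example.com"
git -C audit config user.name "Dev"
echo "build 1" > audit/app.txt
git -C audit add app.txt
git -C audit commit -m "First deployable build"

# Tag the exact commit that went to production
git -C audit tag -a deploy-2014-01-15 -m "Went live 15 Jan"

# Development moves on...
echo "build 2" > audit/app.txt
git -C audit commit -am "Next build"

# ...but we can still show exactly what was live on that date
git -C audit show deploy-2014-01-15:app.txt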
So where do we turn? We have commercially branded version-control systems and repository tools and we have open-source alternatives.
With Git standing tall as a fully fledged versioning service of choice for many users, surely this is all the toolkit we need? Well, yes and no.
Git is strong but it does have weaknesses in areas such as Java code refactoring. It is granular and powerful on checksum testing but slower when working on various file types such as binary files, where we could argue that the team has less control.
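One partial mitigation for the binary-asset problem, for those staying with Git, is to declare those file types up front in a `.gitattributes` file so that Git never attempts to text-diff or text-merge them (the extensions below are examples):

```shell
# Mark common binary asset types so Git treats them as opaque blobs
# ("binary" is Git's built-in shorthand for -diff -merge -text)
cat > .gitattributes <<'EOF'
*.png binary
*.mp4 binary
*.jar binary
EOF
cat .gitattributes
```

This does not make large binaries faster to store or fetch, but it does stop Git from mangling them during merges.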
But Git has remained very popular (it is one of Linus Torvalds' babies after all) and has had many advocates, despite its pockets of occasionally limited or partially challenged functionality.
Learn to love Git
This is why we have seen the bigger brand names in distributed version management align to dovetail with Git.
Perhaps some of these moves have been taken in part to uphold our suggestion that the heart of continuous delivery lies in configuration management. Let us dig deeper.
Late last year Perforce built Git Fusion to extend its enterprise version management capabilities to Git repositories. In this scenario, software developers currently using Git can continue to use their preferred Git capabilities but now also get hold of new tools for customisation, reuse and sharing of projects.
The technology proposition here is that Perforce users and Git users should be able to use the tools and methods of their choice, but still maintain a “single source of truth” and, crucially, a single interface for the central revision control system. This could be either Git or Perforce, depending on preferences.
Mark Warren, European marketing director at Perforce, says that if you want to have more than one version control system working together that's fine, but it is prudent to choose two that have had their DNA fused.
They should at least have aligned their roadmaps or partnered so that you end up with one system of record – or, more precisely, one single system of (software code) record.
These are the essential building blocks for continuous delivery to exist upon a configuration management backbone. Going further, when Warren talks about configuration and version control of software code, he prefers the concept of “version everything”.
Under this umbrella notion (or versioning doctrine) we could be version controlling all the way back to the kernel source code. Better still, we could be version controlling (or versioning if you will) right back up at the modelling and application architecture level.
“The architecture of a modern application has code blocks, scripting, components and configuration information all the way through it. But there are also notes, annotations, compliancy documentation and perhaps even tutorials in video form making up the total application real estate,” says Warren.
“We almost need to redefine the constituent parts of an application and feed all of these elements into our revision control system if we want to stay on track.”
Mind the air gap
As software development teams now work to push out multiple variants of an application across different platforms, these control factors become even more important.
In an ideal world, continuous delivery starts to become ingrained into the core functions of the DevOps team so that it is symbiotically consumed by the automation and feedback functions that these guys live by.
So is an “automate everything” mantra a good idea when it comes to change and configuration management?
Yes mostly, but not absolutely always, according to Warren. “There might be a 10 per cent human factor in continuous delivery and change management in industry areas such as defence,” he says.
“This could be where an ‘air gap’ exists between developers and operations working together, for example a floppy disk is carried across the hall from one area to another to install sensitive data. Here it becomes more a case of automate almost everything.”
We shelve the change and introduce it at the appropriate point
The deployment process may also require peer review, an inherent part of continuous delivery. Perforce's toolset lets developers stop and request a review at any moment.
But when we say stop we do not mean that we need to stop the application running. We shelve the change, augmentation, bug fixes, new feature or introduction of a new piece of documentation and introduce it at the appropriate point.
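In Perforce this is the shelving workflow; Git's closest analogue is the stash, which parks an in-progress change without committing it and lets you reapply it at the appropriate point. A minimal sketch (file contents and the stash message are hypothetical):

```shell
git init shelf
git -C shelf config user.email "dev@example.com"
git -C shelf config user.name "Dev"
echo "stable" > shelf/app.txt
git -C shelf add app.txt
git -C shelf commit -m "Stable state"

# Half-finished work appears in the working tree...
echo "half-done" > shelf/app.txt

# ...so we shelve it rather than committing it prematurely
git -C shelf stash push -m "awaiting review"
cat shelf/app.txt   # working tree is back to the stable state

# Later, reintroduce the change at the appropriate point
git -C shelf stash pop
cat shelf/app.txt
```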
Remember the hardware
These techniques work best for software and when there is a lot of automation and testing. Where hardware as well as software is being built, there is also a jump-off point at which you need to stop and make a prototype. Since prototypes are expensive, we want to be able to choose a point where we can perform this action effectively.
Have we convinced you yet that the heart of continuous delivery lies in configuration management – or change and configuration management, to be more exact?
It has to be so because continuous delivery without revision control is like drinking from a fire hose, flying blind or anything else that entails a totally misdirected overload of power with no clear strategic end point.
Try continuous delivery without versioning control if you wish, but as a matter of good practice and good sense, we suggest that you don’t try and reinvent the wheel. ®