Mathematical approaches to managing defects
Radical new approaches toward software testing needed?
Most of this is conventional wisdom – although not always put into practice – so what makes CbyC different? Well, the system specification is written in Z, a formal specification notation based on set theory and first-order predicate logic, developed in the seventies by the Programming Research Group at the Oxford University Computing Laboratory (OUCL).
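To give a flavour of what a Z specification looks like, here is a sketch of the classic "birthday book" state schema from Spivey's Z textbook – an illustration only, not a schema from the Praxis work. Z tooling normally typesets schemas with a dedicated LaTeX package such as zed-csp; here the schema box is approximated with a plain array:

```latex
% Illustrative only: the "birthday book" state schema, after Spivey.
% A schema bundles typed state components (above the line) with an
% invariant over them (below the line).
\[
\begin{array}{|l}
\hline
\textit{BirthdayBook} \\
\hline
known : \mathbb{P}\,\textit{NAME} \\
% In proper Z this arrow would be the partial-function symbol
% (\pfun in the zed packages), since not every NAME has an entry.
birthday : \textit{NAME} \rightarrow \textit{DATE} \\
\hline
known = \mathrm{dom}\;birthday \\
\hline
\end{array}
\]
```

The invariant says the set of known names is exactly the domain of the birthday function – the kind of precise, checkable statement about state that an English prose spec leaves ambiguous.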
There is a FAQ here and it has a respectable commercial pedigree: in 1992, the OUCL and IBM were jointly awarded the Queen's Award for Industry for the use of the Z notation in the production of IBM's mainframe CICS (Customer Information Control System) products.
Praxis has developed tools that help you automate the verification of the specification and the comparison of the unambiguous spec with the equally unambiguous SPARK code.
If the two don't differ, the only remaining opportunity for defects in your system is that the spec solves the wrong problem (the spec itself can be verified for completeness and consistency) – so the resources you no longer need for debugging your code can be devoted to analysing the business domain and ensuring that you're solving the right problem.
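SPARK expresses the link between spec and code as contracts (preconditions and postconditions) that its tools prove statically, before the program ever runs. As a rough, hypothetical flavour of the contract idea – sketched in Python with run-time checking, which is strictly weaker than SPARK's static proof and is not Praxis tooling – a pre/post decorator might look like this:

```python
# A rough flavour of design-by-contract, checked at run time.
# SPARK proves such contracts statically for all inputs; this
# sketch only catches violations on inputs you actually supply.

def contract(pre, post):
    """Attach a precondition and postcondition to a function."""
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            # The postcondition relates the result to the inputs,
            # playing the role of the formal spec for this routine.
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r >= x for x in xs))
def largest(xs):
    """Return the largest element of a non-empty list."""
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

print(largest([3, 1, 4, 1, 5]))  # 5
```

The point of CbyC is that the equivalent obligations are discharged by proof at development time, so a violated contract is a defect you never ship.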
This really does work, according to Peter Amey, who has metrics (and that in itself is a sign of a mature process) showing a steady decline in delivered defects over the last decade using CbyC and a steady increase in productivity.
"Of course," he says, "we benefit from Moore's Law, all that unused CPU power can power our verification and proving tools."
He seems to be especially proud of the work Praxis did for the NSA: "The NSA concluded two rather interesting things: (1) the formally-based CbyC development was cheaper than traditional approaches and (2) the software we delivered had zero defects," he claims (see Conclusions in the previously-quoted paper here).
So, why aren't we all using SPARK? There are cultural issues, which mean that CbyC is easier to introduce in a greenfield site. People are frightened of math and proof – and Ada. People whose status comes from their prowess in writing and debugging C++ are unlikely to recommend CbyC to their managers.
And adopting CbyC is a bit of a leap of faith for people unused to proof and formal methods – suppose it is only suitable for simple safety-critical embedded systems and can't cope with the complexity of your business processes?
Only you can answer that last question, by reviewing the published case studies here – but how safety-critical, for your career, are the financial control systems your CEO signs off (on pain of a possible jail sentence) to the regulators?
But what about all the modern innovations such as eXtreme Programming and UML (or, rather, the world of Model Driven Architecture, MDA, as UML is just a modelling language)? Does CbyC mean throwing these out? Not exactly, says Peter Amey.
In Static Verification and Extreme Programming (published in Proceedings of the ACM SIGAda Annual International Conference, available here), he and co-author Rod Chapman say: "We were both surprised and pleased to find out how much XP we already do on high-integrity projects."
And they consider that coding with a human designer and a static-analysis tool such as SPARK Examiner is logically equivalent to pair programming as described by Kent Beck. They suggest that the reason Beck doesn't talk about static analysis in an XP context is that the depth of analysis possible for imprecise languages like Java is very limited; and that the slowness of static-analysis tools not themselves written in, and working on, something like SPARK can make the approach infeasible.
As for UML, Amey considers that SPARK confers precision onto the UML model and makes verification of the generated code easier (see High-Integrity Ada in a UML and C World, Lecture Notes in Computer Science 3063 here).
In fact, he believes that using the UML modelling process in conjunction with SPARK formal verification and auto-generation of C from validated SPARK can deliver more robust C. Writing in 2004, however, he considers that "the semantics [in UML alone] are not rich enough for the rigorous reasoning we require in the production of quality software".
However, I believe that this may no longer be so true for UML 2.0, potentially at least, partly because of its well-thought-through metamodel, which is designed to facilitate UML extension; and partly because of the level of semantic detail that can be supported with the Object Constraint Language.
MDA already supports many of the principles behind CbyC (such as generating new deliverables by automatic transformation of previous deliverables, rather than by duplicating and rewriting them), and perhaps the future of "formal methods" (as used in CbyC) for general software development could lie in their incorporation into MDA processes.
For more detail on SPARK, read John Barnes' book High Integrity Software: The SPARK Approach to Safety and Security.
As for formal methods generally, there is a wealth of information at Professor John Knight's University of Virginia website here.