Original URL: https://www.theregister.com/2006/02/01/development_risk/

Doomed from the start: considering development risk

Hell for a developer is ...

By David Norfolk

Posted in Software, 1st February 2006 11:32 GMT

Comment "Hell" for a developer is working on a project that is doomed to failure from the start. No matter how ill-conceived the business case and unreasonable the external constraints, the hapless developer still has a good chance of collecting the blame. And even if he escapes the massacre of the innocents (following on from the promotion of the guilty), the odour of failure clings to all involved.

But even worse is the feeling of working hard and professionally on something that is ultimately pointless. Why this train of thought should have led me to a consideration of the NHS (National Health Service) National Programme for IT (NPfIT; see also NHS CfH - Connecting for Health is the Department of Health agency responsible for delivering NPfIT) I'm not sure, but this project does seem to exemplify one with high scores in all the risk categories I'd review before starting a project:

• It's a very large project, and the Government's record with large projects certainly isn't better than anyone else's.

• It involves massive changes to existing systems.

• It cuts across organisational boundaries (hospitals and GP surgeries) and relies on outsourced services.

• It has legal/regulatory issues - doctors are responsible for the governance of patient records, and the Data Protection Act applies to much of the information.

• It is a highly visible project, raising considerable press interest.

• Top management (in this case, probably even our Prime Minister) is taking a lively and, possibly, ill-informed interest.

• It has safety-critical aspects.

• Resources are limited and, in theory, tightly controlled.

• It involves new technologies.

• Few of those involved can have much experience with similar projects - US healthcare is very different and the NHS is an unusually large operation, even in a global context.

An important first stage in any project is risk assessment, looking not only at project risk (the risk of the project failing) but also at the operational and business risks the project will have to address (often, and somewhat misleadingly, called "non-functional requirements"). Far better to embark on a high-risk project with your eyes open, armed with some risk mitigation strategies and contingency plans, than with the "positive attitude" (aka hysterical optimism) so beloved of many of my past employers.
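To make that concrete, here is a minimal sketch of the sort of risk register such an assessment might produce. The categories, scores and mitigations below are invented for illustration (they are not taken from CfH or anyone else's methodology), but the likelihood-times-impact scoring is the common pattern.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (minor) to 5 (catastrophic)
    mitigation: str

    @property
    def exposure(self) -> int:
        # Simple likelihood x impact score, as used in many risk registers
        return self.likelihood * self.impact

# Hypothetical entries, loosely echoing the categories listed above
register = [
    Risk("Very large project; poor track record at this scale", 4, 5,
         "Break delivery into independently useful increments"),
    Risk("Cuts across organisational boundaries (GPs, hospitals, outsourcers)", 4, 4,
         "Secure stakeholder buy-in before development starts"),
    Risk("New technology, unproven at the required volumes", 3, 4,
         "Pilot and load-test; keep existing systems available as a fallback"),
]

# Review the register highest-exposure first, with a mitigation against each item
for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{risk.exposure:>2}  {risk.description}")
    print(f"    mitigation: {risk.mitigation}")

The point is not the arithmetic but the discipline: every risk gets named, scored and given a mitigation or contingency plan before the project starts, rather than being waved away with a positive attitude.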

So, are there unconsidered risks with the NPfIT NHS project? Well, as I write this it looks as if there may have been, as a written answer in Parliament has disclosed that only £234 million of the £6.2bn estimated for the project (under 5 per cent of the contracts outstanding) has been paid by CfH, two years in. This implies that the project as a whole may be slipping, which may introduce further risk if suppliers aren't being paid enough to service their contracts properly.

Moreover, the installation of the new technologies is not proceeding as smoothly as one would like, I read, with service reliability and availability problems plaguing connections to the NHS spine over Christmas and the New Year. However, as CfH says, "The upgrade release to the NHS Care Record Service over the weekend of December 17 and 18, 2005, was the largest and most complex to date." But should that really mean that we must expect problems?

Inspired by our recent Mumps or M story (and M is clearly alive and well in many UK GP surgeries - readers mentioned EMIS and Protechnic Exeter as using M, in the past at least; both now have NPfIT solutions too), some of our correspondents involved with UK healthcare systems have contributed their points of view (no attributions, as it seems to me that speaking out here could well be career limiting). These make interesting reading.

One clinician commented: "Personally, I'll be glad to see the back of the Mumps based system as it (or its implementation) is not up to what I need as a clinician in the new NHS. However, while I hope for a decent replacement, the chaos of the NHS IT program just leaves me worried that whatever is going to replace the old MUMPS version is going to be so 'improved' that it will be slower, restricted, inefficient and unreliable - just like most of the other 'innovations' in the NHS."

Obviously, there's not much resistance to the idea of innovation there, but there are indications of a failure to achieve "buy-in" from all stakeholders (users, developers and regulators alike) before starting the new developments - a crucial first step in any software development project.

A correspondent with experience of one supplier commented: "The [M-based] Protechnic Exeter product in question was originally developed by a cooperative of GPs."

I'd agree that it's always good to involve the target users heavily if you want true buy-in - and, where GPs are responsible for the security of patient records, it's almost essential. I'm less happy about users being entirely responsible for development (application development is a science, or at least a craft, in its own right), even though this approach could be very successful in specific cases.

One of the reasonable expectations for NPfIT, of course, is improved interoperability from the elimination of fragmented technologies. But this implies a potentially huge integration problem (and it is thus interesting that InterSystems - supplier of Caché, a legacy M technology, and of much more these days - has a rather impressive new integration product called Ensemble on offer).

A reader with experience working for a health systems supplier comments that his company "had to abandon one large-scale integration project when two customers combined, because a feasibility study showed that it wasn't. They simply handled data too differently. These two customers were both using our highly specialised (if quite flexible) product; organisations using different products have a correspondingly lower chance of interoperating".
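To see why "handling data too differently" sinks integration projects, consider a deliberately invented example (it reflects neither of the products above, and the identifiers and codes are made up): two systems that both hold a patient's identifier, date of birth and allergies, but encode each of them differently.

from datetime import date

# Hypothetical record from system A: NHS number stored with spaces,
# date of birth as a DD/MM/YYYY string, allergies as free text
record_a = {"nhs_no": "943 476 5919", "dob": "07/01/1946",
            "allergies": "penicillin, latex"}

# Hypothetical record from system B: identifier as a bare integer,
# date of birth as a YYYYMMDD integer, allergies as coded values
record_b = {"patient_id": 9434765919, "birth_date": 19460107,
            "allergy_codes": ["Z88.0"]}

def normalise_a(rec: dict) -> dict:
    day, month, year = (int(part) for part in rec["dob"].split("/"))
    return {"nhs_no": rec["nhs_no"].replace(" ", ""),
            "dob": date(year, month, day),
            "allergies": [a.strip() for a in rec["allergies"].split(",")]}

def normalise_b(rec: dict) -> dict:
    raw = str(rec["birth_date"])
    return {"nhs_no": str(rec["patient_id"]),
            "dob": date(int(raw[:4]), int(raw[4:6]), int(raw[6:])),
            # Coded allergies have no free-text equivalent; reconciling them
            # needs an agreed terminology, which is where such projects stall
            "allergies": rec["allergy_codes"]}

print(normalise_a(record_a))
print(normalise_b(record_b))

Identifiers and dates can be normalised mechanically; the clinical content (free text against codes) cannot, and that is the sort of difference a feasibility study rightly treats as a show-stopper.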

And, of course, if you bring in new and untried (in these applications) technologies, that may or may not scale to the volumes required, will this make integrating existing data easier? And, if you're dependent on fewer suppliers, each successfully selling platforms aimed at the general marketplace, will you be able to pressurise them into adopting your agenda rather than their own?

I don't want to be too negative, however. A correspondent working in the NHS commented: "I don't think GPs should be writing software, though. My impression is that by and large NHSIT do have a good grasp of the situation and technology, and that with a bit of luck and proper funding and commitment (and there's the key, it has to be over at least 10 years for the current drive to work), the NHS's IT infrastructure will end up in a much better state, even if it also has new problems." Let us hope so, but proper risk assessment and management early on could help reduce the requirement for luck.

There is a huge developer resource available to most project managers: the developers' (and users') "organisational memory" of why things are as they are. It is all too often neglected (and often lost forever when the dubious financial benefits of outsourcing are discovered). This is why calling in outside contractors can increase some risks (while mitigating others - your in-house developers know well where they've come from, but may have little actual experience of where they're going).

It's also why agile techniques, such as using stories in eXtreme Programming to capture "requirements", can reduce risk, because stories tap into "organisational memory" (unless you've sacked it); and why federating legacy applications as services is becoming popular. Even in the new NHS, it might be possible to retain organisational memory by "wrapping" existing applications to look, and interface, just like new NPfIT applications.
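As a sketch of what such "wrapping" might look like in practice - the module, class names and record here are entirely hypothetical, not any real NPfIT or GP-system interface - a thin facade can present an existing application as an ordinary service, so new applications call it without knowing or caring what sits behind it:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class LegacyPatientStore:
    """Stand-in for an existing application, e.g. an M-based GP system.

    A real wrapper would talk to the legacy system over whatever interface
    it already has (terminal screens, flat files, a proprietary API); here
    it is just a dictionary so the sketch runs on its own.
    """
    def __init__(self):
        self._records = {"4857773456": {"name": "A Patient", "dob": "1946-01-07"}}

    def lookup(self, nhs_no: str) -> dict | None:
        return self._records.get(nhs_no)

class PatientServiceHandler(BaseHTTPRequestHandler):
    # Presents the legacy store as a simple JSON-over-HTTP service, the kind
    # of interface a newer, service-oriented application would expect
    store = LegacyPatientStore()

    def do_GET(self):
        record = self.store.lookup(self.path.strip("/"))
        if record is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(record).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on localhost:8000; GET /4857773456 returns the wrapped record
    HTTPServer(("localhost", 8000), PatientServiceHandler).serve_forever()

The organisational memory embodied in the old application stays in service; only the interface around it changes, and the old system can be retired (or not) on its own timetable.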

Another of our readers, Dr Adrian Midgley, represents, perhaps, just one example of (informally published) "organisational memory" in the NHS for designing GP Clinical Systems, with his series of Hubris essays. These were mostly first published on the academic ("for varying values of academic"), gp-uk jiscmail list funded by the Joint Information Systems Committee (JISC).

Mind you, corporate memory at the sharp end isn't the only neglected resource. A lot of knowledge has been documented in resilient, random-access devices known as books. As long ago as 1975, Fred Brooks was documenting the "second system effect" in The Mythical Man-Month.

This is the tendency to correct all the problems with your first system when you get a chance to revisit it, resulting in an overcomplicated system addressing minor issues with yesterday's technology, which completely fails to exploit current and emerging technologies. The classic example is OS/360, which had great overlay programming facilities but failed to implement virtual memory.

But it seems to me that the new NHS architecture may represent a typical 20th-century centralised Government system, instead of a federated, 21st-century, service-oriented architecture that could exploit existing NHS technology (especially that implemented in GP surgeries). Many of the problems with NHS modernisation seem to come from the fact that the NHS already has extensive local IT systems, in GP surgeries and elsewhere, and centralisation is being imposed from above, through hospital trusts.

In support of this view, one reader commented that "the current consolidation is forcing local systems to change for non-local reasons, hence forcing an upgrade cycle with its attendant problems when it wouldn't usually be needed". In other words, there may be benefits from consolidation for national NHS management, but the pain associated with the changes will be felt by the GP surgeries and current technology suppliers. Will GPs be fully supportive in these circumstances, and is there, for example, some risk associated with asking suppliers of current technology to close themselves down tidily as NHS suppliers, possibly, in preparation for an empty future?

The new isn't always obviously better than the old, even when there are good strategic reasons for moving on. I remember being told to persuade secretaries that the early Windows word processors were "better" than the faster DOS-based systems they were used to - when they were only better in the sense that they allowed managers to do their own word processing, albeit slowly, and get rid of secretaries. And, of course, secretaries were quickly reinstated as management status symbols, so we then had the costs of inefficient word processing (sometimes performed by highly paid managers) on top of the previous secretarial costs. A real victory for business process redesign!

One reader with experience of both old and new NHS systems commented: "The M version [of one supplier's system] still seems to be a far more reliable, stable, faster, cheaper implementation than the full GUI version. M was a wonderful language for RAD, and the speed of the database was truly frightening, MS-SQL still cannot get close to its performance [perhaps he hasn't tried SQL Server 2005 yet, but all the same...]. We used to have 20 dumb terminals running off a 486, and 'cos it was all vt100 the response of each terminal was instantaneous. Also 'cos it was all text based, the users quickly became used to the 'eclectic' interface and data entry was far quicker than the new fangled Windows interface they use in the shiny new GUI."

Hmmm, it's not the first time I've read such comments, and they'd encourage me to put a lot of extra effort into risk mitigation and getting user buy-in to any NHS project I was unlucky enough to be involved with. Also, I might consider keeping the old technology around for a bit, just in case it is needed after all. ®

David Norfolk is the author of IT Governance, published by Thorogood. More details here.