When does a system become legacy?

The hero to zero of IT systems

Last time, we were interested in understanding a bit more about the state of our IT systems, and whether we were being held back by what we could loosely term 'legacy'. Perhaps a trickier question to answer, however, is how we decide whether a system is legacy or not.

In an ideal world, when we built IT systems, we'd do so with a good understanding of the requirements, and quickly enough that they delivered a useful service from day one. Even if we achieved such an IT Shangri-La, however, it would only remain so for a short while. Things change, and so, therefore, does the relevance of any given system. Every system follows some kind of decay curve, in which successive events erode its effectiveness until, beyond a certain point, the benefits of having the system in place no longer outweigh its costs.
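To make the idea of a decay curve a little more concrete, here is a minimal sketch (in Python, purely for illustration; the benefit, cost and rate figures are invented assumptions, not measurements) that finds the point at which a system's running costs overtake the value it delivers:

```python
# A purely illustrative decay-curve model: the starting figures and the
# decay/growth rates below are made-up assumptions.

def yearly_benefit(year, initial_benefit=100.0, decay_rate=0.15):
    """Value the system delivers in a given year, eroding as requirements drift."""
    return initial_benefit * (1 - decay_rate) ** year

def yearly_cost(year, initial_cost=30.0, growth_rate=0.10):
    """Cost of keeping the system running, creeping upwards over time."""
    return initial_cost * (1 + growth_rate) ** year

# Find the first year in which running costs overtake the benefits delivered.
for year in range(1, 31):
    if yearly_cost(year) > yearly_benefit(year):
        print(f"Costs outweigh benefits from year {year}")
        break
```

The shape of the curves will differ wildly from system to system; the only point of the sketch is that, given any plausible pair of curves, a crossover point eventually arrives.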

To further complicate things, we need to keep in mind that we're no longer thinking about monolithic, isolated application stacks. Modern systems are more likely to be integrated with each other, and to have dependencies on off-the-shelf components such as workflow engines and content management tools. Perhaps we should even throw virtualisation into the pot: is a legacy application running on a legacy OS still legacy if the whole lot is packaged up into a virtual image running on a blade server?

How, then, should legacy be measured? Is it worth reviewing systems for such things as 'functional coverage' (OK, I just made that up) and 'business relevance', or is it just a case of waiting for the complaints to reach a certain level? Are there any tell-tale signs that a system has slipped in the value rankings and should therefore be relegated to legacy status? Or is the only valid approach ad hoc and reactive: dealing with the fall-out of decisions such as new system deployments, merger activity and so on? Let us know what you think.
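For anyone tempted to put a rough number on it, here is one entirely hypothetical way of combining measures like functional coverage and business relevance into a single score. The metric names, weights and threshold are invented for the sake of argument, and Python is used purely as a convenient notation:

```python
# A back-of-the-envelope sketch only: the metrics, weights and threshold are
# illustrative assumptions, not a recommended methodology.

LEGACY_WEIGHTS = {
    "functional_coverage": 0.4,  # how much of today's requirement the system still meets (0-1)
    "business_relevance": 0.4,   # how important that function still is to the business (0-1)
    "supportability": 0.2,       # availability of skills, vendor support and spares (0-1)
}

def health_score(scores, weights=LEGACY_WEIGHTS):
    """Weighted health score between 0 and 1; lower values suggest a slide towards legacy."""
    return sum(weight * scores[name] for name, weight in weights.items())

example = {"functional_coverage": 0.5, "business_relevance": 0.8, "supportability": 0.3}
score = health_score(example)
print(f"Health score: {score:.2f}")
if score < 0.6:  # an arbitrary threshold, chosen only for illustration
    print("Worth a review before the complaints pile up")
```

Whether any such score captures what really matters is, of course, part of the question above.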
