Firing up the Erudine engine
Building with behaviour
Stage 5: The model is run to test the behaviours it has captured (a combination of integration and user acceptance testing) and the system "plumbing" is completed.
Stage 6: It now just remains to choose from various deployment options:
- If it's a greenfield site or new development, you can deploy the Erudine model as a conventional new system: cut over to the tested system from the existing, possibly manual, processes (if any) once you're sure it's ready. However, Erudine seems not to have much actual experience of this.
- You can run the Erudine model in parallel with the original system and then cut over once everyone is comfortable with it. This is the safe, standard option, but it can be difficult to manage duplicate information or system state during the parallel run. It is also expensive in the short term and assumes a mature, well-organised company.
- You can use the Erudine model as a "requirements doc" for a cheap rewrite using conventional coding techniques (probably with cheap outsourced programmers). This is the comfortable option and appears to minimise risk, but you are duplicating effort unnecessarily and forgoing the maintenance benefits Erudine promises (although resurrecting the Erudine model for maintenance will be cheaper than the initial build). If you rewrite the Erudine model, you are probably writing new legacy. Nor is it really risk-free even in the short term: it takes longer, and you need to manage the quality of the rebuild and ensure that the Erudine behaviour isn't compromised along the way.
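The parallel-run option amounts to a comparison harness: feed identical inputs to the legacy system and its replacement, and flag any divergence before committing to cut-over. A minimal sketch of the idea (the two lambda "systems" are hypothetical stand-ins, not anything Erudine ships):

```python
# Illustrative sketch of a parallel-run harness: both systems process the
# same inputs, and any divergence is collected for investigation.

def parallel_run(inputs, legacy_fn, replacement_fn):
    """Run both systems over the same inputs; return mismatch triples."""
    mismatches = []
    for item in inputs:
        old = legacy_fn(item)
        new = replacement_fn(item)
        if old != new:
            mismatches.append((item, old, new))
    return mismatches

# Hypothetical stand-ins for the two systems under comparison.
legacy = lambda x: x * 2
candidate = lambda x: x * 2 if x < 10 else x * 2 + 1  # diverges from x = 10

diffs = parallel_run(range(12), legacy, candidate)
print(diffs)  # prints [(10, 20, 21), (11, 22, 23)]
```

In practice the hard part, as noted above, is the state the two systems accumulate during the run, which a stateless comparator like this glosses over.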
A typical Erudine sell is based on identifying a pain point, such as a legacy system which must be replaced for good business reasons and for which a conventional rebuild is infeasible (or has actually failed). Generally, the sell goes:
- First, as a proof of concept, build a new system related to the legacy target (important enough to matter; not so important as to be a company-killer); then
- Second, under a follow-on contract, recreate the whole legacy system's behaviour with Erudine; and
- Potentially, maintain the system by changing its behaviour in the self-testing Erudine models.
As with any new approach, there are issues to consider. The Erudine approach to legacy reclamation is impressive, more so than the superficially attractive "put an object wrapper around your legacy and deploy it as a service" approach (which reads well but has serious problems in the detail: building standards-based chaos, for one; and whether the legacy really breaks neatly into cohesive services, for another). However, it may not be the only feasible option.
Micro Focus and others have mature automated tools for refactoring and understanding legacy systems by analysing the source code (if you still have it). Micro Focus, of course, has positive case studies (eg, one from as long ago as 2000). And Compuware, for example, provides 4GL tools such as Uniface for rapidly recreating legacy systems in a more agile, business-oriented environment. Compuware's approach may be an unfashionable one, but I think it's also a workable one.
Erudine offers a different approach and probably a more integrated one, with an attractive maintenance story. What makes it different is the underlying mathematical model of behaviour and consequent automatic consistency checking (as I said, this is very hard to assess objectively, as it is a secret "black box"); and the fact that it explicitly manages "knowledge" with conceptual graphs.
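Since Erudine's mathematical model is a secret "black box", one can only illustrate the *kind* of consistency checking it claims. A toy analogue, assuming behaviour is captured as condition/outcome examples: two examples whose conditions match but whose outcomes differ are inconsistent, and a checker can surface that automatically.

```python
# Toy analogue of automatic consistency checking over captured behaviour
# examples. This is NOT Erudine's proprietary model -- just an illustration
# of flagging conflicting examples.

def find_conflicts(examples):
    """Return (condition, outcome_a, outcome_b) for conflicting examples."""
    seen = {}
    conflicts = []
    for cond, outcome in examples:
        key = frozenset(cond.items())  # order-independent condition match
        if key in seen and seen[key] != outcome:
            conflicts.append((dict(key), seen[key], outcome))
        seen.setdefault(key, outcome)
    return conflicts

# Hypothetical captured behaviour for a loan-approval rule.
captured = [
    ({"customer": "retail", "amount": "high"}, "manual-review"),
    ({"customer": "retail", "amount": "low"}, "auto-approve"),
    ({"amount": "high", "customer": "retail"}, "auto-approve"),  # conflict
]
print(find_conflicts(captured))  # flags the manual-review/auto-approve clash
```

A real behaviour engine would have to reason over partial and overlapping conditions, not just exact matches, which is presumably where the mathematics comes in.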
If it delivers on its promises, it moves rule-based systems to a higher level of knowledge management (the rules community was originally AI-focused; it now plays down AI, and perhaps Erudine puts it back, to an extent). Nevertheless, Erudine is difficult to evaluate without case studies – and case studies could succeed or fail for reasons unconnected with the use of Erudine. In any case, most of Erudine's best customers aren't talking publicly. Still, the secondary evidence of Erudine's workability from Gartner and others, and from unattributable sources, appears good. ®