Original URL: https://www.theregister.com/2007/01/03/developing_legacy_systems_part1/

Adopt and adapt better than rip and replace...

Developing with Legacy Systems – Part 1

By David Norfolk

Posted in Software, 3rd January 2007 17:44 GMT

If the internet has done nothing else, it has made the parochial world of proprietary systems appear outdated. Software architecture "A" now had better interoperate with software architecture "B", or risk rejection as unfit for purpose; and both need to interoperate with language "X" and application "Z". In practice, this holds even when the languages and applications involved are ancient and venerable, because they are still running tasks critical to the survival of the business.

The arrival of SOA has made this an imperative, for "sweating the assets" is now a strong mantra amongst business and organisational users. But developers and business managers need to work together to decide what actually constitutes an "asset". It is easy to assume that the asset is just the data, which can then simply be ported to a new environment. In practice, however, the asset is more commonly both the data and the application that created it, and the pair are often inseparable: the application may be the only detailed documentation of the underlying business requirements that exists, and it frequently contains hardcoded "master data" as well.

Risk management

It is also the case that any move away from an existing asset carries risk as well as advantage: change can mean failure as well as success. So integrating legacy assets that still perform a valuable task for the business remains a serious option for developers to weigh in business terms, however much they might relish the technical challenge of a rewrite.

This does not mean legacy integration is the de rigueur option. Redeveloping an application as a service component designed for an SOA environment has to remain on the table, and it will become a more pressing option over time, if only because the staff with the knowledge and experience needed to maintain many legacy applications belong to the "baby boomer" generation and are now nearing retirement age.

Vendors are, therefore, now offering businesses with established legacy systems a growing range of options that can integrate and absorb those important applications into the newer world of SOA and the Internet, without forcing the risks associated with "rip and replace" approaches.

Legacy shift

One option comes from HP which, because of its commitment to Intel's Itanium processor at the heart of its high-end server range, has taken an important step with two of its acquired legacy environments – DEC's old OpenVMS and Tandem's old NonStop (see the whitepapers here and HP NonStop evolution here). The company has invested in porting both environments to run natively on Itanium, which at least gives existing applications a current, more maintainable server platform. That should extend the life of both systems, which matters because both still run business-critical applications. And since both also offer potential advantages for managing complex business transactions across the web, it will be interesting to see whether HP takes the further step of promoting them to new users with support and developer-education programmes.

HP is also a contributor to another move to shift existing legacy applications into the service-based world. Together with HP and Intel, Oracle used OracleWorld last year to launch the Application Modernisation Initiative, which sets out to analyse and assess an existing portfolio of mainframe applications and then propose a standards-based solution. This runs on a reference architecture built on HP/Intel hardware and management software, together with Oracle's database and applications plus its Fusion Middleware and Grid Controller. In practice, the approach is only likely to appeal to existing Oracle-on-HP users, and it comes as close as any to solving the legacy issue via "rip and replace".

A classic mainframe application environment is the 40-year-old Information Management System (IMS). This is still going strong, and still attracting specialist vendors such as Legacy Migration Solutions and Seagull Software, which are coming up with ways of integrating existing applications into web-based environments. Seagull already has an IMS integration offering, but its latest development, the LegaSuite IMS Gateway, bypasses the normal requirement to use IMS Connect to integrate IMS transactions. That makes it possible to integrate directly with interfaces such as XML, WSDL, Java Command Beans and .NET assemblies without requiring any additional IMS code.
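To make the XML-interface idea concrete, here is a minimal sketch of what calling an IMS transaction through such a gateway might look like from the client side. Everything here is hypothetical – the envelope format, the element names, and the "GBAL" transaction code are invented for illustration and are not part of LegaSuite or any real product; real gateways define their own schemas, typically via WSDL.

```python
# Illustrative sketch only: the envelope format, element names, and the
# "GBAL" transaction code are hypothetical, not a real gateway's API.
import xml.etree.ElementTree as ET

def build_request(trancode: str, fields: dict) -> bytes:
    """Wrap an IMS transaction code and its input fields in an XML
    envelope of the kind a web-service gateway might accept."""
    root = ET.Element("imsRequest", {"trancode": trancode})
    for name, value in fields.items():
        ET.SubElement(root, "field", {"name": name}).text = str(value)
    return ET.tostring(root, encoding="utf-8")

def parse_response(payload: bytes) -> dict:
    """Flatten the <field name="...">value</field> elements of a
    gateway-style XML reply back into a plain dictionary."""
    root = ET.fromstring(payload)
    return {f.get("name"): f.text for f in root.iter("field")}

# Build a request for a hypothetical balance-enquiry transaction...
req = build_request("GBAL", {"account": "12345678"})

# ...and parse the kind of XML reply such a gateway might return.
sample_reply = (b'<imsResponse trancode="GBAL">'
                b'<field name="balance">104.50</field></imsResponse>')
result = parse_response(sample_reply)
print(result["balance"])  # the balance field extracted from the reply
```

The point of the pattern is that the client deals only in XML over a network connection; no IMS-specific code runs on the caller's side, which is what "without requiring any additional IMS code" amounts to in practice.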

Beyond technology

It is, perhaps, worth restating that a key reason for any developer to consider legacy integration is to build the most appropriate and cost-effective solution to new business requirements. This is why the integration of legacy applications is already moving beyond technical integration with the latest interface protocols and standards. There is, for example, a growing requirement to ensure that the legacy functionality at the heart of past and current business processes can be fully integrated with future ones, particularly as those develop to encompass more complex and diverse service requirements.

An example of this is a partnership deal struck last August between DataDirect and webMethods. The former is the data-connectivity arm of Progress Software, which bought NEON Systems and its Shadow mainframe-integration technology a year ago, while the latter specialises in business process integration. The objective is to provide an environment in which business processes that encompass both legacy mainframe and distributed web-based applications can be managed and monitored from a single point. The Shadow system is deployed directly onto the mainframe to provide not only web service integration but also interoperability between multiple mainframe environments. As this mimics the capabilities of the increasingly common Enterprise Service Bus (ESB) technologies now forming the backbone of many SOA installations, the company has decided to dub it a "Mainframe Services Bus".
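The routing idea behind an ESB (or the "Mainframe Services Bus" variant) can be sketched in a few lines: producers hand typed messages to the bus, which delivers each one to whichever endpoint registered for that type, so a mainframe adapter and a web-service proxy look identical to callers. All the names below are illustrative assumptions; this is a toy model of the pattern, not how Shadow or any real ESB product is implemented.

```python
# Toy model of ESB-style routing: endpoints register per message type,
# and callers address the bus rather than any specific backend.
from typing import Callable, Dict

class ServiceBus:
    def __init__(self) -> None:
        self._routes: Dict[str, Callable[[dict], dict]] = {}

    def register(self, msg_type: str, handler: Callable[[dict], dict]) -> None:
        """Attach an endpoint (a mainframe adapter, a web-service
        proxy, ...) to one message type."""
        self._routes[msg_type] = handler

    def send(self, msg_type: str, payload: dict) -> dict:
        """Route a message to the endpoint registered for its type."""
        if msg_type not in self._routes:
            raise LookupError(f"no endpoint registered for {msg_type!r}")
        return self._routes[msg_type](payload)

bus = ServiceBus()
# A stand-in for a mainframe-side transaction...
bus.register("ims.balance",
             lambda p: {"account": p["account"], "balance": "104.50"})
# ...and for a distributed web service, reached through the same bus.
bus.register("web.notify", lambda p: {"status": "queued"})

print(bus.send("ims.balance", {"account": "12345678"})["balance"])
```

The single-point management the vendors promise follows from this shape: because every message crosses the bus, that is the one place where routing, monitoring, and auditing can be applied uniformly to mainframe and web-based endpoints alike.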

And everyone wants to get on a bus these days…®