
Port problem, captain: Microservices to the event horizon


Legacy, or technical debt – call it what you will – has always been a major challenge for techies looking to move forward, and never more so than now, as you're being asked to shift data centre software to the cloud.

Possibly the biggest challenge in dealing with legacy is identifying who “owned” an application when it was created – people who will almost certainly no longer be present.

Any documentation left behind – often many years ago – by the implementation team will almost certainly leave a little (a lot) to be desired.

That’s a problem, because you need to know the dependencies that have built up over time and which would impact any migration.

You need to start a process of mapping and discovery.

On far too many occasions, the job of deriving meaningful maps of legacy data centres has fallen to me. Thankfully there is a cheap clue, one even the sands of time cannot erase, present in network monitoring data: the communication ports on which traffic starts and finishes.

Comms traffic is relatively cheap to obtain – modern routers will report on traffic, and this measurement approach does not require coverage of all the distinct VMs or IPs, just the routers. Equally, it is very difficult for any element of an application in a DC to hide; if it’s doing anything, it’s communicating, and using specific ports.
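The router-level measurement above can be sketched as a simple aggregation: collapse raw flow records into weighted edges of a communication graph keyed by source, destination and destination port. The IP addresses, port and byte counts here are made-up illustrations, not real data.

```python
from collections import Counter

# Hypothetical flow records, NetFlow-style, as a router might export them:
# (source IP, destination IP, destination port, bytes transferred)
flows = [
    ("10.0.0.5", "10.0.1.9", 1521, 48_000),  # e.g. a database listener
    ("10.0.0.5", "10.0.1.9", 1521, 12_000),
    ("10.0.2.7", "10.0.1.9", 443,  3_000),
]

# Collapse raw flows into weighted edges of a communication graph.
edges = Counter()
for src, dst, port, nbytes in flows:
    edges[(src, dst, port)] += nbytes

for (src, dst, port), total in sorted(edges.items()):
    print(f"{src} -> {dst} :{port}  {total} bytes")
```

Because only the routers need interrogating, the same few lines scale from a rack to a whole data centre without touching an individual VM.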

Finally, it is very unusual for developers to change default ports: the combination of IP (DNS name if you're lucky) and port is usually unique, so there is little need to change the port. A modest bit of Googling will get you default port allocations in pretty much your favourite data format – a key clue in unravelling what’s going on.
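You don't even need to Google for the well-known ones: most operating systems ship a services database mapping ports to registered names, queryable from Python's standard library. A minimal sketch, with a fallback for ports the local database doesn't know:

```python
import socket

def service_name(port, proto="tcp"):
    """Look up a well-known port in the system's services database;
    return "unknown" for unregistered ports."""
    try:
        return socket.getservbyport(port, proto)
    except OSError:
        return "unknown"

print(service_name(53))    # usually "domain", i.e. DNS
print(service_name(443))   # usually "https"
print(service_name(49152)) # ephemeral range, so likely "unknown"
```

The exact names returned depend on the host's services file, which is why the lookup is wrapped rather than trusted blindly.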

As an example, the diagram below shows the four stages of simplification applied to a map of a synthetic data centre. In decreasing order of complexity (mushiness), they are:

  1. the complete original traffic pattern between distinct IP addresses;
  2. the pattern after port numbers are used to take away DHCP traffic;
  3. after removing DNS traffic by filtering out those port numbers;
  4. and finally, backup traffic, again identified by its relevant port numbers.
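The four stages above amount to repeatedly peeling away edges whose port marks them as infrastructure noise. A minimal sketch, using standard default ports for DHCP and DNS and an assumed backup port (real backup products vary), on invented traffic edges:

```python
# Well-known default ports for the traffic classes stripped at each stage.
DHCP_PORTS   = {67, 68}
DNS_PORTS    = {53}
BACKUP_PORTS = {10000}  # assumption for illustration; check your backup product

# Stage 1: the complete original traffic pattern (src IP, dst IP, dst port).
edges = {
    ("10.0.0.1", "10.0.0.2", 68),     # DHCP chatter
    ("10.0.0.3", "10.0.0.9", 53),     # DNS lookups
    ("10.0.0.4", "10.0.0.8", 10000),  # backup stream
    ("10.0.0.5", "10.0.0.6", 1521),   # the application traffic we care about
}

def strip(edges, ports):
    """Drop every edge whose destination port is in the given set."""
    return {e for e in edges if e[2] not in ports}

stage2 = strip(edges, DHCP_PORTS)    # 2. DHCP removed
stage3 = strip(stage2, DNS_PORTS)    # 3. DNS removed
stage4 = strip(stage3, BACKUP_PORTS) # 4. backup removed
print(stage4)
```

What survives the final stage is the application's own communication structure – the map you were after all along.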

At the end of this process the communication structure is considerably clearer – although possibly not in the pattern that would be widely anticipated:

The four stages of simplification – a map of a synthetic data centre

This filtering process can continue further, using other filters, again simply based on port numbers.

The importance of ports to the process of data centre archaeology – and the useful laziness of developers who do not change defaults – cannot be overstated. Without this information, the task may well prove impossible.

Reverse engineering

Unfortunately, an emerging approach to the design of large-scale IT systems is a major threat to our ability to tease them apart in the future. The micro-service approach usually exploits the HTTP(S) stack and will by default communicate on ports 80 and 443.

For the future data centre archaeologist confronted with a poorly documented (probably cloud-based) system, the comprehension task may well prove impossible without manual inspection of the code on every server. Being presented with a group of IPs all communicating on 80 will be challenge enough, but at least packet inspection may give a clue as to the APIs in play. On 443, however, even that will be impossible.
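That "clue as to the APIs in play" on port 80 comes from the request line and Host header, which travel in cleartext. A minimal sketch of pulling them out of a captured payload – the request bytes here are an invented example, not a real capture:

```python
# A hypothetical cleartext HTTP request captured on port 80.
raw = (b"GET /api/v2/orders/123 HTTP/1.1\r\n"
       b"Host: billing.internal\r\n"
       b"\r\n")

# The header block ends at the first blank line.
head = raw.split(b"\r\n\r\n", 1)[0].decode("ascii", "replace")
lines = head.split("\r\n")

# Request line: METHOD PATH VERSION.
method, path, version = lines[0].split(" ", 2)
headers = dict(line.split(": ", 1) for line in lines[1:] if ": " in line)

print(method, headers.get("Host", "?") + path)
```

On 443 the same bytes arrive TLS-encrypted, so this trick – and with it the archaeologist's last cheap clue – evaporates.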

Currently, the number of micro-service systems in deployment is relatively low, so this issue is one for the deep future (about five years in IT space). Indeed, as micro-services are often in the continuous care of a DevOps team, it might be claimed that the immediate support/dev team (love) will never go away.

Over time, however, reality is likely to bite and the systems will move into stable maintenance – no Dev, maybe some Ops. For these systems, in the medium-term future, if no attempt is made to distinguish traffic at port level, then attempts to decompose the system – and hence understand it again once it enters its inevitable legacy phase – may well prove impossible.

At this point /dev/null may be the only valid destination for these systems, hidden by the micro-service event horizon... from which no system can return. ®


Biting the hand that feeds IT © 1998–2017