DevOps

Big, fat fail? Here's how to avoid that: Microservices and you

Time to start acting 'small'

Sure, we've all heard about "microservices" but just what use are they and why would you want them? How do you even start designing microservices?

Most applications built more than a couple of years ago will more than likely use the classic, monolithic approach to application design: everything is bundled into a single – or perhaps several – virtual machines.

At best, a three-tier architecture would be used to separate the components of the application into a web tier, an application tier and a database tier. This approach allowed the first attempts at scalable architectures, albeit with only rudimentary scaling capabilities.

Such designs also came with more than a few issues that most developers and administrators will readily recognise. One large application is difficult to manage from the point of view of both continual development and support. Inevitably the code grows larger and – as it does so – becomes ever harder to maintain. In effect, developers tend to throw everything but the kitchen sink into a single VM, and that comes with a large downside.

As resources became constrained, CPU and memory allocations grew until the VMs became the unwieldy monster machines many of you will be familiar with. The bigger the server, the less efficient it became. It isn't uncommon to see these monsters utilised for only eight to ten hours a day. I have actually seen entire clusters set aside for peak periods of the year, with virtual machines moved and resized to cope with an annual peak. Such a way of working is grossly inefficient and potentially wastes tens of thousands on underutilised hardware.

To add to this complexity, the usual testing methodology meant that every time the application was updated or modified, every component should be retested, because interdependencies that no one saw or anticipated could cause undesired behaviour.

Moving to a more microservice-orientated architecture helps limit the need to retest and revalidate the application as a whole. With microservices, the application is broken down into services, each tasked with providing one small part of the larger whole.

Microservices can be thought of as the virtualisation version of the UNIX philosophy: “Do one thing and do it well.” Providing a full virtualised instance – operating system and all – for a single service may well sound a bit wasteful.

In the classic virtualisation stack it would be, but newer technologies such as Docker eliminate much of that overhead and make designing around microservices a compelling argument. A Docker container is not really a tiny VM: it shares the kernel of its Docker host and isolates just the processes and filesystem the service needs, which is why it starts in seconds rather than minutes.

In isolation these microservices are of little use; they must work together to provide the whole application service. The glue that holds them together is the REST API, which lets the various components interact in a loosely coupled manner.
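
To make that REST glue concrete, here is a minimal sketch using only Python's standard library: a hypothetical "stock" service exposes a JSON endpoint, and another component consumes it purely over HTTP. The service name, the data and the localhost setup are illustrative assumptions, not a real deployment.

```python
# Sketch: two loosely coupled components talking over a REST-style API.
# The stock data and endpoint are made up for illustration.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

STOCK = {"widgets": 42}  # the one small thing this service knows about

class StockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Consumers depend only on this JSON contract, not on how the
        # service is implemented internally - that is the loose coupling.
        body = json.dumps(STOCK).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for any free port; fine for a local sketch.
server = HTTPServer(("127.0.0.1", 0), StockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second component "glues" itself to the service purely over HTTP:
url = "http://127.0.0.1:%d/stock" % server.server_port
reply = json.loads(urlopen(url).read())
server.shutdown()
```

Swap the in-process thread for a container on another host and nothing in the consumer changes – it only ever sees the HTTP contract.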

There are a couple of great things about designing with microservices:

  • Such compartmentalisation lends itself to continuous development: each service can be developed and rolled into new releases independently. This can mean a reduced testing load, potentially cutting development time. Internally the changes may be large, but to the outside world (other applications and developers) there is little or no indication of this.
  • Each microservice can be considered purely in the context of its own code and the API it exposes. This does away with the time and complexity associated with stopping, starting and continuously running those older, large applications.
  • Most cloud providers offer tiered services with different performance characteristics, so each microservice can be placed on the tier that provides the traits it needs. For example, disk-intensive tasks can go on a tier with fast or SSD-backed storage, and memory-hungry tasks on a tier with gobs of RAM.
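
The second point – a service judged only by its API – can be sketched in a few lines. The hypothetical pricing service below rewrites its internals while the external contract (a function signature standing in for the API) stays fixed; the names and numbers are made up for illustration.

```python
# Sketch: internals change, the contract does not.

def price_v1(quantity):
    # Original internals: a simple flat rate.
    return quantity * 10

def price_v2(quantity):
    # Rewritten internals: a bulk discount has been added, but the
    # signature and the meaning of inputs and outputs are unchanged -
    # callers need know nothing about the rewrite.
    rate = 9 if quantity >= 100 else 10
    return quantity * rate

# To the outside world, small orders behave identically across versions:
assert price_v1(5) == price_v2(5) == 50
```

That is the whole trick: as long as the contract holds, each service can be retested, rewritten and redeployed on its own.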

So what technologies make up microservices? This list is not all-encompassing, but is here to provide an idea of the overall tool set.

Microservice instances

Docker is essentially application virtualisation: many virtual instances (Docker containers) co-exist on one or several hosts, each virtualising only the components that make up the microservice. Docker provides a highly managed, templated infrastructure built to deliver that rapidly available and elastic microservice.

Management layer

A microservice instance may last only twenty minutes, or it may last two weeks. Instances are, however, born to die, so no stateful data should be stored in them. Death can occur for many reasons – a hardware failure, or simply that the instance is no longer needed now the usage peak has passed. Management software should let you navigate this complexity and help ensure there are sufficient nodes to service incoming requests, as well as handle the required scaling.
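
The born-to-die rule can be sketched in a few lines of Python. The dict below stands in for an external store such as Redis or a database – an illustrative assumption, not a real client – and the class is a stand-in for a short-lived instance.

```python
# Sketch: durable state lives outside the instance, so killing an
# instance loses nothing. EXTERNAL_STORE is a stand-in for a real
# external store (e.g. Redis or a database).
EXTERNAL_STORE = {}

class Instance:
    """A short-lived worker: it keeps no durable state of its own."""
    def handle(self, key, value):
        EXTERNAL_STORE[key] = value  # write through, never keep locally

first = Instance()
first.handle("orders_processed", 7)
del first  # the instance dies: hardware failure, scale-down, whatever

replacement = Instance()  # a fresh instance picks up where it left off
resumed = EXTERNAL_STORE["orders_processed"]
```

Because the replacement reads the same external store, scaling down to zero and back up again is a non-event.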

Version control

Version control is a key component of any system that scales. Being able to properly manage changes is key, and version control should be at the heart of everything you do with any project, not just microservices.
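
As one small illustration of managing change at scale, a deployment tool might need to tell whether a microservice release is newer than the one it replaces. The dotted-version tag scheme below is an assumption made for the sketch, not a prescribed convention.

```python
# Sketch: ordering release tags so a deployment tool can compare them.
# Assumes simple numeric "major.minor.patch" tags for illustration.
def parse_version(tag):
    # "1.4.2" -> (1, 4, 2); tuples then compare component by component,
    # so "1.10.0" correctly sorts after "1.9.9" (string comparison would not).
    return tuple(int(part) for part in tag.split("."))

assert parse_version("1.10.0") > parse_version("1.9.9")
```

Naive string comparison gets this wrong, which is exactly the kind of subtle bug proper change management exists to catch.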

Proper design and implementation

Before anyone gets any clever ideas: microservices need to be designed properly from the ground up to be elastic and to do their one job well. Trying to shoehorn current workloads into the model is a recipe for disaster. This is why trimodal IT is the way forward.

Designing microservices is a complex business, but it reaps rewards in elasticity, scalability and the ability to manage workloads more dynamically and economically.

This kind of design is what public clouds were built for. Once correctly implemented, it can lead not only to higher availability but also to a reduction in running costs.

There is a learning curve here but the potential rewards are huge. Best of all this technology is free to download and experiment as well as use in production. You have no excuse. ®
