The big picture revealed and the tech explained
Public cloud providers increasingly differentiate themselves through the features and services they provide. These range from basic storage or content delivery networks up to sophisticated flavours of data analysis and, increasingly, Machine Learning (ML) and Artificial Intelligence (AI).
For the most part, these services are not offered on-prem. To take advantage of them, some portion of your application or your technology estate needs to live in this public cloud.
This is changing the way software is written, with microservices increasingly being used to build the software running in this hybrid world.
Software built using microservices is easier to deliver and maintain than the big and brittle architectures of old, which were difficult to scale and could take years to build and deliver.
Microservices mean software that’s relatively quick to build and to maintain on an on-going basis.
From the developers’ perspective, microservices are democratic – a good thing. They can be built using any language. A team might, for example, find its image-processing libraries in C++ or Java, but be better off doing its AI or ML work in Python, because that community has really embraced TensorFlow.
From an organisational point of view, one of the key advantages of microservices is that different groups of developers can work on different aspects of an application completely independently. This also means that different elements of an application can be hosted in completely different environments, too.
In large enterprises, for example, it wouldn't be uncommon to have some services handled by a mainframe, others by public cloud services and the rest by various microservices that run across the multiple data centers where that enterprise operates.
So, these are the benefits. But what exactly are microservices - and how can you incorporate them into your IT infrastructure?
Anatomy of a microservice
A microservice is a piece of a bigger application that performs a specific task. Put many microservices together and you have a complete application. To work together, they depend on a message bus that moves messages back and forth – a message bus that carries no business logic of its own.
Microservices are small and self-contained, and therefore easy to wrap up in a virtual machine or a container – staples of cloud and hybrid computing. This is generally done to make them easy to manage, and to keep them safe from other workloads that might share the physical host they reside on.
It’s important to note that, while there is much talk of containers in connection with cloud and microservices, microservices don't have to live in containers. Technically, one could install many microservices onto a single system without any form of virtualization or application encapsulation whatsoever; it's just not clear why anyone would want to.
Making the leap
That’s microservices in a nutshell. But you’re an organisation with a huge software estate. You want to embrace cloud and hybrid IT, but you have some rather big pieces of software to move. What applications best lend themselves to this federated, microservice hybrid way of thinking?
You need to slightly shift your way of thinking. Put to one side the idea of a piece of packaged software. Think, instead, in terms of capabilities and outcomes. Think of infrastructure automation, host environment automation, container orchestration and the container platform. These are the tools, frameworks and environments your microservices will run in.
Infrastructure automation creates and destroys basic IT resources such as compute instances, storage, networking, DNS and so forth. Here Terraform and CloudFormation are popular. Microsoft's System Center Configuration Manager is an example of a much older school of infrastructure automation, one that isn't used much in the modern DevOps world. CloudFormation is Amazon's baby, and it isn't much help beyond AWS's walled garden. Terraform, however, can make multiple data centers – both on-premises and in the public cloud – dance with a few lines of code.
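To give a flavour of those few lines of code, here is a minimal Terraform sketch that declares a single cloud instance. The provider, region, AMI ID and names are all placeholders for illustration, not a recommendation:

```hcl
# Illustrative only: the AMI ID and names are placeholders.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "microservice_host" {
  ami           = "ami-0123456789abcdef0" # placeholder image ID
  instance_type = "t3.micro"

  tags = {
    Name = "microservice-host"
  }
}
```

Running `terraform apply` creates the instance; `terraform destroy` tears it down again – the create-and-destroy lifecycle described above, driven entirely from text.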
Host environment automation tools configure the environment of instantiated resources. Where infrastructure automation does things like create VMs or provision cloud resources, host environment automation tools ensure that desired configurations are applied to these resources. Here we'll find popular tools like Puppet, Chef, Ansible and Saltstack, many of which can do some limited infrastructure automation as well.
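In practice these tools declare a desired state rather than scripting steps. A minimal Ansible playbook sketch – the host group and package are illustrative – might ensure a web server is installed and running on every host it targets:

```yaml
# Illustrative playbook: declare that nginx should be present and running.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run the playbook twice and the second run changes nothing – the configuration is already as declared, which is the point of desired-state tooling.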
Container orchestration consists of Kubernetes. There remain many competitors to Kubernetes but, in reality, Kubernetes has won. Kubernetes' job is to keep track of which containers should be running, and on which hosts, and to make sure that they're running. If a host dies, Kubernetes lights all the appropriate workloads up on another host. Think of it as vSphere for containers, if you defined how all your VMs worked using YAML text files because the GUI was too primitive to use.
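Those YAML text files look something like this minimal Deployment sketch, which asks Kubernetes to keep three copies of a container alive (the image name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3                 # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: example/web-frontend:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

If a host carrying one of the three replicas dies, Kubernetes notices the count has dropped below the declared `replicas: 3` and schedules a replacement elsewhere.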
Container platforms consist of Docker, Rkt, and things that are not Docker or Rkt. This latter group doesn't get much play. Docker is the household name in containers, while Rkt is the container runtime made by CoreOS. Rkt is to Docker as Tesla is to Ford, if nobody outside a handful of nerds had ever heard of Tesla.
Developers also need to consider how their development environment is constructed and managed. Vagrant is quite popular for defining development environments, while WhiteSource is often used to ensure the libraries and frameworks that become part of compiled software are kept up to date. DataDog is quite popular for monitoring, while BigPanda makes sure alerting is sane.
So we’ve accepted this idea that microservices are a piece of a bigger application that performs a specific task. For example, a microservice might consist of a small web application that receives data and stores it in a database. All this needs to be managed. Enter containers.
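As a concrete sketch of such a service, the following Python example – standard library only, with an invented endpoint and table name – accepts JSON over HTTP and stores it in a SQLite database:

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer


def init_db(path=":memory:"):
    """Create the (illustrative) readings table if it doesn't exist yet."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS readings (body TEXT)")
    return conn


def store_reading(conn, payload):
    """Persist one JSON payload; return the total number of rows stored."""
    conn.execute("INSERT INTO readings (body) VALUES (?)",
                 (json.dumps(payload),))
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]


class ReadingHandler(BaseHTTPRequestHandler):
    """One specific task: accept a POSTed JSON body and store it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or "{}")
        store_reading(self.server.db, payload)
        self.send_response(201)
        self.end_headers()


if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8080), ReadingHandler)
    server.db = init_db("readings.sqlite")  # one service, one database
    server.serve_forever()
```

A real deployment would likely swap SQLite for a standalone database server, but the shape is the same: one small service, one specific task.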
In the example of our web application, this could conceivably involve two containers: one holding the web application code along with the webserver to run it, and one holding a database. These might be stored in two separate containers, with the pair of containers together being called a "pod."
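A minimal Kubernetes pod manifest for that pair might look like the following sketch, where the image names and the database password are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: web                 # application code plus its web server
      image: example/web-app:1.0        # placeholder image
      ports:
        - containerPort: 8080
    - name: db                  # the database the app stores data in
      image: postgres:15
      env:
        - name: POSTGRES_PASSWORD
          value: changeme       # placeholder; use a Secret in real life
```

The two containers share the pod's network, so the web app can reach the database on localhost while the pair is scheduled, scaled and restarted as a unit.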
Containers aren't managed like traditional operating systems. They're composable. That is: the contents of individual containers are defined in code – in a Dockerfile, for example. The Dockerfile defines the environment of the container.
To understand what is meant by “the environment of a container”, consider our basic web app. Most web apps written today involve not only the application code and a web server; the application code also depends on various libraries, frameworks and so forth.
In the case of some languages – notably Java – the dependencies and frameworks are frequently compiled along with the application code into a single WAR file, though even in Java this isn't always the case. Defining the dependencies in code, rather than compiling them in, makes it easier to keep those dependencies up to date.
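For a Python web app, such a Dockerfile might look like this sketch, with the file names invented for illustration:

```dockerfile
# Illustrative Dockerfile: base image, then dependencies, then app code.
FROM python:3.12-slim

WORKDIR /app

# Dependencies are declared in code; updating one is a one-line change.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```

Everything the app needs – interpreter, libraries, code – is described in that one file, so rebuilding the image rebuilds the whole environment reproducibly.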
The container software isolates each container from the next, ensuring that each application gets its own libraries, and that one application's failure to update doesn't make another application vulnerable. This is similar to virtual machines, except that one doesn't require an entire operating system for every microservice, which lowers the resource utilization, maintenance burden and attack surface significantly.
Time to build
Cloud and hybrid are changing the way IT systems are built and maintained. Integral to this new model are microservices. Microservices are wrapped up in the transformation of IT away from workloads that require individual care and feeding and towards workloads that are composable and disposable.
What’s really driving microservices’ adoption, however, isn't any particular philosophy. Rather, it's the fact that they allow for a kind of technology multiculturalism and flexibility in the design, construction and on-going delivery of software that couldn't otherwise exist.
Article supported by HPE