Developers, developers, developers: How 'serverless' crowd dropped ops like it's hot
What does it even mean ...and is it a good idea?
What is serverless? Sure, the name serverless sounds stupid, but serverless technologies like AWS Lambda are increasingly being used by developers looking to build applications quickly.
The key word is developers. Serverless is all about giving developers the ability to execute code without requiring sysadmins. There's no DevOps here, it's all Dev. With serverless, every last vestige of "ops" is offloaded to the serverless platform.
In terms of serverless platforms, AWS Lambda is the default name in serverless technologies, although other implementations such as Project Kratos exist. Project Kratos is open source serverless, allowing organisations to run serverless setups on their own infrastructure. Amazon – and most serverless evangelists – would say this misses the point of serverless entirely.
The purpose behind AWS Lambda is to allow developers to run code without provisioning or managing servers. Using Lambda is similar to the traditional model of deploying code to a server, the difference being that the server – along with scaling and availability – is taken care of by Lambda.
Each piece of code a developer wishes to execute is called a function on Lambda. Essentially, each Lambda function is the code a developer would place inside one container.
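As a rough sketch of how little there is to a Lambda function, here is a minimal Python handler. The function name and event shape are illustrative; Lambda invokes the handler directly, so there is no server, listener, or Dockerfile in sight.

```python
# Minimal sketch of a Lambda "function": one handler, nothing else.
# Lambda calls lambda_handler(event, context) when the function is invoked.
def lambda_handler(event, context):
    # 'event' carries the trigger payload; 'context' carries runtime metadata
    name = event.get("name", "world")
    return {"message": f"Hello, {name}"}
```

That handler, pasted into the Lambda console, is a complete deployable unit.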
Lambda versus containers
In a traditional container solution, developers would have to provide both the code they wish to execute and a definition file (typically a Dockerfile). This definition file specifies the container environment: for example, a requirement to load libraries and frameworks for PHP with Zend. Every container requires a definition file.
With Lambda, Amazon essentially has a series of pre-canned containers available that provide environments for various languages. Developers using only the libraries provided by the default Lambda environments can create functions as easily as pasting their code into a web form.
Developers who want to modify their environment somewhat – for example to add a library to the environment their code will execute in – will need to create a Lambda "deployment package". This can cause some problems.
Supporting a limited number of languages in fixed environments with Lambda keeps the service affordable. Fewer environments to support is something most vendors strive for.
With Lambda, Amazon has done infrastructure automation from the physical layer all the way to the container layer. This makes it ideal for a specific class of stateless, fully composable microservices. Solving message queuing without getting irrevocably locked in to Amazon, however, requires careful forethought and attention.
As one would expect from an automated container environment, all Lambda functions must be stateless. Functions requiring persistent storage can store data in Amazon S3, Amazon DynamoDB, or a third-party internet-available storage solution. This is where the similarities between microservices on developer-controlled containers and Lambda stop.
Terminate and Stay Resident
Traditional microservices code would require that the developer define some sort of listener as well as a chunk of code to be executed. The developer would have to think about all aspects of this. How is the code in the container going to know when to do something? Will it be receiving an HTTPS request, or an event from a message queue? How will that message be delivered to the container, given that the container most likely has an internal IP address? A load balancer? Message queuing proxy?
With AWS, developers have to worry about much less of this. Lambda is billed as "event-driven development" and this means that creating a Lambda function can be as easy as selecting a pre-defined event trigger, the language the code is to be executed in, and the code to execute. When the event occurs, the Lambda code is triggered.
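To make "event-driven" concrete, here is a hedged sketch of a function wired to an S3 ObjectCreated trigger. The event layout follows AWS's documented S3 notification structure; note there is no listener, queue client, or load balancer in the function itself, because Lambda delivers the event.

```python
# Sketch: a handler attached to an S3 "ObjectCreated" event trigger.
# Lambda hands the notification in via 'event'; the function just reacts.
def lambda_handler(event, context):
    uploaded = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        uploaded.append(f"s3://{bucket}/{key}")
    return uploaded
```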
In a traditional container environment a developer wishing to use, for example, the popular message queue RabbitMQ would have to include the libraries for RabbitMQ and know how to interact with it in order to ensure that their code responded appropriately. This often meant microservices could respond to any number of different events, because developers would frequently bundle up event handling of related events into a single microservice, in part to avoid having to re-implement event handling and in part to avoid microservice sprawl.
Lambda pushes developers in a different direction. Each Lambda function would, ideally, respond to a specific event. This makes Lambda functions much closer to a call to a batch script than to the pseudo-TSRs that Docker-style containerized microservices usually are.
A Docker-style container set up as a pseudo-TSR can theoretically persist sessions as long as the container is up. One could design their application such that the container cached information or was expected to hold ephemeral data for lengths of time longer than the execution of a response to an event.
In Lambda, developers don't have this option. Developers are explicitly advised not to expect their Lambda functions to persist any form of data longer than the time it takes to execute the function. Temp files written to the local container's filesystem, for example, would be erased immediately after execution, and there would be no memory persistence from execution to execution.
In other words, Lambda functions must be completely composable. No exceptions.
Events on Lambda
Lambda functions can be triggered by events, and events can be generated by any number of AWS services. Events can be triggered through traditional pathways for microservice engagement: message queueing, filesystem monitoring and database monitoring. Lambda can monitor S3 storage for uploaded files, watch a DynamoDB table for updates or listen for messages on Amazon's Simple Notification Service (SNS).
Where Lambda really starts to show its utility, however, is that it can also be triggered by a number of events that aren't normally monitored by microservices. These include reacting to incoming emails, or to proprietary Amazon services like Alexa, Cognito, or Kinesis Streams. Lambda functions can also be triggered with an HTTP call. This is done by defining a custom REST API and using the Amazon API Gateway service.
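For the HTTP case, API Gateway's proxy integration expects the function to return a response in a particular shape (statusCode, headers, body). A hedged sketch, with the query-string echo purely illustrative:

```python
import json

# Sketch of a handler sitting behind an Amazon API Gateway proxy
# integration. API Gateway turns the HTTP request into 'event' and
# expects this statusCode/headers/body dict back.
def lambda_handler(event, context):
    params = event.get("queryStringParameters") or {}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"echo": params}),
    }
```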
How Lambda functions are triggered directly affects the cost of running a Lambda function.
Lambda charges per million requests and per gigabyte-second of compute time consumed. Because Lambda functions are only billed when they are triggered to do something, this makes them economically attractive. Functions that are created but never called are never billed. With Lambda, listener microservices can be created and run in perpetuity without running up a bill unless something is actually acting against the microservice.
Services like Kinesis Streams, SNS or DynamoDB, however, are not free. The event broker you choose helps determine the cost of each execution of a function.
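A back-of-envelope cost model makes the point. The rates below are assumptions based on AWS's published US pricing (roughly USD 0.20 per million requests and USD 0.0000166667 per GB-second); check the current price list, and remember it ignores the free tier and whatever the event source charges.

```python
# Assumed rates; verify against AWS's current published pricing.
REQUEST_PRICE = 0.20 / 1_000_000   # USD per request
GB_SECOND_PRICE = 0.0000166667     # USD per GB-second of compute

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimated monthly Lambda bill, ignoring free tier and trigger costs."""
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    return invocations * REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE
```

Under those assumed rates, a million 200ms invocations at 512MB come to under two dollars a month, which is exactly why the trigger services are where the money is.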
In other words, Amazon doesn't make their coin on the cost of pulling the trigger on a microservice. That's the bit they're giving away for free (or very close to). Amazon makes their coin by getting you to build your microservices so that they use event triggers with costly trigger charges or pull on expensive (and proprietary) Bulk Data Computational Analysis (BDCA) services.
A basic use case
Consider a facial recognition function for a security system. The purpose of the Lambda function would be to accept pictures of faces and scan them against a known database, and store any new faces.
For this use case, a security system that has extremely basic facial recognition built in is presumed. The on-premises security system's facial recognition is presumed to be just powerful enough to tell that there is movement on the camera and that a face is likely exposed to the camera. Once it detects a face it uploads a snapshot of that face into an Amazon S3 bucket.
The upload of this file triggers the Lambda function. The Lambda function calls something like Amazon's own Rekognition, does some work on the image – potentially involving other BDCA tools – and then stores the image and resulting data in any of the myriad available databases. This is not that hard to do.
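A hypothetical sketch of that function is below. The Rekognition collection id ("security-faces") and the DynamoDB table name ("faces") are made-up placeholders, though the boto3 calls themselves are real client methods; boto3 is imported lazily so the pure event-parsing helper works without the AWS SDK installed.

```python
def parse_s3_event(event):
    """Extract (bucket, key) pairs from a standard S3 notification event."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

def lambda_handler(event, context):
    import boto3  # imported here so the module loads without the AWS SDK
    rekognition = boto3.client("rekognition")
    table = boto3.resource("dynamodb").Table("faces")  # placeholder name
    for bucket, key in parse_s3_event(event):
        # Search the known-faces collection for a match
        resp = rekognition.search_faces_by_image(
            CollectionId="security-faces",  # placeholder collection id
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
        )
        if not resp["FaceMatches"]:
            # New face: add it to the collection
            rekognition.index_faces(
                CollectionId="security-faces",
                Image={"S3Object": {"Bucket": bucket, "Name": key}},
            )
        table.put_item(Item={"image_key": key,
                             "matches": len(resp["FaceMatches"])})
```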
The function is considered to be "running" for however long it takes the code (and any called services) to execute.
This could be considered a hybrid workload. Extra bonus points if the Lambda function stores the image on secure third-party storage (such as NetApp Private Storage for Cloud), and then scrubs the image from the S3 bucket used for uploads. Then the application relies on a "dumb" on-premises component (the security system), a public-cloud-based serverless function, and uses storage from a third-party service provider to persist the data. It doesn't get much more "hybrid" than that.
For multi-cloud, create a serverless function in another public cloud (such as Azure or GCP) that would monitor the third-party storage for changes and then perform more work on that image using the proprietary services only available to that platform.
And that, in a nutshell, is serverless for sysadmins. Expect the concepts here will be talked about endlessly as "modern hybrid cloud", "multi-cloud computing", and "hybrid multi-cloud". Also expect hyperconverged companies that just shuffle VMs between infrastructure providers to also cling to these terms, muddying the waters for everyone.
Do not do shots every time you see one of these mentioned at re:Invent or indeed at any time in 2018; you'll quickly go blind. ®