Modern cloud environments provide several different mechanisms and services for deploying a web application. In a more traditional scenario, an application may be deployed to a bare metal server or a virtual machine (VM).
The rise and success of Docker have led many cloud platforms to offer means of hosting dockerised applications without having to manage a VM—AWS lets you do this with Elastic Container Service (ECS on AWS Fargate), and Azure has Azure Container Apps.
Using a VM or a service such as AWS ECS or Azure Container Apps can work well for applications with a predictable load or known maximum and minimum demand. However, if an application is accessed infrequently, running it in a VM or via something like ECS will incur compute costs during potentially prolonged idle periods.
Serverless can be a great contender for applications with infrequent access or extreme bursts of activity, as you only pay for the compute that you use.
Serverless is a cloud computing model where the cloud provider dynamically manages the allocation and provisioning of servers. In this model, the underlying infrastructure is managed for you by the cloud provider, allowing you to focus solely on the application layer. Serverless platforms automatically scale resources based on demand, making execution highly efficient and cost-effective: the complexities of server management are abstracted away, enabling rapid development, deployment, and scaling of applications.
In this blog post, we’ll explore how an already dockerised application can be quickly and easily adapted to run in a serverless environment using AWS Lambda and AWS API Gateway.
We will start with a brief recap of AWS Lambda and AWS API Gateway before looking at a basic Express application and how it would typically be dockerised. Finally, we will look at the changes needed to allow a typical dockerised application to be deployable by AWS Lambda.
Caveat
Not all applications are well suited to running in a lambda function. Applications that require continuous processing, or that take a long time for the environment and application to initialise, are unlikely to be suitable. Lambda functions can suffer from a cold start problem, which you can read more about in our article Understanding the Cold Start Problem with AWS Lambda.
The Basics of API Gateway and Lambda
Let’s recap the basics of API Gateway and AWS Lambda.
The following index.js file demonstrates how to implement a basic AWS Lambda function that is capable of being called by AWS API Gateway:
exports.lambda_handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify('Hello, World!'),
  };
};
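Before deploying anything, we can sanity-check the handler locally by requiring it and invoking it with an empty event. This is an illustrative smoke test, not part of the deployed code:

// smoke-test.js (illustrative only)
const { lambda_handler } = require('./index');

lambda_handler({}).then(console.log);
// Logs: { statusCode: 200, body: '"Hello, World!"' }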
We can use Terraform to create an API Gateway and the associated lambda function. The full example can be found here.
provider "aws" {
region = "eu-west-2"
}
resource "aws_lambda_function" "hello_world_lambda" {
filename = data.archive_file.lambda_function_file.output_path
source_code_hash = data.archive_file.lambda_function_file.output_base64sha256
function_name = "serverless_dockerised_apps_hello_world"
role = aws_iam_role.hello_world_function_role.arn
handler = "index.lambda_handler"
runtime = "nodejs20.x"
}
data "archive_file" "lambda_function_file" {
type = "zip"
source_file = "./function/index.js"
output_path = "lambda_function.zip"
}
resource "aws_iam_role" "lambda_execution_role" {
name = "lambda_execution_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role" "hello_world_function_role" {
name = "hello_world_function_role"
assume_role_policy = jsonencode({
Version = "2008-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "lambda.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy_attachment" "hello_world_function_role" {
role = aws_iam_role.hello_world_function_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_apigatewayv2_api" "hello_world" {
name = "serverless_dockerised_apps_hello_world"
protocol_type = "HTTP"
}
resource "aws_apigatewayv2_stage" "hello_world_main" {
api_id = aws_apigatewayv2_api.hello_world.id
name = "main"
}
resource "aws_apigatewayv2_deployment" "hello_world" {
api_id = aws_apigatewayv2_api.hello_world.id
triggers = {
redeployment = sha1(join(",", tolist([
jsonencode(aws_apigatewayv2_integration.hello_world),
jsonencode(aws_apigatewayv2_route.hello_world_root),
jsonencode(aws_apigatewayv2_route.hello_world_wildcard),
jsonencode(aws_lambda_function.hello_world_lambda)
])))
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_apigatewayv2_integration" "hello_world" {
api_id = aws_apigatewayv2_api.hello_world.id
integration_uri = aws_lambda_function.hello_world_lambda.invoke_arn
integration_type = "AWS_PROXY"
integration_method = "POST"
request_parameters = {
"overwrite:path" = "/$request.path.proxy"
}
}
resource "aws_apigatewayv2_route" "hello_world_root" {
api_id = aws_apigatewayv2_api.hello_world.id
route_key = "GET /"
target = "integrations/${aws_apigatewayv2_integration.hello_world.id}"
}
resource "aws_apigatewayv2_route" "hello_world_wildcard" {
api_id = aws_apigatewayv2_api.hello_world.id
route_key = "ANY /{proxy+}"
target = "integrations/${aws_apigatewayv2_integration.hello_world.id}"
}
resource "aws_cloudwatch_log_group" "hello_world_gw" {
name = "/aws/api_gw/${aws_apigatewayv2_api.hello_world.name}"
retention_in_days = 7
}
resource "aws_lambda_permission" "hello_world_gw" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.hello_world_lambda.function_name
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.hello_world.execution_arn}/*/*"
}
output "api_url" {
description = "URL for API Gateway"
value = "${aws_apigatewayv2_stage.hello_world_main.invoke_url}/"
}
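Applying the configuration is the usual Terraform workflow:

terraform init
terraform apply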
Applying this will create the associated infrastructure. Terraform will:
- Package up the code as a zip file
- Create the lambda function based on the zip file generated by Terraform
- Create an associated API Gateway that will call through to the lambda function created in the previous step
- Output the URL of the API Gateway, e.g.
api_url = "https://abcdef1234.execute-api.eu-west-2.amazonaws.com/main/"
If we ping the API using our browser or curl, we’ll get back the quoted string “Hello, World!”:
curl https://abcdef1234.execute-api.eu-west-2.amazonaws.com/main/
"Hello, World!"
Great! It works! But what exactly is AWS doing behind the scenes?
In our index.js file, we export a function named lambda_handler.
In our Terraform code, we provide the lambda function with our code and set the handler to index.lambda_handler. This tells AWS to invoke the exported function called lambda_handler from the index.js file.
The key element here is to understand that AWS Lambda does not rely on your code listening on a particular port: AWS Lambda is protocol-less. Instead, the lambda runtime environment invokes the handler function directly.
API Gateway is used to provide the HTTP protocol: it processes the request at a protocol level before translating it into an event that is passed to the AWS Lambda runtime environment.
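For an HTTP API, that event is a JSON document describing the request. An abridged example of API Gateway’s version 2.0 payload format looks something like this (fields trimmed for brevity):

{
  "version": "2.0",
  "routeKey": "ANY /{proxy+}",
  "rawPath": "/add",
  "headers": { "content-type": "application/json" },
  "requestContext": {
    "http": { "method": "POST", "path": "/add" }
  },
  "body": "{\"num1\": 5, \"num2\": 3}",
  "isBase64Encoded": false
}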
A final point to consider with the example we have just run through: all traffic is sent to a single lambda function, regardless of the route. API Gateway does let you define individual routes, each backed by its own lambda function (see the sketch below), but when retrofitting serverless to an existing application it is easier to proxy all traffic, regardless of route, to your application and let it deal with the routing as it always has.
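To give a flavour of the per-route approach, each route would target its own integration. The health integration referenced here is hypothetical and not part of our example:

resource "aws_apigatewayv2_route" "hello_world_health" {
  api_id    = aws_apigatewayv2_api.hello_world.id
  route_key = "GET /health"
  target    = "integrations/${aws_apigatewayv2_integration.health.id}"
}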
Dockerising an Existing Application
Let’s assume that we have an Express application that serves up a number of APIs that we want to deploy using AWS Lambda.
Our server implementation looks as follows:
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
const port = 3000;

// Middleware to parse JSON bodies
app.use(bodyParser.json());

// POST endpoint for adding numbers
app.post('/add', (req, res) => {
  const { num1, num2 } = req.body;

  // Check if both numbers are provided (a plain falsy check would wrongly reject 0)
  if (num1 === undefined || num2 === undefined) {
    return res.status(400).json({ error: 'Both numbers are required' });
  }

  // Perform addition
  const result = num1 + num2;

  // Send response
  res.json({ result });
});

// POST endpoint for subtracting numbers
app.post('/subtract', (req, res) => {
  const { num1, num2 } = req.body;

  // Check if both numbers are provided (a plain falsy check would wrongly reject 0)
  if (num1 === undefined || num2 === undefined) {
    return res.status(400).json({ error: 'Both numbers are required' });
  }

  // Perform subtraction
  const result = num1 - num2;

  // Send response
  res.json({ result });
});

// Start the server
app.listen(port, () => {
  console.log(`Server is listening on port ${port}`);
});
There is one endpoint for adding two numbers together and another for subtracting one from the other. If we run this locally, we can ping the API as follows:
$ curl -X POST -H "Content-Type: application/json" -d '{"num1": 5, "num2": 3}' http://localhost:3000/add
{"result":8}
$ curl -X POST -H "Content-Type: application/json" -d '{"num1": 5, "num2": 3}' http://localhost:3000/subtract
{"result":2}
If we want to dockerise this application, we can do so with the following Dockerfile:
FROM node:20
RUN mkdir /app
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD [ "node", "index.js" ]
This file is based on the node:20 image. It copies everything within the current working directory into the docker image and runs npm install to install the dependencies. Finally, port 3000 is exposed for use and index.js is run.
With this Dockerfile, the application can be built and run as follows:
docker build -t example-serverless-dockerised-apps-02 .
docker run -p 3000:3000 example-serverless-dockerised-apps-02
The full example can be found here.
Adding AWS Lambda Compatibility
Our application currently works by allowing web traffic to be received on port 3000 and handling the requests accordingly. As we covered in the first section of this blog post, AWS Lambda doesn’t rely on ports. Instead, the lambda execution environment will invoke a given handler function.
What this means is that if we want to be able to invoke our code when running in an AWS Lambda function, we need to move away from listening on a port, e.g.:
app.listen(port, () => {
  console.log(`Server is listening on port ${port}`);
});
and instead provide a function that can be invoked by the lambda runtime.
Runtime Interface Clients
For each language that AWS Lambda supports, AWS have provided a Runtime Interface Client (RIC) library. For example:
- aws-lambda-ric for Node
- awslambdaric for Python
- aws_lambda_ric for Ruby
The RIC is able to process the event received by the AWS Lambda execution environment, and pass the event to a given function.
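Under the hood, a RIC is essentially a small loop over Lambda’s HTTP-based Runtime API. The following is a heavily simplified sketch, as illustrative pseudocode only; the real library also handles errors, the invocation context and initialisation:

// Heavily simplified sketch of a Runtime Interface Client's main loop
// (illustrative pseudocode only)
const base = `http://${process.env.AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime`;

while (true) {
  // Long-poll the Runtime API for the next invocation event
  const next = await fetch(`${base}/invocation/next`);
  const requestId = next.headers.get('lambda-runtime-aws-request-id');
  const event = await next.json();

  // Invoke the configured handler and post the result back
  const result = await handler(event);
  await fetch(`${base}/invocation/${requestId}/response`, {
    method: 'POST',
    body: JSON.stringify(result),
  });
}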
This means that to dockerise an application we need two things:
- The Runtime Interface Client for our given language
- The function that will take the event payload provided by the RIC and pass this on to our wider application; in the case of our example, the event will need passing on to Express for handling.
Adding Lambda Support
As we convert our application to support running in a lambda execution environment, we’ll assume that for local development we still want to be able to run the application in a conventional setting, as a web server listening on a port. This isn’t strictly necessary, as tooling such as LocalStack could be used to run a lambda function for local development; however, that is outside the scope of this discussion.
The result of adding lambda support while maintaining the Express web server for local development is that our application will have two different entry points:
- Running Express on a port for local development
- A function that can be invoked by RIC when running in the AWS Lambda execution environment
The first step is to separate the Express port entrypoint of the application from the application’s own internal routing logic.
The single index.js file should be split into app.js and index.js, where index.js retains the core functionality of triggering Express to listen on port 3000.
// index.js
const { app } = require('./app');

const port = 3000;

// Start the server
app.listen(port, () => {
  console.log(`Server is listening on port ${port}`);
});
As a result, app.js becomes:
// app.js
const express = require('express');
const bodyParser = require('body-parser');

const app = express();

// Middleware to parse JSON bodies
app.use(bodyParser.json());

// POST endpoint for adding numbers
app.post('/add', (req, res) => {
  const { num1, num2 } = req.body;

  // Check if both numbers are provided (a plain falsy check would wrongly reject 0)
  if (num1 === undefined || num2 === undefined) {
    return res.status(400).json({ error: 'Both numbers are required' });
  }

  // Perform addition
  const result = num1 + num2;

  // Send response
  res.json({ result });
});

// POST endpoint for subtracting numbers
app.post('/subtract', (req, res) => {
  const { num1, num2 } = req.body;

  // Check if both numbers are provided (a plain falsy check would wrongly reject 0)
  if (num1 === undefined || num2 === undefined) {
    return res.status(400).json({ error: 'Both numbers are required' });
  }

  // Perform subtraction
  const result = num1 - num2;

  // Send response
  res.json({ result });
});

exports.app = app;
At this point, whilst we have changed the structure of our application, the functionality hasn’t yet changed. Now that the application logic is housed within app.js we can add a new entry point for the AWS Lambda execution environment to use.
Before we do this, we will need to add Serverless Express as a dependency. This library will provide the logic needed for the handler function, which will be responsible for taking the event payload provided by the RIC and invoking Express as appropriate.
Firstly, run:
npm install @codegenie/serverless-express
Now, we can create a lambda.js file:
const serverlessExpress = require('@codegenie/serverless-express');
const { app } = require('./app');

exports.handler = serverlessExpress({ app });
Next, we need to add the RIC library to our setup. The node RIC library has a number of native build dependencies that we will need in our docker container; therefore, we need to add an apt-get install command to our Dockerfile:
FROM node:20

# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
    apt-get install -y \
    g++ \
    make \
    cmake \
    unzip \
    libcurl4-openssl-dev

RUN mkdir /app
WORKDIR /app
COPY . .
RUN npm install

EXPOSE 3000
CMD [ "node", "index.js" ]
We can now add the aws-lambda-ric library to our package.json and rebuild our docker container.
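For example (the image tag below is illustrative, continuing the naming used earlier in this post):

npm install aws-lambda-ric
docker build -t example-serverless-dockerised-apps-03 .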
The full example can be found here.
Deploying to AWS Lambda
The process of deploying our dockerised app is very similar to what we did with our basic API Gateway/Lambda example at the start of this post. The key difference is that the code for the AWS Lambda function will be pulled from the docker image rather than a zip file; therefore, the aws_lambda_function resource should change as follows:
resource "aws_lambda_function" "hello_world_lambda" {
function_name = "serverless_dockerised_apps_hello_world"
image_uri = "docker.example.com/yourimage:latest"
package_type = "Image"
role = aws_iam_role.hello_world_function_role.arn
image_config {
command = ["lambda.handler"]
entry_point = ["/app/node_modules/.bin/aws-lambda-ric"]
working_directory = "/app/src"
}
}
Because we are now using a docker image, we do not need to specify the zip file for the code, the handler or the runtime.
By specifying the package_type as Image and providing the docker image via image_uri and the associated image_config, we provide the lambda function with everything it needs to run our application.
The entry_point of the docker image is overridden to use the aws-lambda-ric executable, which will be in the .bin folder of your node_modules directory. The command argument specifies the handler name to pass to the aws-lambda-ric executable; the value is lambda.handler because our exported function is called handler and it lives in the lambda.js file.
The process of deploying the docker image to a docker registry that the function can access is outside the scope of this article.
Conclusion
In this article, we have looked at how an application can evolve from a standard dockerised application into one that can be hosted in a serverless AWS Lambda environment.
Whilst AWS provides docker base images for working with AWS Lambda, we have stepped you through how the RIC works and how to install a RIC yourself, should you have an application that is unable to use or extend the AWS base images.
Dockerising an application for use with AWS Lambda, at its core, relies on two components: firstly, the relevant RIC needs to be installed within the docker image; secondly, the codebase needs an appropriate handler function that can be invoked by the RIC.
In this example we focused on Express and how Serverless Express can help provide the implementation for the handler function.
Different ecosystems have different supporting libraries. For instance, Fastify has @fastify/aws-lambda, and Apollo can run in AWS Lambda by operating as middleware via Express and Serverless Express.
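To give a flavour of how similar this looks elsewhere, a minimal Fastify equivalent of our lambda.js might look like the following sketch, assuming app.js exports a Fastify instance rather than an Express one:

const awsLambdaFastify = require('@fastify/aws-lambda');
const { app } = require('./app'); // assumed to be a Fastify instance

exports.handler = awsLambdaFastify(app);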