AWS Lambda provides a serverless runtime, and lots of developers love this service. We can use many languages on AWS Lambda, e.g. Node.js, Python, Java, etc. But what if we have a customised C library, or a custom utility package, to install into the runtime environment? AWS Lambda now supports deploying a Docker image to the Lambda environment.

[Image: Docker and Lambda]

The Docker image must implement the Lambda Runtime API. AWS already provides base images for lots of programming languages to extend, so we only need to develop the logic and start from the base image to customise our own Docker image. Take the Python image as an example (reference: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html):

FROM public.ecr.aws/lambda/python:3.8

# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}

# Install the function's dependencies using file requirements.txt
# from your project folder.

COPY requirements.txt  .
RUN  pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ] 

We can use the AWS Serverless Application Model (SAM) to manage the build and deployment. A sample config file could look like the one below:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  python3.8

  SAM Template for xxx

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 600

Resources:
  SampleFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      PackageType: Image
      Environment:
        Variables:
          LOG_LEVEL: INFO
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: .
      DockerTag: v1

Outputs:
  SampleFunction:
    Description: "xxx Lambda Function ARN"
    Value: !GetAtt SampleFunction.Arn

We can use the command sam build to build the Docker image, and the command sam local invoke SampleFunction --event events/sample_request.json to invoke the function locally.
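
The event file is just the JSON payload that gets passed to the handler. For a simple smoke test, a hypothetical events/sample_request.json could be as small as:

{
  "message": "hello"
}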

We should see a log like the one below:

REPORT RequestId: e207ea8b-fe68-4249-bfef-181f4f0b3098  Init Duration: 1.07 ms  Duration: 125.20 ms     Billed Duration: 200 ms Memory Size: 128 MB     Max Memory Used: 128 MB 
"Hello from AWS Lambda using Python3.8.11 (default, Jul 14 2021, 13:00:16) \n[GCC 7.3.1 20180712 (Red Hat 7.3.1-13)]!"%  

Sometimes we may want to use our own image and customise it; in that case we can install the Lambda runtime interface client manually to make it work. For example, I want to use the OCRmyPDF Docker image to process PDF files, so we could do the following:

# Define global args
ARG FUNCTION_DIR="/home/app/"
ARG RUNTIME_VERSION="3.9"

# Stage 1 - bundle base image + runtime
# Grab a fresh copy of the OCRmyPDF image
FROM jbarlow83/ocrmypdf AS image-base
# Install Python headers and build tools (native dependencies are compiled and linked with GCC)
RUN apt-get update && apt-get install -y python3-distutils python3-dev default-libmysqlclient-dev build-essential
# Install aws-lambda-cpp build dependencies
RUN apt-get install -y \
  g++ \
  make \
  cmake \
  unzip \
  libcurl4-openssl-dev
# (Optional) Add Lambda Runtime Interface Emulator and use a script in the ENTRYPOINT for simpler local runs
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
RUN chmod 755 /usr/bin/aws-lambda-rie

# Stage 2 - build function and dependencies
FROM image-base AS build-image
# Include global args in this stage of the build
ARG FUNCTION_DIR
ARG RUNTIME_VERSION
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function
COPY *.py requirements.txt ${FUNCTION_DIR}

# Install the Lambda Runtime Interface Client for Python
RUN python${RUNTIME_VERSION} -m pip install awslambdaric --target ${FUNCTION_DIR}
# Install the function's dependencies
RUN python${RUNTIME_VERSION} -m pip install -r ${FUNCTION_DIR}requirements.txt --target ${FUNCTION_DIR}

# Stage 3 - final runtime image
# Start again from the prepared base image for the final stage
FROM image-base
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the built dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
COPY entry.sh /
RUN chmod 755 /entry.sh
ENTRYPOINT [ "/entry.sh" ]
CMD [ "app.handler" ]

The entry.sh script is used to switch between the local dev environment and the Lambda environment:

#!/bin/sh
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
    exec /usr/bin/aws-lambda-rie /usr/bin/python3 -m awslambdaric $1
else
    exec /usr/bin/python3 -m awslambdaric $1
fi
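
Because the Runtime Interface Emulator is baked into the image, we can also test it with plain Docker, without SAM. The image tag below is just a placeholder; the URL is the standard local endpoint exposed by the emulator:

docker build -t ocr-lambda:local .
docker run --rm -p 9000:8080 ocr-lambda:local

# In another terminal, post an event payload (whatever shape the handler expects)
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'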