AWS Lambdas with Docker images: do I have to create a different docker image per lambda?


I have a large number of Lambdas that all share the same libraries. Due to size constraints I cannot package the libraries together with each Lambda, nor use Lambda Layers, so I have created a Docker image (let's call it lambda_base:latest) with all the required libraries installed and pushed it to ECR.

Now, for every Lambda, I have created a new Docker image based on lambda_base:latest, where the only difference is that it includes that Lambda's code, and it is working fine.
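To illustrate, the images look roughly like this (the library, file and handler names are just placeholders):

# lambda_base:latest -- only the shared libraries
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# per-function image, rebuilt for every Lambda
FROM <account-id>.dkr.ecr.<region>.amazonaws.com/lambda_base:latest
COPY my_lambda.py ${LAMBDA_TASK_ROOT}
CMD [ "my_lambda.handler" ]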

My question is: am I proceeding correctly? I would have expected to deploy the Lambda once and be able to choose lambda_base:latest as the "runtime", instead of whatever image AWS uses to run the Lambda, but I can't find how to do that.

Maybe what I am doing is fine, but it feels weird to create an image for every single Lambda.

Thanks a lot!!!


CodePudding user response:

First, your application Docker image is not stored inside the Lambda itself. The Docker image is stored in AWS ECR, the container registry that AWS provides for its customers. You build your image, tag it, and then push it to an ECR repository that you create. The image in this ECR repository can be used by any AWS service that accepts a Docker image, whether it is Lambda, ECS, EKS, Batch, etc. In other words, it is not something specific to Lambda.
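Roughly, the build, tag and push steps look like this (the account ID, region and repository name are placeholders):

aws ecr create-repository --repository-name my-app-1
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker build -t my-app-1 .
docker tag my-app-1:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app-1:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app-1:latest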

Second, I would create an ECR repo per application. I would not think of it as 1:1 between Lambda function and ECR repo, but rather 1:1 between application and ECR repo. Think of the ECR repo as the container for a given application's images. So each application would have a Dockerfile, which uses the FROM instruction like so:

FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
COPY app.py ${LAMBDA_TASK_ROOT}
# tell Lambda which handler to invoke (module.function)
CMD [ "app.handler" ]

So this app.py might be MyApp1, and it can import a bunch of other files that make up MyApp1. This corresponds to an ECR repo you could call my-app-1. Then you will have a second application, where the app.py and imported files are vastly different, and so you would have a second ECR repo to hold the layers of that application.

Then, for your Lambda function parameters, you will specify the image URI, which refers to the image in ECR, and set the package type to "Image". Here is a rudimentary example in Terraform infrastructure as code to illustrate the point:

resource "aws_lambda_function" "function" {
  function_name     = "${var.name_prefix}-my-lambda${var.name_suffix}"
  description       = "My Lambda Function"
  image_uri         = var.image_uri
  package_type      = "Image"
  timeout           = var.timeout
  memory_size       = var.memory_size
  role              = var.role_arn
  tags              = var.tags
} 

The image_uri variable would come from the ECR repo that was created, so its value would be produced by something like this:

resource "aws_ecr_repository" "repo" {
  name = "Your Repo Name"
}

resource "null_resource" "ecr_image" {
  triggers = {
    docker_file       = md5(file("${path.module}/../../Dockerfile"))
    app_file          = md5(file("${path.module}/../../app.py"))
  }

  provisioner "local-exec" {
    command = <<EOF
      aws ecr get-login-password --region ${var.region} | docker login --username AWS --password-stdin ${local.account_id}.dkr.ecr.${var.region}.amazonaws.com
      cd ${path.module}/../../
      docker build -f Dockerfile -t ${aws_ecr_repository.repo.repository_url}:${local.ecr_image_tag} .
      docker push ${aws_ecr_repository.repo.repository_url}:${local.ecr_image_tag}
    EOF
  }
}

data "aws_ecr_image" "lambda_image" {
  depends_on = [
    "null_resource.ecr_image"
  ]

  repository_name = aws_ecr_repository.repo.name
  image_tag       = local.ecr_image_tag
}


output "image_uri" {
  value = "${aws_ecr_repository.repo.repository_url}@${data.aws_ecr_image.lambda_image.id}"
}

In the above example, I am using Terraform, but you could just as easily reproduce the same scenario in CloudFormation or directly through the AWS CLI.
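To tie it together, that image_uri output feeds the image_uri variable of the Lambda function shown earlier. A minimal sketch of the wiring, assuming both pieces live in their own modules (the module paths, role and values are hypothetical):

module "ecr_image" {
  source = "./modules/ecr-image"   # wraps the repo, null_resource and data source above
  region = "eu-west-1"
}

module "my_lambda" {
  source      = "./modules/lambda"            # wraps the aws_lambda_function above
  image_uri   = module.ecr_image.image_uri    # the output defined above
  role_arn    = aws_iam_role.lambda_exec.arn  # an execution role defined elsewhere
  name_prefix = "dev"
  name_suffix = "-docker"
  timeout     = 30
  memory_size = 512
  tags        = { Project = "my-app-1" }
}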

CodePudding user response:

You don't need to use different images for every Lambda function. It is possible to override the command, so you can have the same image for all functions and just override the command to point to the specific handler for each function. Here are the docs for the Serverless Framework, which describe how to override the command: https://www.serverless.com/framework/docs/providers/aws/guide/functions/#referencing-container-image-as-a-target

If you are not using the Serverless Framework, you can override it in a similar way in raw CloudFormation or manually via the AWS console.
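For example, sticking with the Terraform style of the first answer, a minimal sketch of two functions sharing one image (the handler names are placeholders):

resource "aws_lambda_function" "first" {
  function_name = "my-first-lambda"
  package_type  = "Image"
  image_uri     = var.image_uri          # same shared image for both functions
  role          = var.role_arn

  image_config {
    command = ["app.first_handler"]      # handler for the first function
  }
}

resource "aws_lambda_function" "second" {
  function_name = "my-second-lambda"
  package_type  = "Image"
  image_uri     = var.image_uri          # same shared image
  role          = var.role_arn

  image_config {
    command = ["app.second_handler"]     # different handler, same image
  }
}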
