I have created a REST API Lambda using Spring Boot. When I build a JAR from it and deploy it to LocalStack with Terraform, I can call the API and it works as expected.
But when I instead build a Docker image from my code and adapt the Terraform to use that image_uri, I get the following error when I call the API:
Lambda runtime initialization error for function arn:aws:lambda:eu-west-2:000000000000:function:restapi: b'{"errorMessage":"Error loading class com.example.lambda.StreamLambdaHandler: Metaspace","errorType":"java.lang.OutOfMemoryError"}'
And this is the Terraform:
variable "STAGE" {
  type    = string
  default = "local"
}

variable "AWS_REGION" {
  type    = string
  default = "eu-west-2"
}

variable "IMG_URI" {
  type    = string
  default = "localhost:4510/com.example-restapi-lambda:1.0.0"
}

variable "FUNCTION_NAME" {
  type    = string
  default = "restapi"
}

variable "FUNCTION_HANDLER" {
  type    = string
  default = "com.example.lambda.StreamLambdaHandler"
}
provider "aws" {
  access_key                  = "test_access_key"
  secret_key                  = "test_secret_key"
  region                      = var.AWS_REGION
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    apigateway       = var.STAGE == "local" ? "http://localhost:4566" : null
    cloudformation   = var.STAGE == "local" ? "http://localhost:4566" : null
    cloudwatch       = var.STAGE == "local" ? "http://localhost:4566" : null
    cloudwatchevents = var.STAGE == "local" ? "http://localhost:4566" : null
    iam              = var.STAGE == "local" ? "http://localhost:4566" : null
    lambda           = var.STAGE == "local" ? "http://localhost:4566" : null
    s3               = var.STAGE == "local" ? "http://localhost:4566" : null
  }
}
resource "aws_iam_role" "lambda-execution-role" {
  name = "lambda-execution-role-${var.FUNCTION_NAME}"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
resource "aws_lambda_function" "restApiLambdaFunction" {
  image_uri     = var.IMG_URI
  function_name = var.FUNCTION_NAME
  role          = aws_iam_role.lambda-execution-role.arn
  handler       = var.FUNCTION_HANDLER
  # handler     = "org.springframework.cloud.function.adapter.aws.FunctionInvoker"
  runtime       = "java11"
  timeout       = 60

  environment {
    variables = {
      MAIN_CLASS = "com.example.lambda.AWSLambdaApp"
      # JAVA_OPTS = "-Xmx5g"
    }
  }
}
resource "aws_api_gateway_rest_api" "rest-api" {
  name = "RestApi-${var.FUNCTION_NAME}"
}

resource "aws_api_gateway_resource" "proxy" {
  rest_api_id = aws_api_gateway_rest_api.rest-api.id
  parent_id   = aws_api_gateway_rest_api.rest-api.root_resource_id
  path_part   = "{proxy+}"
}

resource "aws_api_gateway_method" "proxy" {
  rest_api_id   = aws_api_gateway_rest_api.rest-api.id
  resource_id   = aws_api_gateway_resource.proxy.id
  http_method   = "ANY"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "proxy" {
  rest_api_id             = aws_api_gateway_rest_api.rest-api.id
  resource_id             = aws_api_gateway_method.proxy.resource_id
  http_method             = aws_api_gateway_method.proxy.http_method
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.restApiLambdaFunction.invoke_arn
}

resource "aws_api_gateway_deployment" "rest-api-deployment" {
  depends_on  = [aws_api_gateway_integration.proxy]
  rest_api_id = aws_api_gateway_rest_api.rest-api.id
  stage_name  = var.STAGE
}
resource "aws_cloudwatch_event_rule" "warmup" {
  name                = "warmup-event-rule-${var.FUNCTION_NAME}"
  schedule_expression = "rate(10 minutes)"
}

resource "aws_cloudwatch_event_target" "warmup" {
  target_id = "warmup"
  rule      = aws_cloudwatch_event_rule.warmup.name
  arn       = aws_lambda_function.restApiLambdaFunction.arn
  input     = "{\"httpMethod\": \"SCHEDULE\", \"path\": \"warmup\"}"
}

resource "aws_lambda_permission" "warmup-permission" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.restApiLambdaFunction.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.warmup.arn
}
The closest thing I have seen to a solution is passing JAVA_OPTS to the Docker image to increase the available memory, but I am not sure how to do that via Terraform, or whether it would even solve the problem. Any guidance would be greatly appreciated.
CodePudding user response:
As per my comment, there is a memory_size option [1] in the aws_lambda_function resource. If it is not defined, it defaults to 128 MB, which is far too little for a Spring Boot application; the JVM runs out of Metaspace while loading classes. It probably needs to be increased to avoid the OutOfMemoryError.
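As a sketch, you would add memory_size to the existing function resource (the 1024 here is just an illustrative starting point, not a tuned value; adjust it for your workload):

resource "aws_lambda_function" "restApiLambdaFunction" {
  # ... existing arguments unchanged ...

  # Memory allocated to the function in MB. The default is 128 MB,
  # which leaves the JVM too little room for Metaspace when loading
  # a Spring Boot application's classes.
  memory_size = 1024
}

After changing this, re-run terraform apply so the function configuration is updated.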