Using file function with Terraform workspace


I have a resource that I am creating in Terraform. One of the resource's attributes reads its values from a separate JSON file, and I want the file that is read to depend on my Terraform workspace. Below are my resource and the error message. If it is possible to use Terraform workspaces inside the file function, any insight on how to achieve this would be helpful.

Terraform Resource

resource "aws_ecs_task_definition" "task_definition" {
  family                   = "${var.application_name}-${var.application_environment[var.region]}"
  execution_role_arn       = aws_iam_role.ecs_role.arn
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  requires_compatibilities = ["FARGATE"]
  container_definitions    = file("scripts/ecs/${terraform.workspace}.json")
}

Terraform Error

Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal object into Go value of type []*ecs.ContainerDefinition

on ecs.tf line 26, in resource "aws_ecs_task_definition" "task_definition":
  26:   container_definitions    = file("scripts/ecs/${terraform.workspace}.json")

I am taking this approach because I have multiple Terraform workspaces set up and would like to keep my Terraform configuration as identical as possible across them.

Container Definition

{
  "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/fargate-devstage",
          "awslogs-region": "us-east-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": [
        "[\"sh\"",
        "\"/tmp/init.sh\"]"
      ],
      "portMappings": [
        {
          "hostPort": 9003,
          "protocol": "tcp",
          "containerPort": 9003
        }
      ],
      "cpu": 0,
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "image": "xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/fargate:latest",
      "essential": true,
      "name": "fargate"
    }
  ],
  "placementConstraints": [],
  "memory": "1024",
  "compatibilities": [
    "EC2",
    "FARGATE"
  ],
  "taskDefinitionArn": "arn:aws:ecs:us-east-2:xxxxxxxxxxxx:task-definition/fargate-devstage:45",
  "family": "fargate-devstage",
  "requiresAttributes": [
    {
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
    },
    {
      "name": "ecs.capability.execution-role-awslogs"
    },
    {
      "name": "com.amazonaws.ecs.capability.ecr-auth"
    },
    {
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
    },
    {
      "name": "ecs.capability.execution-role-ecr-pull"
    },
    {
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
    },
    {
      "name": "ecs.capability.task-eni"
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "512",
  "revision": 45,
  "status": "ACTIVE",
  "volumes": []
}

CodePudding user response:

You have to provide only the container definitions, not the entire task definition, in container_definitions. So your JSON would be something along the lines of:

 [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/fargate-devstage",
          "awslogs-region": "us-east-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": [
        "sh",
        "/tmp/init.sh"
      ],
      "portMappings": [
        {
          "hostPort": 9003,
          "protocol": "tcp",
          "containerPort": 9003
        }
      ],
      "cpu": 0,
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "image": "xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/fargate:latest",
      "essential": true,
      "name": "fargate"
    }
  ]

All other task-level settings, such as the task execution role, CPU, and memory, must be provided directly on the aws_ecs_task_definition resource, not inside container_definitions. (Note that your entryPoint was also double-encoded; each array element should be a plain string.)
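Alternatively, if you would rather keep the full task-definition JSON files you already have per workspace, a sketch (not part of the answer above, and assuming the same scripts/ecs/&lt;workspace&gt;.json layout from the question) is to decode the file in Terraform and extract just the containerDefinitions key:

```hcl
locals {
  # Hypothetical: decode the full task-definition export and keep
  # only the containerDefinitions array that the provider expects.
  task_def_file = jsondecode(file("scripts/ecs/${terraform.workspace}.json"))
}

resource "aws_ecs_task_definition" "task_definition" {
  family                   = "${var.application_name}-${var.application_environment[var.region]}"
  execution_role_arn       = aws_iam_role.ecs_role.arn
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  requires_compatibilities = ["FARGATE"]

  # Re-encode just the array; this avoids hand-trimming the files.
  container_definitions = jsonencode(local.task_def_file["containerDefinitions"])
}
```

jsondecode and jsonencode are built into Terraform 0.12 and later, so this keeps the per-workspace files untouched while satisfying the provider.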

CodePudding user response:

There are many ways you can approach this; however, in my opinion the best one is to use the template_file data source with variable replacement.

Here is an example of how you can use it:

data "template_file" "task_definiton" {
  template = file("${path.module}/files/task_definition.json")

  vars = {
    region                             = var.region
    secrets_manager_arn                = module.xxxx.secrets_manager_version_arn
    container_memory                   = var.container_memory
    memory_reservation                 = var.container_memory_reservation
    container_cpu                      = var.container_cpu
  }
}

resource "aws_ecs_task_definition" "task" {
  family                   = "${var.environment}-${var.app_name}"
  execution_role_arn       = aws_iam_role.ecs_task_role.arn
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = var.fargate_ec2_cpu
  memory                   = var.fargate_ec2_memory
  task_role_arn            = aws_iam_role.ecs_task_role.arn
  container_definitions    = data.template_file.task_definiton.rendered
}

Note how the data source is referenced through its rendered attribute, which returns the file contents with the variables interpolated:

data.template_file.task_definiton.rendered

For the template format and more information about template files, you can refer to Terraform's official documentation here: https://registry.terraform.io/providers/hashicorp/template/latest/docs/data-sources/file
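For reference, the entries passed in vars are referenced inside the template file with ${...} placeholders. A minimal, hypothetical files/task_definition.json fragment matching the vars above might look like:

```json
[
  {
    "name": "fargate",
    "image": "xxxxxxxxxxxx.dkr.ecr.${region}.amazonaws.com/fargate:latest",
    "cpu": ${container_cpu},
    "memory": ${container_memory},
    "memoryReservation": ${memory_reservation},
    "essential": true
  }
]
```

(On Terraform 0.12 and later, the built-in templatefile() function can render the same template directly, without the separate template provider.)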

Edit 1: I should also mention that if you take this approach, you must declare the variables required by your template in your Terraform configuration and supply values for them per workspace.
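As a sketch (hypothetical names), each value the template consumes must be backed by a variable declaration, and a map keyed on terraform.workspace is one way to vary the values per workspace:

```hcl
variable "region" {
  type = string
}

# Hypothetical: per-workspace CPU values keyed by workspace name.
variable "container_cpu" {
  type = map(number)
  default = {
    dev  = 256
    prod = 512
  }
}
```

The template_file vars entry would then look up the current workspace, e.g. container_cpu = var.container_cpu[terraform.workspace].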
