terraform - mount EFS to ECS fargate: No such file or directory

Time: 11-24

I am trying to mount a persistent volume into a container, but the container doesn't start because of the error: "No such file or directory".

Here is the relevant configuration:

# EFS
resource "aws_security_group" "allow_nfs_inbound" {
  name   = "${local.resource_prefix}-allow-nfs-inbound"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port        = 2049
    to_port          = 2049
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "all"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_efs_file_system" "persistent" {
  creation_token = "${local.resource_prefix}-efs"
  encrypted      = true
}

resource "aws_efs_mount_target" "mount_targets" {
  count = length(module.vpc.private_subnets)

  file_system_id  = aws_efs_file_system.persistent.id
  subnet_id       = module.vpc.private_subnets[count.index]
  security_groups = [aws_security_group.allow_nfs_inbound.id]
}

resource "aws_efs_access_point" "assets_access_point" {
  file_system_id = aws_efs_file_system.persistent.id

  root_directory {
    path = "/assets"

    creation_info {
      owner_gid   = 0
      owner_uid   = 0
      permissions = "755"
    }
  }
}

resource "aws_efs_access_point" "shared_access_point" {
  file_system_id = aws_efs_file_system.persistent.id

  root_directory {
    path = "/shared"

    creation_info {
      owner_gid   = 0
      owner_uid   = 0
      permissions = "755"
    }
  }
}

# ECS
resource "aws_ecs_task_definition" "backend_task_definition" {
  ...

  container_definitions = jsonencode(
    [

      {
        ...
        mountPoints = [
          {
            sourceVolume  = "assets"
            containerPath = "/app/assets"
            readOnly      = false
          },
          {
            sourceVolume  = "shared"
            containerPath = "/app/shared"
            readOnly      = false
          }
        ]
        volumesFrom = []
        ...
      }
    ]
  )

  volume {
    name = "assets"

    efs_volume_configuration {
      file_system_id = aws_efs_file_system.persistent.id
      root_directory = "/assets"
    }
  }

  volume {
    name = "shared"

    efs_volume_configuration {
      file_system_id = aws_efs_file_system.persistent.id
      root_directory = "/shared"
    }
  }
  ...
}

resource "aws_security_group" "allow_efs" {
  name   = "${local.resource_prefix}-allow-efs"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port        = 2049
    to_port          = 2049
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}

resource "aws_ecs_service" "backend_ecs_service" {

  ...

  network_configuration {
    subnets = setunion(
      module.vpc.public_subnets
    )
    security_groups = [aws_security_group.allow_efs.id]
    assign_public_ip = true
  }

  ...
}

The complete error message is:

ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: b'mount.nfs4: mounting :/assets failed, reason given by server: No such file or directory' : unsuccessful EFS utils command execution; code: 32

The network configuration should be correct, because if I remove the security groups I get a different, network-related error.

EDIT:

Added Docker RUN instructions to install efs-utils:

RUN apt-get update && \
    apt-get -y install git binutils && \
    git clone https://github.com/aws/efs-utils && \
    cd efs-utils && \
    ./build-deb.sh && \
    apt-get -y install ./build/amazon-efs-utils*deb

RUN wget https://bootstrap.pypa.io/pip/3.5/get-pip.py -O /tmp/get-pip.py  && \
    python3 /tmp/get-pip.py && \
    pip3 install botocore

CodePudding user response:

I'm really not sure why you are getting that specific error. You might want to make sure you have platform_version = "1.4.0" in your aws_ecs_service resource definition, since EFS volume support on Fargate requires platform version 1.4.0 or later. I think 1.4.0 is the default now, but you aren't showing your entire resource definition, and if you had it set to something like 1.3.0 that could cause this issue.

Also make sure you have launch_type = "FARGATE" in the aws_ecs_service resource definition (again, I'm having to guess, because you didn't include the full code). That error sounds to me more like an issue that would happen with EC2 deployments running a custom AMI than with Fargate deployments, so double check that you are really deploying to Fargate.
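Putting both suggestions together, the service would look roughly like this (a sketch only; the omitted arguments such as name, cluster, and task_definition are whatever you already have):

```hcl
resource "aws_ecs_service" "backend_ecs_service" {
  # ... name, cluster, task_definition, desired_count, etc. as before ...

  launch_type      = "FARGATE"
  platform_version = "1.4.0" # EFS volumes require Fargate platform 1.4.0 or later

  network_configuration {
    subnets          = module.vpc.public_subnets
    security_groups  = [aws_security_group.allow_efs.id]
    assign_public_ip = true
  }
}
```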

Also, you aren't configuring ECS to use the EFS access points you have created. Without an access point, ECS mounts the raw file system root, where the /assets and /shared directories may simply not exist yet (the access point's creation_info is what creates them) — which matches your "No such file or directory" error. You should change your volume blocks to look like this:

  volume {
    name = "assets"

    efs_volume_configuration {
      file_system_id = aws_efs_file_system.persistent.id
      # When an access point is specified, root_directory must be "/" or
      # omitted; the path is enforced by the access point itself.
      root_directory     = "/"
      transit_encryption = "ENABLED"

      authorization_config {
        access_point_id = aws_efs_access_point.assets_access_point.id
        iam             = "ENABLED"
      }
    }
  }

  volume {
    name = "shared"

    efs_volume_configuration {
      file_system_id     = aws_efs_file_system.persistent.id
      root_directory     = "/"
      transit_encryption = "ENABLED"

      authorization_config {
        access_point_id = aws_efs_access_point.shared_access_point.id
        iam             = "ENABLED"
      }
    }
  }
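Note that with iam = "ENABLED", the ECS task role needs permission to mount and write to the file system. A minimal policy sketch, assuming your task role resource is called aws_iam_role.task_role (adjust to whatever task_role_arn actually points at):

```hcl
resource "aws_iam_role_policy" "task_efs_access" {
  name = "${local.resource_prefix}-efs-access"
  role = aws_iam_role.task_role.id # assumption: your ECS task role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ]
      Resource = aws_efs_file_system.persistent.arn
    }]
  })
}
```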

Also, why are you doing setunion() here?

    subnets = setunion(
      module.vpc.public_subnets
    )

If module.vpc.public_subnets is a list of sets, then you need to call setunion() everywhere you access that module output, such as in the aws_efs_mount_target resource. But if it is a plain list of subnet IDs, that function call is pointless.
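For reference, setunion() is meant for merging two or more collections; with a single list argument it merely converts the list to a set, which the subnets argument accepts anyway. A quick illustration:

```hcl
locals {
  # Merging multiple collections is what setunion() is for:
  all_subnets = setunion(
    module.vpc.public_subnets,
    module.vpc.private_subnets
  )

  # With a single argument the call adds nothing, so it can be dropped:
  service_subnets = module.vpc.public_subnets
}
```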
