How can we prevent existing EBS volumes from being deleted?


I'm using Terraform 1.1.3 with AWS provider 3.75.2 to write Terraform code for an existing two-node infrastructure. The relevant snippets are below:

In the module (module.xyz):
resource "aws_ebs_volume" "backend-logs" {
  count = var.create_ebs_log_volumes ? var.backend_nodes_qty : 0

  availability_zone = element(data.aws_subnet.backend.*.availability_zone, count.index)
  size              = var.volume_log_size
  type              = var.ebs_volume_type
  encrypted         = var.ebs_enable_encryption
  kms_key_id        = var.ebs_encryption_key_id
}

In the root module:

resource "aws_volume_attachment" "backend-logs" {
  count       = var.backend_nodes_qty
  device_name = "/dev/sdf"
  volume_id   = element(module.xyz.backend_ebs_volume_log_ids, count.index)
  instance_id = element(module.xyz.backend_instance_ids, count.index)
}
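
For context, the snippets assume variable declarations and module outputs roughly like the following (the names are taken from the code above; the types, and the aws_instance.backend resource name in the outputs, are placeholders, not from the original code):

variable "create_ebs_log_volumes" { type = bool }
variable "backend_nodes_qty"      { type = number }
variable "volume_log_size"        { type = number }
variable "ebs_volume_type"        { type = string }
variable "ebs_enable_encryption"  { type = bool }
variable "ebs_encryption_key_id"  { type = string }

# Module outputs consumed by the root-level aws_volume_attachment resources
output "backend_ebs_volume_log_ids" {
  value = aws_ebs_volume.backend-logs.*.id
}

output "backend_instance_ids" {
  # assumes the module creates the nodes as aws_instance.backend
  value = aws_instance.backend.*.id
}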

I've imported the instance, volume, and attachment resources successfully:

terraform import module.xyz.aws_ebs_volume.backend-logs[0] vol-0123456789abcedf0
terraform import module.xyz.aws_ebs_volume.backend-logs[1] vol-0123456789abcedf1
terraform import aws_volume_attachment.backend-logs[0] /dev/sdf:vol-0123456789abcedf0:i-0123456789abcedf0
terraform import aws_volume_attachment.backend-logs[1] /dev/sdf:vol-0123456789abcedf1:i-0123456789abcedf1

When I run terraform plan, it tells me that the volumes are going to be destroyed and recreated. How can we avoid that? Thanks.

  # aws_volume_attachment.backend-logs[0] must be replaced
-/+ resource "aws_volume_attachment" "backend-logs" {
      ~ id          = "vai-1993905001" -> (known after apply)
      ~ volume_id   = "vol-0123456789abcedf0" -> (known after apply) # forces replacement
        # (2 unchanged attributes hidden)
    }

  # aws_volume_attachment.backend-logs[1] must be replaced
-/+ resource "aws_volume_attachment" "backend-logs" {
      ~ id          = "vai-1955292002" -> (known after apply)
      ~ volume_id   = "vol-0123456789abcedf1" -> (known after apply) # forces replacement
        # (2 unchanged attributes hidden)
    }
    
  # module.xyz.aws_ebs_volume.backend-logs[0] must be replaced
-/+ resource "aws_ebs_volume" "backend-logs" {
      ~ arn                  = "arn:aws:ec2:us-west-2:1234567890:volume/vol-0123456789abcedf0" -> (known after apply)
      ~ availability_zone    = "us-west-2a" -> (known after apply) # forces replacement
      ~ id                   = "vol-0123456789abcedf0" -> (known after apply)
      ~ iops                 = 150 -> (known after apply)
        kms_key_id           = (known after apply)
      - multi_attach_enabled = false -> null
        snapshot_id          = (known after apply)
      ~ throughput           = 0 -> (known after apply)
        # (3 unchanged attributes hidden)
    }

  # module.xyz.aws_ebs_volume.backend-logs[1] must be replaced
-/+ resource "aws_ebs_volume" "backend-logs" {
      ~ arn                  = "arn:aws:ec2:us-west-2:1234567890:volume/vol-0123456789abcedf1" -> (known after apply)
      ~ availability_zone    = "us-west-2b" -> (known after apply) # forces replacement
      ~ id                   = "vol-0123456789abcedf1" -> (known after apply)
      ~ iops                 = 150 -> (known after apply)
        kms_key_id           = (known after apply)
      - multi_attach_enabled = false -> null
        snapshot_id          = (known after apply)
      ~ throughput           = 0 -> (known after apply)
        # (3 unchanged attributes hidden)
    }

CodePudding user response:

It seems the issue is the availability zone: the plan shows availability_zone as (known after apply), so Terraform cannot prove it is unchanged and wants to replace the volumes, which in turn forces the attachments to be replaced (their volume_id becomes unknown). As a workaround, you can add a lifecycle block to the aws_ebs_volume resource so that changes to availability_zone are ignored:

  lifecycle {
    ignore_changes = [availability_zone]
  }

Once the volumes are no longer replaced, the aws_volume_attachment resources keep their existing volume_id values and stop being planned for replacement as well.
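
For reference, here is the volume resource from the question with the block in place (only the lifecycle block is added; everything else is unchanged):

resource "aws_ebs_volume" "backend-logs" {
  count = var.create_ebs_log_volumes ? var.backend_nodes_qty : 0

  availability_zone = element(data.aws_subnet.backend.*.availability_zone, count.index)
  size              = var.volume_log_size
  type              = var.ebs_volume_type
  encrypted         = var.ebs_enable_encryption
  kms_key_id        = var.ebs_encryption_key_id

  # Keep the imported volumes even if the AZ computed from the data
  # source differs or is unknown at plan time.
  lifecycle {
    ignore_changes = [availability_zone]
  }
}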