Terraform importing S3 module not working as expected


After enabling WORM (S3 Object Lock) on one of our AWS S3 buckets, Terraform no longer lets me deploy any changes to it, complaining that the bucket already exists.

For context, we have a remote state in S3, but that is not the bucket being affected, and we are using the terraform-aws-modules/s3-bucket/aws module for our S3 buckets.

The command I ran initially was

terraform -chdir=infrastructure/wazuh_app/resources import -config=../resources -var-file=../config/stage/terraform.tfvars "module.wazuh_app.module.wazuh_log_archive.module.bucket.aws_s3_bucket.this[0]" [BUCKET NAME]

but on running that I received the error:

error creating S3 Bucket ([BUCKET NAME]): BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.

So I removed it from the state with the command below and then re-ran the import command above:

terraform -chdir=infrastructure/wazuh_app/resources state rm module.wazuh_app.module.wazuh_log_archive.module.bucket.aws_s3_bucket.this[0]

However, when I try to apply the changes again, I get the same "error creating S3 Bucket" error.

As asked, here's the code that is being used at the wazuh_log_archive level:

module "bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "3.3.0"

  bucket = "${var.name_prefix}-${var.log_bucket_name}"
  acl    = "private"

  force_destroy = true

  versioning = {
    enabled = true
  }

  server_side_encryption_configuration = {
    rule = {
      bucket_key_enabled = true

      apply_server_side_encryption_by_default = {
        kms_master_key_id = module.kms_wazuh_archive_key.key_arn
        sse_algorithm     = "aws:kms"
      }
    }
  }

  lifecycle_rule = [
    {
      id      = "[ID]"
      enabled = true

      expiration = {
        days = var.s3_retention_period
      }
    }
  ]
}

resource "aws_s3_bucket_object_lock_configuration" "worm_configuration" {
  bucket = module.bucket.s3_bucket_id

  rule {
    default_retention {
      mode = "GOVERNANCE"
      days = var.worm_retention
    }
  }
  token = var.token_required ? data.aws_ssm_parameter.worm_token.value : null
}

data "aws_ssm_parameter" "worm_token" {
  name = "/${var.name_prefix}-${var.log_bucket_name}/worm-token"
}

In the parent modules it's called via this chain:

module "wazuh_log_archive" {
  source = "[wazuh_log_archive SOURCE]"

  log_bucket_name     = var.log_bucket_name
  name_prefix         = var.name_prefix
  namespace           = var.namespace
  retention_period    = var.retention_period
  s3_retention_period = var.s3_retention_period
  worm_retention      = var.worm_retention
  token_required      = var.token_required

  depends_on = [
    module.wazuh_shared_resources
  ]
}

module "wazuh_app" {
  source = "[wazuh_app SOURCE]"
  worm_retention = var.worm_retention
  token_required = var.token_required
}

I am at a loss. I know I am importing the correct bucket, and I know I am removing the correct bucket from the state, as I've verified both with the output of terraform apply and with terraform state list.
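For reference, the check looked roughly like this (same -chdir path as the commands above; the grep is just there to narrow the output):

terraform -chdir=infrastructure/wazuh_app/resources state list | grep aws_s3_bucket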

Anyone have any clue what it could be?

CodePudding user response:

The issue was not an import issue.

After running some tests, I found that Terraform was trying to recreate my S3 bucket every single time I ran an apply. This didn't make sense, as there had been no changes to the Terraform source code between its first deploy and its current state.

After looking through the Terraform output for a while and trying changes at different levels of the codebase, I found the issue:

  # module.wazuh_app.module.wazuh_log_archive.module.bucket.aws_s3_bucket.this[0] must be replaced
+/- resource "aws_s3_bucket" "this" {
        acceleration_status         = (known after apply)
        acl                         = (known after apply)
      ~ arn                         = "[BUCKET ARN]" -> (known after apply)
      ~ bucket_domain_name          = "[BUCKET DOMAIN NAME]" -> (known after apply)
      ~ bucket_regional_domain_name = "[BUCKET REGIONAL DOMAIN NAME]" -> (known after apply)
        force_destroy               = true
      ~ hosted_zone_id              = "[ID]" -> (known after apply)
      ~ id                          = "[BUCKET ID]" -> (known after apply)
      ~ object_lock_enabled         = true -> false # forces replacement
        policy                      = (known after apply)
      ~ region                      = "eu-west-1" -> (known after apply)
      ~ request_payer               = "BucketOwner" -> (known after apply)
      - tags                        = {} -> null
        website_domain              = (known after apply)
        website_endpoint            = (known after apply)
        # (2 unchanged attributes hidden)

The sheer amount of Terraform output meant that I missed one tiny line, this line being:

      ~ object_lock_enabled         = true -> false # forces replacement

What the aws_s3_bucket module documentation didn't make clear is that once you add a WORM token to a bucket, you have to add an additional parameter to the module so that it remembers the bucket is using WORM on future deployments. However, this parameter was not necessary on the deployment where you associate the token with the bucket.

This requirement is not only undocumented but also slightly illogical, as the bucket already had an object_lock_configuration defined for it, which was itself a required step to enable WORM on the already existing bucket during the initial redeploy. This inconsistency is quite frustrating.
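For anyone with the same setup, the fix ends up looking roughly like the sketch below. The assumption here is that the module's object_lock_enabled input (exposed by recent versions of terraform-aws-modules/s3-bucket/aws) is the parameter in question; check the inputs of your module version before copying this.

module "bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "3.3.0"

  bucket = "${var.name_prefix}-${var.log_bucket_name}"
  acl    = "private"

  force_destroy = true

  # Assumed fix: declare that Object Lock (WORM) is enabled so the underlying
  # aws_s3_bucket no longer plans object_lock_enabled = true -> false, which is
  # the change that forces the replacement.
  object_lock_enabled = true

  # ... versioning, server_side_encryption_configuration and lifecycle_rule unchanged ...
}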

The overall conclusion is that I missed a single line in the Terraform plan output and paid for it with a couple of hours of lost work.
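If you're in a similar spot, one way to avoid missing that line again (assuming a standard shell with grep is available) is to filter the plan output for the replacement marker before applying:

terraform -chdir=infrastructure/wazuh_app/resources plan -var-file=../config/stage/terraform.tfvars -no-color | grep "forces replacement"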
