Importing existing AWS Resources using Terraform Module


I am trying to import an existing S3 bucket using a Terraform module. I was able to import it successfully, but after the import, when I run the terraform plan command it still shows that Terraform is going to create resources. It would be great if someone could help me figure out what I am doing wrong here.

My Module:

module "log_s3" {
  source               = "../modules/s3/"
  env_name             = var.env_name
  bucket_name          = "${var.product_name}-logs-${var.env_name}"
  enable_versioning    = false
  enable_cors          = false
  logging_bucket       = module.log_s3.log_bucket_id
  enable_bucket_policy = true
  enable_static_site   = false
}

My resource:

resource "aws_s3_bucket" "my_protected_bucket" {
  bucket = var.bucket_name
  tags = {
    environment              = var.env_name
  }
}

resource "aws_s3_bucket_acl" "my_protected_bucket_acl" {
  bucket = aws_s3_bucket.my_protected_bucket.id
  acl    = var.enable_static_site == true ? "public-read" : "private"
}

resource "aws_s3_bucket_public_access_block" "my_protected_bucket_access" {
  bucket = aws_s3_bucket.my_protected_bucket.id

  # Block public access
  block_public_acls       = var.enable_static_site == true ? false : true
  block_public_policy     = var.enable_static_site == true ? false : true
  ignore_public_acls      = var.enable_static_site == true ? false : true
  restrict_public_buckets = var.enable_static_site == true ? false : true
}

resource "aws_s3_bucket_versioning" "my_protected_bucket_versioning" {
  count  = var.enable_versioning ? 1 : 0
  bucket = aws_s3_bucket.my_protected_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_cors_configuration" "my_protected_bucket_cors" {
  count  = var.enable_cors ? 1 : 0
  bucket = aws_s3_bucket.my_protected_bucket.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST", "DELETE", "GET", "HEAD"]
    allowed_origins = ["*"]
    expose_headers  = [""]
  }
  lifecycle {
    ignore_changes = [
      cors_rule
    ]
  }

}

resource "aws_s3_bucket_ownership_controls" "my_protected_bucket_ownership" {
  bucket = aws_s3_bucket.my_protected_bucket.id

  rule {
    object_ownership = "ObjectWriter"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "my_protected_bucket_sse_config" {
  bucket = aws_s3_bucket.my_protected_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}


resource "aws_s3_bucket_policy" "my_protected_bucket_policy" {
  count  = var.enable_bucket_policy ? 1 : 0
  bucket = aws_s3_bucket.my_protected_bucket.id
  policy = <<EOF
{
    "Version": "2012-10-17",
    "Id": "S3-Console-Auto-Gen-Policy-1659086042176",
    "Statement": [
        {
            "Sid": "S3PolicyStmt-DO-NOT-MODIFY-1659086041783",
            "Effect": "Allow",
            "Principal": {
                "Service": "logging.s3.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "${aws_s3_bucket.my_protected_bucket.arn}/*"
        }
    ]
}
EOF
}

resource "aws_s3_object" "my_protected_bucket_object" {
  bucket = var.logging_bucket
  key    = "s3_log/${aws_s3_bucket.my_protected_bucket.id}/"
}

resource "aws_s3_bucket_logging" "my_protected_bucket_logging" {
  bucket = aws_s3_bucket.my_protected_bucket.id
  target_bucket = var.logging_bucket
  target_prefix = "s3_log/${aws_s3_bucket.my_protected_bucket.id}/"
  depends_on    = [aws_s3_bucket.my_protected_bucket, aws_s3_object.my_protected_bucket_object]
}

resource "aws_s3_bucket_website_configuration" "my_protected_bucket_static" {
  bucket = aws_s3_bucket.my_protected_bucket.id
  count  = var.enable_static_site ? 1 : 0

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

My output.tf:

output "log_bucket_id" {
  value = aws_s3_bucket.my_protected_bucket.id
}
Terraform import command: I ran the command below to import the bucket:

terraform import module.log_s3.aws_s3_bucket.my_protected_bucket abcd-logs-dev

Output:

module.log_s3.aws_s3_bucket.my_protected_bucket: Import prepared!
  Prepared aws_s3_bucket for import
module.log_s3.aws_s3_bucket.my_protected_bucket: Refreshing state... [id=abcd-logs-dev]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Terraform plan:

After the successful import, when I ran the terraform plan command it showed that Terraform is going to create new resources:

module.log_s3.aws_s3_bucket.my_protected_bucket: Refreshing state... [id=abcd-logs-dev]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.log_s3.aws_s3_bucket_acl.my_protected_bucket_acl will be created
    resource "aws_s3_bucket_acl" "my_protected_bucket_acl" {
        acl    = "private"
        bucket = "abcd-logs-dev"
        id     = (known after apply)

        access_control_policy {
            grant {
                permission = (known after apply)

                grantee {
                    display_name  = (known after apply)
                    email_address = (known after apply)
                    id            = (known after apply)
                    type          = (known after apply)
                    uri           = (known after apply)
                }
            }

            owner {
                display_name = (known after apply)
                id           = (known after apply)
            }
        }
    }

  # module.log_s3.aws_s3_bucket_logging.my_protected_bucket_logging will be created
    resource "aws_s3_bucket_logging" "my_protected_bucket_logging" {
        bucket        = "abcd-logs-dev"
        id            = (known after apply)
        target_bucket = "abcd-logs-dev"
        target_prefix = "s3_log/abcd-logs-dev/"
    }

  # module.log_s3.aws_s3_bucket_ownership_controls.my_protected_bucket_ownership will be created
    resource "aws_s3_bucket_ownership_controls" "my_protected_bucket_ownership" {
        bucket = "abcd-logs-dev"
        id     = (known after apply)

        rule {
            object_ownership = "ObjectWriter"
        }
    }

  # module.log_s3.aws_s3_bucket_policy.my_protected_bucket_policy[0] will be created
    resource "aws_s3_bucket_policy" "my_protected_bucket_policy" {
        bucket = "abcd-logs-dev"
        id     = (known after apply)
        policy = jsonencode(
            {
                Id        = "S3-Console-Auto-Gen-Policy-145342356879"
                Statement = [
                    {
                        Action    = "s3:PutObject"
                        Effect    = "Allow"
                        Principal = {
                            Service = "logging.s3.amazonaws.com"
                        }
                        Resource  = "arn:aws:s3:::abcd-logs-dev/*"
                        Sid       = "S3PolicyStmt-DO-NOT-MODIFY-145342356879"
                    },
                ]
                Version   = "2012-10-17"
            }
        )
    }

  # module.log_s3.aws_s3_bucket_public_access_block.my_protected_bucket_access will be created
    resource "aws_s3_bucket_public_access_block" "my_protected_bucket_access" {
        block_public_acls       = true
        block_public_policy     = true
        bucket                  = "abcd-logs-dev"
        id                      = (known after apply)
        ignore_public_acls      = true
        restrict_public_buckets = true
    }

  # module.log_s3.aws_s3_bucket_server_side_encryption_configuration.my_protected_bucket_sse_config will be created
    resource "aws_s3_bucket_server_side_encryption_configuration" "my_protected_bucket_sse_config" {
        bucket = "abcd-logs-dev"
        id     = (known after apply)

        rule {
            apply_server_side_encryption_by_default {
                sse_algorithm = "AES256"
            }
        }
    }

  # module.log_s3.aws_s3_object.my_protected_bucket_object will be created
    resource "aws_s3_object" "my_protected_bucket_object" {
        acl                    = "private"
        bucket                 = "abcd-logs-dev"
        bucket_key_enabled     = (known after apply)
        content_type           = (known after apply)
        etag                   = (known after apply)
        force_destroy          = false
        id                     = (known after apply)
        key                    = "s3_log/abcd-logs-dev/"
        kms_key_id             = (known after apply)
        server_side_encryption = (known after apply)
        storage_class          = (known after apply)
        tags_all               = (known after apply)
        version_id             = (known after apply)
    }

Plan: 7 to add, 0 to change, 0 to destroy.

It would be great if someone could help me figure out what I am doing wrong. Help is much appreciated.

Thanks

CodePudding user response:

The resource you imported is module.log_s3.aws_s3_bucket.my_protected_bucket, which is of type aws_s3_bucket. There is no aws_s3_bucket resource listed in the Terraform plan output, so the import worked: Terraform correctly imported the S3 bucket itself and is not trying to create a new bucket.

The resources the Terraform plan says it is going to create are:

  • module.log_s3.aws_s3_bucket_acl.my_protected_bucket_acl
  • module.log_s3.aws_s3_bucket_logging.my_protected_bucket_logging
  • module.log_s3.aws_s3_bucket_ownership_controls.my_protected_bucket_ownership
  • module.log_s3.aws_s3_bucket_policy.my_protected_bucket_policy[0]
  • module.log_s3.aws_s3_bucket_public_access_block.my_protected_bucket_access
  • module.log_s3.aws_s3_bucket_server_side_encryption_configuration.my_protected_bucket_sse_config
  • module.log_s3.aws_s3_object.my_protected_bucket_object

You haven't imported any of those resources yet. You still need to import each of them separately.
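For illustration, the remaining imports would look roughly like the commands below. This is a sketch assuming the same bucket name (abcd-logs-dev); most S3 sub-resources import by bucket name, but the exact import ID format varies per resource type (the aws_s3_object import ID shown here, bucket/key, is an assumption — check each resource's page in the AWS provider docs), and the count-indexed policy address must be quoted for the shell:

```shell
terraform import module.log_s3.aws_s3_bucket_acl.my_protected_bucket_acl abcd-logs-dev
terraform import module.log_s3.aws_s3_bucket_logging.my_protected_bucket_logging abcd-logs-dev
terraform import module.log_s3.aws_s3_bucket_ownership_controls.my_protected_bucket_ownership abcd-logs-dev
terraform import 'module.log_s3.aws_s3_bucket_policy.my_protected_bucket_policy[0]' abcd-logs-dev
terraform import module.log_s3.aws_s3_bucket_public_access_block.my_protected_bucket_access abcd-logs-dev
terraform import module.log_s3.aws_s3_bucket_server_side_encryption_configuration.my_protected_bucket_sse_config abcd-logs-dev
terraform import module.log_s3.aws_s3_object.my_protected_bucket_object abcd-logs-dev/s3_log/abcd-logs-dev/
```

After each import, re-run terraform plan to confirm the corresponding "will be created" entry has disappeared.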

CodePudding user response:

Yes, the simple problem here is that you are importing only the S3 bucket resource into your state.

When you use a module, it is not enough to import a single resource within that module; you have to run an import command for every resource the module defines.

You are currently running the import command below:

terraform import module.log_s3.aws_s3_bucket.my_protected_bucket abcd-logs-dev

This imports only the S3 bucket into your state. But if you look at your module, it defines other resources as well, so you have to run similar import commands for each of them, like the one below.

terraform import module.log_s3.aws_s3_bucket_acl.my_protected_bucket_acl abcd-logs-dev

Please check the S3 bucket ACL import documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_acl#import
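One caveat worth noting from those docs: if the bucket has a canned ACL applied (as it does here, since the module sets acl = "private"), the import ID for aws_s3_bucket_acl may need to include the ACL value after a comma rather than being the bucket name alone. A hedged example, assuming the same bucket name:

```shell
# Import ID format when a canned ACL is set: <bucket-name>,<acl>
terraform import module.log_s3.aws_s3_bucket_acl.my_protected_bucket_acl abcd-logs-dev,private
```

If the plain bucket-name form fails, try this comma form; the linked registry page lists the accepted variants.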

Similarly, run the import command for every resource in your module and then run terraform plan. Once all of them are in state, the plan will no longer try to create them.
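As a side note, if you are on Terraform v1.5 or later, you can avoid repeating CLI commands by declaring import blocks in configuration and letting terraform plan drive the imports. A minimal sketch, assuming the same addresses and import IDs as above:

```terraform
# One import block per resource already existing in AWS;
# terraform plan then shows these as "will be imported".
import {
  to = module.log_s3.aws_s3_bucket_public_access_block.my_protected_bucket_access
  id = "abcd-logs-dev"
}

import {
  to = module.log_s3.aws_s3_bucket_ownership_controls.my_protected_bucket_ownership
  id = "abcd-logs-dev"
}

# ...repeat for the remaining resources, then run: terraform plan
```

The blocks can be removed once the resources are in state.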
