Terraform conditional `for_each` with downstream dependencies


Given some conditional/filtered for_each statement, how can I use the remaining objects for downstream dependencies?

Note: Terraform 0.13.7

For example, if a user does not have an S3 bucket, Terraform should create one and set its notification policy. If they do have a bucket, Terraform should look up the existing bucket and set its notification.

So far I've tried formatting my payload like so:

"snowpipes": {
  . . .
  "create_staging_bucket": true,
  "staging_bucket": {
    "name": "existing-bucket-deployment",
    "url": "old-dirty-bucket",
    "arn": "arn:aws:s3:::old-dirty-bucket"
  },
  . . .
}

And then constructed my Terraform configuration like so:

resource "aws_s3_bucket" "staging_bucket" {
  for_each = {for k, v in var.snowpipes : k => v if v.create_staging_bucket == true}
  bucket = lower(each.value.staging_bucket.url)
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  for_each = var.snowpipes
  bucket = aws_s3_bucket.staging_bucket[each.key].id
  . . .
}

but then I get errors like this, suggesting that the given key was filtered out:

Error: Invalid index

  on main.tf line 504, in resource "aws_s3_bucket_notification" "bucket_notification":
 504:   bucket = aws_s3_bucket.staging_bucket[each.key].id
    |----------------
    | aws_s3_bucket.staging_bucket is object with no attributes
    | each.key is "existing-bucket-deployment"

The given key does not identify an element in this collection value.

Not sure if there's a way to swap back and forth between a resource and a data object?
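
(For reference, this is roughly the kind of lookup I had in mind for the buckets that already exist, though I don't know how to join it back up with the resource above; treat it as a sketch, not working code:)

data "aws_s3_bucket" "existing_staging_bucket" {
  for_each = {for k, v in var.snowpipes : k => v if v.create_staging_bucket == false}
  bucket   = lower(each.value.staging_bucket.url)
}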

CodePudding user response:

I would typically recommend keeping a shared module simpler by making a hard decision about whether creating the bucket is in its scope or not. If you decide the bucket is out of scope, the calling module would always declare its own S3 bucket and pass it in. That said, I can see that this kind of flexibility is sometimes convenient, and it is possible at the expense of some extra complexity in the configuration.
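
For example, if the bucket were out of the module's scope, the calling configuration might look roughly like this (the module path and variable names here are placeholders, not something from your configuration):

resource "aws_s3_bucket" "staging" {
  bucket = "old-dirty-bucket"
}

module "snowpipe" {
  source = "./modules/snowpipe" # placeholder path

  # placeholder input variables: the module only receives the bucket details
  # and never creates the bucket itself
  staging_bucket_name = aws_s3_bucket.staging.bucket
  staging_bucket_arn  = aws_s3_bucket.staging.arn
}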

Let's start by showing the variable declaration I'm going to assume for the rest of this:

variable "snowpipes" {
  type = map(object({
    create_staging_bucket = bool
    staging_bucket = object({
      name = string
      url  = string
      arn  = string
    })
    # (and whatever else you need, immaterial to this question)
  }))
}
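
For example, a value of that type corresponding to the snippet in your question might look like the following (the second entry and its bucket names are hypothetical, just to show the mixed case):

snowpipes = {
  # mirrors the entry from the question
  "existing-bucket-deployment" = {
    create_staging_bucket = true
    staging_bucket = {
      name = "existing-bucket-deployment"
      url  = "old-dirty-bucket"
      arn  = "arn:aws:s3:::old-dirty-bucket"
    }
  }

  # hypothetical second entry whose bucket already exists
  "preexisting-deployment" = {
    create_staging_bucket = false
    staging_bucket = {
      name = "preexisting-deployment"
      url  = "some-existing-bucket"
      arn  = "arn:aws:s3:::some-existing-bucket"
    }
  }
}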

Next let's declare the aws_s3_bucket resource for the subset of these elements that have create_staging_bucket set, which is the same as what you wrote already:

resource "aws_s3_bucket" "staging_bucket" {
  for_each = {
    for k, v in var.snowpipes : k => v
    if v.create_staging_bucket == true
  }

  bucket = lower(each.value.staging_bucket.url)
}

So far I hope I've just repeated essentially what you already had. My next step would be to merge the results of this resource into the settings from the original variable in order to create a flat map of all of the staging buckets, regardless of whether they were created here or not:

locals {
  staging_buckets = merge(
    { for k, sp in var.snowpipes : k => sp.staging_bucket },
    {
      for k, b in aws_s3_bucket.staging_bucket : k => {
        name = b.bucket
        url  = b.bucket # (not sure about this, but following your example above)
        arn  = b.arn
      }
    }
  )
}

Now we're back to a map that has all of the same keys as we started with in var.snowpipes, where some of the elements are verbatim what was in the input and others are synthetic, based on the resource we declared. Due to the priority behavior of merge, it'll prefer the value from the second map over the one from the first wherever the map keys collide.
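
A quick illustration of that precedence (the values here are made up, just to show the collision behavior):

locals {
  example = merge(
    { a = "from-first", b = "only-in-first" },
    { a = "from-second" }
  )
  # local.example is { a = "from-second", b = "only-in-first" }
}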

We can use that for the bucket notification resource:

resource "aws_s3_bucket_notification" "bucket_notification" {
  for_each = local.staging_buckets

  bucket = each.value.name
  # ...
}