I have an S3 bucket with the following "folder" structure:
Bucket1 ----> /Partner1 ----> /Client1 ----> /User1
   |             |               |---------> /User2
   |             |
   |             |----------> /Client2 ----> /User1
   |
   |--------> /Partner2 ----> /Client1 ----> /User1
and so on.
I'm trying to set up replication from this bucket to another such that a file placed in
Bucket1/Partner1/client1/User1/
should replicate to
Bucket2/Partner1/client1/User1/,
a file placed in
Bucket1/Partner2/client1/User2/
should replicate to
Bucket2/Partner2/client1/User2/,
and so on.
I'm trying to achieve this with the following Terraform code:
locals {
  s3_input_folders = [
    "Partner1/client1/User1",
    "Partner1/client1/User2",
    "Partner1/client1/User3",
    "Partner1/client1/User4",
    "Partner1/client1/User5",
    "Partner1/client2/User1",
    "Partner1/client3/User1",
    "Partner2/client1/User1",
    "Partner3/client1/User1"
  ]
}
resource "aws_s3_bucket_replication_configuration" "replication" {
for_each = local.s3_input_folders
depends_on = [aws_s3_bucket_versioning.source_bucket]
role = aws_iam_role.s3-replication-prod[0].arn
bucket = aws_s3_bucket.source_bucket.id
rule {
id = each.value
filter {
prefix = each.value
}
status = "Enabled"
destination {
bucket = "arn:aws:s3:::${var.app}-dev"
storage_class = "ONEZONE_IA"
access_control_translation {
owner = "Destination"
}
account = var.dev_account_id
}
delete_marker_replication {
status = "Enabled"
}
}
}
This does not loop and create nine separate rules; instead each instance overwrites the same replication configuration on every run, and I end up with only one rule.
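For reference, the depends_on above points at versioning on the source bucket, which is defined elsewhere roughly like the sketch below (the resource names are taken from the snippet above; the destination bucket is versioned as well, since replication requires versioning on both sides):
# Sketch of the versioning prerequisite referenced by depends_on above;
# replication requires versioning on both the source and destination buckets.
resource "aws_s3_bucket_versioning" "source_bucket" {
  bucket = aws_s3_bucket.source_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}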
CodePudding user response:
You should use a dynamic block. A bucket can have only one replication configuration, so each for_each instance of aws_s3_bucket_replication_configuration ends up managing (and overwriting) that same configuration. Define the resource once and generate a rule block per prefix instead:
resource "aws_s3_bucket_replication_configuration" "replication" {
depends_on = [aws_s3_bucket_versioning.source_bucket]
role = aws_iam_role.s3-replication-prod[0].arn
bucket = aws_s3_bucket.source_bucket.id
dynamic "rule" {
for_each = toset(local.s3_input_folders)
content {
id = rule.value
filter {
prefix = rule.value
}
status = "Enabled"
destination {
bucket = "arn:aws:s3:::${var.app}-dev"
storage_class = "ONEZONE_IA"
access_control_translation {
owner = "Destination"
}
account = var.dev_account_id
}
delete_marker_replication {
status = "Enabled"
}
}
}
}
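For completeness, role = aws_iam_role.s3-replication-prod[0].arn assumes a replication role defined elsewhere; a minimal sketch of what that might look like is below. The count, the name, and everything else here are assumptions based only on the reference in the snippets, and the policy granting the standard replication permissions on the source and destination buckets is omitted:
# Hypothetical sketch of the replication role referenced above; only the
# trust policy is shown, the replication permissions policy is attached
# separately and omitted here.
resource "aws_iam_role" "s3-replication-prod" {
  count = 1                               # the [0] index in the snippets implies count is used
  name  = "${var.app}-s3-replication-prod"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Action    = "sts:AssumeRole"
        Principal = { Service = "s3.amazonaws.com" }
      }
    ]
  })
}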
CodePudding user response:
Thanks, Marcin. The dynamic block construct you mentioned creates the rule blocks, but the apply fails because AWS requires multiple filter-based replication rules to be differentiated by priority. Some slight modifications achieve this:
locals {
  s3_input_folders_list_counter = tolist([
    for i in range(length(local.s3_input_folders)) : i
  ])

  s3_input_folders_count_map = zipmap(local.s3_input_folders_list_counter, tolist(local.s3_input_folders))
}
resource "aws_s3_bucket_replication_configuration" "replication" {
depends_on = [aws_s3_bucket_versioning.source_bucket]
role = aws_iam_role.s3-replication-prod[0].arn
bucket = aws_s3_bucket.source_bucket.id
dynamic "rule" {
for_each = local.s3_input_folders_count_map
content {
id = rule.key
priority = rule.key
filter {
prefix = rule.value
}
status = "Enabled"
destination {
bucket = "arn:aws:s3:::${var.app}-dev"
storage_class = "ONEZONE_IA"
access_control_translation {
owner = "Destination"
}
account = var.dev_account_id
}
delete_marker_replication {
status = "Enabled"
}
}
}
}
which creates rules like these:
rule {
  id       = "0"
  priority = 0
  status   = "Enabled"
  ...
}

rule {
  id       = "1"
  priority = 1
  status   = "Enabled"
  ...
}
and so on...
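As a side note, the two helper locals can probably be collapsed into a single for expression that builds the same index => prefix map; this is purely a stylistic alternative, untested against the exact setup above:
locals {
  # Same index => prefix map as above, built directly from the folder list.
  s3_input_folders_count_map = {
    for idx, folder in local.s3_input_folders : tostring(idx) => folder
  }
}
Either way, the priorities follow list order, so inserting a new folder in the middle of the list shifts the priority (and id) of every later rule.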