What and how should I provision Terraform remote state S3 bucket and state locking DynamoDB table?


I have multiple git repositories (e.g. a cars repo and a garage repo), each of which deploys multiple AWS services/resources using Terraform .tf files.

I would like each repo to save its state in an S3 remote backend, so that when a repo deploys its resources from the prod or dev workspace, the state is kept in the correct S3 bucket (prod or dev).

The S3 buckets and folders will look something like:

# AWS Prod bucket
terraform_prod_states Bucket:

 - Path1: /cars/cars.tfstate 
 - Path2: /garage/garage.tfstate 

# AWS Dev bucket    
terraform_dev_states Bucket:

 - Path1: /cars/cars.tfstate 
 - Path2: /garage/garage.tfstate 

But before any repo can deploy and save state in the remote backend, the S3 buckets and permissions need to be set up.
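For illustration, I imagine each repo's backend configuration looking roughly like the sketch below (bucket names as in the layout above). As far as I understand, a backend block cannot interpolate terraform.workspace or variables, so the prod/dev bucket would have to be supplied at terraform init time via partial configuration:

terraform {
  backend "s3" {
    # The bucket (terraform_prod_states or terraform_dev_states) cannot be derived
    # from terraform.workspace here, so it would be passed per environment with
    # `terraform init -backend-config="bucket=..."`.
    key    = "cars/cars.tfstate"   # per-repo state path, matching the layout above
    region = "us-east-1"           # assumed region
  }
}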

The question:

Who should set up the S3 buckets/permissions/DynamoDB tables (for locking)? What would be best practice?

Options:

  1. Should the S3 buckets and tables be created once, manually, from the AWS Management Console?

  2. Should I have a separate repo that is responsible for preparing all the required AWS infrastructure: buckets/permissions/DynamoDB? (In this case, I assume the infra repo should also keep a remote state with locking - who should set that up?)

  3. Should every repo (cars, garage) take care of checking whether the S3 bucket and DynamoDB tables exist and, if required, prepare the remote state resources for its own use?

Feels like a chicken-and-egg problem here.

CodePudding user response:

You can add it to your Terraform configuration, for example:

resource "aws_s3_bucket" "b" {
  bucket = lower("${terraform.workspace}.mysite${var.project_name}")
  acl    = "private"
  tags = {
    Environment = "${terraform.workspace}"
  }
}

but many more security-related settings are required; it's all in the documentation, and you can also see this tutorial: https://www.bacancytechnology.com/blog/aws-s3-bucket-using-terraform

CodePudding user response:

Should the S3 buckets and tables be created once, manually, from the AWS Management Console?

Not necessarily - I would advise creating a separate folder with a Terraform configuration for managing the state (a one-time setup). I'd call the bucket tfstate, but there are no restrictions whatsoever other than the required IAM permissions. Also, use a lifecycle block to prevent the bucket from being destroyed via Terraform, and enable versioning. If you would like state locking and consistency, also create a DynamoDB table with a primary key named LockID of type string (this is required) and specify its name when configuring your remote state.
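As a rough sketch of that one-time setup (resource and table names here are placeholders, and the inline acl/versioning syntax matches the older AWS provider style used in the other answer; AWS provider 4.x splits these into separate resources):

resource "aws_s3_bucket" "tfstate" {
  bucket = "tfstate-example"        # placeholder; bucket names must be globally unique
  acl    = "private"

  versioning {
    enabled = true                  # keep previous state versions for recovery
  }

  lifecycle {
    prevent_destroy = true          # stop Terraform from ever destroying the state bucket
  }
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-locks"  # placeholder; referenced from the backend config
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"           # the S3 backend requires exactly this key name

  attribute {
    name = "LockID"
    type = "S"                      # and it must be a string
  }
}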

Who should set up the S3 buckets/permissions/DynamoDB tables (for locking)? What would be best practice?

You will, inside the one-time TF state setup. I would keep it as part of the same repository that holds the other TF configurations rather than creating a new repository for a single TF file (which is very unlikely to ever change).
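Once that has been applied, every other configuration (cars, garage) simply points its backend at the resulting bucket and table - something along these lines, with names assumed from the sketch above:

terraform {
  backend "s3" {
    bucket         = "tfstate-example"        # the bucket from the one-time setup
    key            = "garage/garage.tfstate"  # per-repo state path
    region         = "us-east-1"              # assumed region
    dynamodb_table = "terraform-locks"        # enables state locking
    encrypt        = true
  }
}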

Should every repo (cars, garage) take care of checking whether the S3 bucket and DynamoDB tables exist and, if required, prepare the remote state resources for its own use?

No, don't do any checking - as I've mentioned, just set it up once as a separate, one-time step.

You will always have a chicken-and-egg problem here, as you've noticed, so bootstrap it by hand once; the state for a single S3 bucket is extremely minimal and the bucket can be imported into a state later if need be.


The goal here is to isolate that chicken-and-egg problem to a situation where the chicken won't be laying any more eggs, so to speak - you only need to set up the remote state once, and it rarely changes afterwards.

Still much better than manually creating buckets.
