How can I get a subsequent Terraform deployment to use a VPC created in a sibling module?


I'm trying to understand how the following Terraform project structure can let me stand up a VPC that is then used by subsequent apply operations in sibling directories. For example,

├── dev
│   └── main.tf
├── test
│   └── main.tf
└── shared
    ├── outputs.tf
    ├── variables.tf
    └── main.tf

In shared/main.tf I define

module "vpc" {
  source               = "terraform-aws-modules/vpc/aws"
  version              = "2.77.0"
  name                 = "vpc"
  cidr                 = "10.0.0.0/16"
  azs                  = ["us-west-2a"]
}

And in dev/main.tf I define (among other things)

module "shared" {
  source = "../shared"
}

resource "aws_subnet" "dev-subnet-pub" {
  vpc_id      = module.shared.vpc_id
  cidr_block  = "10.0.1.0/28"
  tags = {
    Name        = "dev-subnet-pub"
    Terraform   = "true"
    Environment = "dev"
  }
}

Both deployments succeed, but if I terraform apply in /shared and then again in /dev, I end up with two VPCs with the same CIDR, and the most recently created one contains the rest of my /dev resources. The expected behavior is that the /dev resources are deployed into the VPC created by the first terraform apply (i.e., I want to do the same from /prod and /test sibling directories as well).

Must I use Terraform workspaces?

CodePudding user response:

The usual solution is to deploy your setup in two stages. First you deploy /shared, which creates the VPC. Then you pass the VPC ID into /dev as an input variable during terraform apply. This avoids the duplicate VPC: each root directory keeps its own state, so the module "shared" block in /dev was instantiating the VPC module a second time rather than referencing the one already deployed. The variable can have a default value of null, and if no VPC ID is provided, /dev creates its own VPC.
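A minimal sketch of that pattern, assuming Terraform 0.15+ (the variable name, the count-based fallback, and the placeholder VPC ID are illustrative, not from the answer above):

# dev/main.tf (sketch) -- accept an existing VPC ID, or create a VPC when none is given.
variable "vpc_id" {
  type    = string
  default = null
}

# Create a VPC only when no ID was passed in.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.77.0"
  count   = var.vpc_id == null ? 1 : 0
  name    = "vpc"
  cidr    = "10.0.0.0/16"
  azs     = ["us-west-2a"]
}

locals {
  # Use the supplied ID if present; otherwise the ID of the VPC created above.
  # one() returns the single element of the splat list, or null if it is empty.
  vpc_id = var.vpc_id != null ? var.vpc_id : one(module.vpc[*].vpc_id)
}

resource "aws_subnet" "dev-subnet-pub" {
  vpc_id     = local.vpc_id
  cidr_block = "10.0.1.0/28"
}

You would then run something like terraform apply -var="vpc_id=vpc-0123456789abcdef0" in /dev, where the ID (a placeholder here) is whatever /shared reported for its VPC.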

Same goes for /prod or any other type of environment you want to create.

CodePudding user response:

This solution ended up working for my purposes.

After you terraform apply in /shared, define the following in dev/main.tf:

data "terraform_remote_state" "vpc" {
  backend = "local"
    config = {
    path = "../shared/terraform.tfstate"
  }
}

resource "aws_subnet" "dev-subnet-pub" {
  vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id
  cidr_block  = "10.0.1.0/28"
  tags = {
    Name        = "dev-subnet-pub"
    Terraform   = "true"
    Environment = "dev"
  }
}
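For the remote state lookup to return vpc_id, /shared has to export it; the data source can only read root-level outputs of the shared configuration. A minimal shared/outputs.tf, assuming the vpc_id output exposed by the VPC module:

# shared/outputs.tf -- export the VPC ID so sibling configurations
# can read it through terraform_remote_state.
output "vpc_id" {
  value = module.vpc.vpc_id
}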

NOTE: There are a few variations on this question, notably this answer, which makes reference to local backends.

I'm going to answer my own question and leave it to someone else to tell me if it's too similar, but I think the environment/account context is distinct. Also important to note here is a caveat from the Terraform documentation about local backend solutions to this question:

We do not recommend using these options in new systems, even if you are running Terraform in automation. Instead, select a different backend which supports remote state and configure it within your root module, which ensures that everyone working on your configuration will automatically retrieve and store state in the correct shared location without any special command line options.

Additionally, S3 seems to be a commonly used backend solution.
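For illustration, moving this setup onto S3 might look roughly like the following; the bucket name, key, and region are placeholders rather than values from this question:

# shared/main.tf -- keep the shared state in S3 instead of on local disk.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"        # placeholder
    key    = "shared/terraform.tfstate"  # placeholder
    region = "us-west-2"
  }
}

# dev/main.tf -- point the remote state lookup at the same S3 object.
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "shared/terraform.tfstate"
    region = "us-west-2"
  }
}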
