I had an old Terraform configuration that worked perfectly. In short, I had a static website application that I needed to deploy using CloudFront & S3. Now I need to deploy another application the same way, but on a different sub-domain.
To make it easier to help, you can check the full source code here:
Old Terraform configuration: https://github.com/tal-rofe/tf-old
New Terraform configuration: https://github.com/tal-rofe/tf-new
So, my domain is example.io, and in the old configuration I had only a static application deployed on app.example.com. But since I need another application, it is going to be deployed on docs.example.com.
To avoid a lot of code duplication, I decided to create a local module for deploying a generic application onto CloudFront & S3.
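The local module itself is not shown here, but based on the description it presumably wraps the public CloudFront module along these lines (the ./modules/static-app path, input names, and contents are assumptions, not taken from the repos):

# terraform/core/modules/static-app/main.tf (assumed layout)
module "cloudfront" {
  source  = "terraform-aws-modules/cloudfront/aws"
  version = "3.1.0"

  aliases = [var.domain_name]   # assumed input of the wrapper module
  # ... S3 origin, ACM certificate, default cache behavior, etc. ...
}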
After doing so, it seems like terraform plan and terraform apply succeed (not really, as no resources were changed at all!):
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Not only are there no changes, but I also get an old output:
cloudfront_distribution_id = "blabla"
eks_kubeconfig = <sensitive>
This cloudfront_distribution_id output was the correct output for the old configuration. I expect to get these new outputs, as configured:
output "frontend_cloudfront_distribution_id" {
description = "The distribution ID of deployed Cloudfront frontend"
value = module.frontend-static.cloudfront_distribution_id
}
output "docs_cloudfront_distribution_id" {
description = "The distribution ID of deployed Cloudfront docs"
value = module.docs-static.cloudfront_distribution_id
}
output "eks_kubeconfig" {
description = "EKS Kubeconfig content"
value = module.eks-kubeconfig.kubeconfig
sensitive = true
}
I'm using GitHub actions to apply my Terraform configuration with these steps:
- name: Terraform setup
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_wrapper: false
- name: Terraform core init
  env:
    TERRAFORM_BACKEND_S3_BUCKET: ${{ secrets.TERRAFORM_BACKEND_S3_BUCKET }}
    TERRAFORM_BACKEND_DYNAMODB_TABLE: ${{ secrets.TERRAFORM_BACKEND_DYNAMODB_TABLE }}
  run: |
    terraform -chdir="./terraform/core" init \
      -backend-config="bucket=$TERRAFORM_BACKEND_S3_BUCKET" \
      -backend-config="dynamodb_table=$TERRAFORM_BACKEND_DYNAMODB_TABLE" \
      -backend-config="region=$AWS_REGION"
- name: Terraform core plan
  run: terraform -chdir="./terraform/core" plan -no-color -out state.tfplan
- name: Terraform core apply
  run: terraform -chdir="./terraform/core" apply state.tfplan
I used the same steps in my old & new Terraform configurations.
I want to re-use the logic written in my static-app module twice. So basically, I want to be able to create a static application just by using the module I've configured.
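If the wrapper module works as intended, the root module in terraform/core would call it twice, once per sub-domain, roughly like this (the module path and input name are assumptions; the module names frontend-static and docs-static are taken from the root outputs above):

module "frontend-static" {
  source      = "./modules/static-app"
  domain_name = "app.example.com"   # assumed input name
}

module "docs-static" {
  source      = "./modules/static-app"
  domain_name = "docs.example.com"  # assumed input name
}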
CodePudding user response:
You cannot define the outputs in the root module and expect it to work, because you are already using a different module inside your static-app module (i.e., you are nesting modules). Since you are using the public Terraform module there (denoted by source = "terraform-aws-modules/cloudfront/aws"), you are limited to the outputs that module provides and hence can only define those outputs at the module level, not at the root level. I see you are referencing the EKS output and that it works, but the difference is that that particular module is not nested and is called directly (from your repo):
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.5.1"
.
.
.
}
The way I would suggest fixing this is to call the CloudFront module directly from the root module (i.e., core in your example):
module "frontend-static" {
source = "terraform-aws-modules/cloudfront/aws"
version = "3.1.0"
... rest of the configuration ...
}
module "docs-static" {
source = "terraform-aws-modules/cloudfront/aws"
version = "3.1.0"
... rest of the configuration ...
}
The outputs you currently have defined in your repo with the new configuration (tf-new) should work out-of-the-box with this change. Alternatively, you could write your own module, and then you control which outputs it exposes.
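For that second approach, the key step is that the wrapper module has to re-export the nested module's output before the root-level outputs can reference it. A minimal sketch, assuming the nested CloudFront module is named cloudfront inside static-app and the wrapper lives at ./modules/static-app (both assumptions, not taken from the repos):

# terraform/core/modules/static-app/outputs.tf (hypothetical sketch)
output "cloudfront_distribution_id" {
  description = "Distribution ID of the CloudFront distribution created by this module"
  value       = module.cloudfront.cloudfront_distribution_id
}

With an output like this declared in the wrapper, the root-level references module.frontend-static.cloudfront_distribution_id and module.docs-static.cloudfront_distribution_id from tf-new can resolve.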