I need to create an AWS security group rule using Terraform and then trigger a null resource.
For example:
- health_blue (aws_security_group_rule)
- wait_for_healthcheck (null_resource)
I have already tried adding a dependency between the security group rule and the null resource, but the null resource is not always triggered, or it is not triggered before the rule is created or destroyed.
The null resource needs to be triggered when the security group rule is created, amended, or destroyed.
Here is the config:
resource "aws_security_group_rule" "health_blue" {
count = data.external.blue_eks_cluster.result.cluster_status == "online" ? 1 : 0
description = "LB healthchecks to blue cluster"
cidr_blocks = values(data.aws_subnet.eks_gateway).*.cidr_block
from_port = 80
protocol = "tcp"
security_group_id = data.aws_security_group.blue_cluster_sg[0].id
to_port = 80
type = "ingress"
}
resource "null_resource" "wait_for_healhtcheck" {
triggers = {
value = aws_security_group_rule.health_blue[0].id
}
provisioner "local-exec" {
command = "echo 'Waiting for 25 seconds'; sleep 25"
}
depends_on = [aws_security_group_rule.health_blue]
}
Any tips or pointers would be much appreciated :~)
CodePudding user response:
With the configuration you showed here, null_resource.wait_for_healthcheck depends on aws_security_group_rule.health_blue. (You currently have that dependency specified redundantly: the reference to aws_security_group_rule.health_blue in triggers already establishes the dependency, so the depends_on argument is doing nothing here and I would suggest removing it.)
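For example, the null_resource can be reduced to just the triggers reference (a sketch based on your config, with depends_on removed):

resource "null_resource" "wait_for_healthcheck" {
  # The reference to the rule inside triggers already creates the
  # dependency edge, so no explicit depends_on is needed.
  triggers = {
    value = aws_security_group_rule.health_blue[0].id
  }

  provisioner "local-exec" {
    command = "echo 'Waiting for 25 seconds'; sleep 25"
  }
}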
The general meaning of a dependency in Terraform is that any actions taken against the dependent object must happen after any actions taken against its dependency. In your case, Terraform guarantees that, once it has created the plan, if there are any actions planned for both of these resources then the action planned for aws_security_group_rule.health_blue will always happen first during the apply step.
You are using the triggers argument of null_resource, which adds an additional behavior that's implemented by the hashicorp/null provider rather than by Terraform Core itself: during planning, null_resource will compare the triggers value from the prior state with the triggers value in the current configuration, and if they are different then it will propose the action of replacing the (purely conceptual) null_resource object.
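To illustrate that behavior in isolation, here is a minimal sketch using a hypothetical input variable example_value; changing the variable's value between applies makes the provider plan a replacement of the null_resource, which re-runs its provisioner:

variable "example_value" {
  type = string
}

resource "null_resource" "example" {
  # The provider compares this map against the one recorded in the
  # prior state; any difference forces replacement of this resource.
  triggers = {
    value = var.example_value
  }

  provisioner "local-exec" {
    command = "echo 'triggers changed, so this resource was replaced'"
  }
}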
Because triggers includes aws_security_group_rule.health_blue[0].id, triggers will take on a new value each time the security group rule is planned for creation or replacement. Therefore, taken altogether, your configuration declares the following:
- There are either zero or one aws_security_group_rule.health_blue objects.
- Each time the id attribute of the security group rule changes, null_resource.wait_for_healthcheck must be replaced.
- Whenever creating or replacing null_resource.wait_for_healthcheck, run the given provisioner.
- Therefore, if the id attribute of the security group rule changes there will always be both a plan to create (or replace) aws_security_group_rule.health_blue and a plan to replace null_resource.wait_for_healthcheck. The dependency rules mean that the creation of the security group rule will happen before the creation of the null_resource, and therefore before running the provisioner.
Your configuration as shown therefore seems to meet your requirements as stated. However, it does have one inconsistency which could potentially cause a problem: you haven't accounted for what ought to happen if there are zero instances of aws_security_group_rule.health_blue. In that case aws_security_group_rule.health_blue[0].id isn't valid, because there isn't a zeroth instance of that resource to refer to.
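For completeness, one way to make that reference valid while still keeping the null_resource is to mirror the rule's count, so that the null_resource also has zero instances when the rule does (a sketch based on your config):

resource "null_resource" "wait_for_healthcheck" {
  # There is one instance of this resource per instance of the rule,
  # so the index below is always valid when it is evaluated.
  count = length(aws_security_group_rule.health_blue)

  triggers = {
    value = aws_security_group_rule.health_blue[count.index].id
  }

  provisioner "local-exec" {
    command = "echo 'Waiting for 25 seconds'; sleep 25"
  }
}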
That said, I would suggest a simplification instead: the null_resource resource isn't really adding anything here that you couldn't already do with the aws_security_group_rule resource directly:
resource "aws_security_group_rule" "health_blue" {
count = data.external.blue_eks_cluster.result.cluster_status == "online" ? 1 : 0
description = "LB healthchecks to blue cluster"
cidr_blocks = values(data.aws_subnet.eks_gateway).*.cidr_block
from_port = 80
protocol = "tcp"
security_group_id = data.aws_security_group.blue_cluster_sg[0].id
to_port = 80
type = "ingress"
provisioner "local-exec" {
command = "echo 'Waiting for 25 seconds'; sleep 25"
}
}
Provisioners for a resource run as part of the creation action for that resource, and changing the configuration of a security group rule should cause it to be recreated, so with the above configuration the sleep 25 command will run each time an instance of the security group rule is created, without any need for a separate resource.
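As an aside, since your question also mentions destroy: a provisioner can additionally be declared with when = destroy, which runs it just before the resource is destroyed rather than after it is created. A sketch, reusing your placeholder command:

resource "aws_security_group_rule" "health_blue" {
  # ... arguments as above ...

  provisioner "local-exec" {
    when    = destroy
    command = "echo 'Rule is about to be destroyed'; sleep 25"
  }
}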
This solution does assume that you only need to run sleep 25 when creating (or replacing) the security group rule. The null_resource approach would be needed if the goal were to run a provisioner in response to updating some other resource, because in that case the null_resource resource would act as a sort of adapter, allowing an update of any value in triggers to be treated as if it were a replacement of the resource.
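For example, a sketch of that adapter pattern, assuming a hypothetical aws_security_group.example that carries a revision tag:

resource "null_resource" "react_to_update" {
  # Changing the tag is an in-place update of the security group, but
  # because it changes this triggers value the null_resource is
  # planned for replacement, so its provisioner runs again.
  triggers = {
    revision = aws_security_group.example.tags["revision"]
  }

  provisioner "local-exec" {
    command = "echo 'security group revision changed'"
  }
}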
CodePudding user response:
Check depends_on if you haven't already: set depends_on in any module or resource whose creation depends on another resource being created or fully functional.
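For example (a minimal sketch with hypothetical resource names):

resource "aws_security_group_rule" "example_rule" {
  # ... rule arguments ...
}

resource "null_resource" "example_wait" {
  # This resource is only created after the rule has been created
  # successfully, because of the explicit dependency below.
  depends_on = [aws_security_group_rule.example_rule]
}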