I have a Terraform variable that is a map of objects, as given below:
variable "MyProj" {
type = map(object({
name = string
type = string
}))
default = {
"Proj1" = {
name = "Proj1"
programme = "java"
},
"Proj2" = {
name = "Proj2"
programme = "npm"
}
}
}
I also have a null_resource that runs a shell script whenever a change is detected in the above map of objects:
resource "null_resource" "nullr" {
for_each = var.MyProj
provisioner "local-exec" {
command = "bash /home/myscript.sh ${each.value.name} ${each.value.progrmme}
}
}
My intention is that the null_resource should run whenever a change is detected in the above map.
This works as expected when I keep the Terraform state on my local machine. But when I keep the state in an Azure blob container using the azurerm backend, the null_resource does not run. What change should I make to the configuration so that it works even when I use a remote backend for the Terraform state?
CodePudding user response:
You're almost there: you have a script, a null resource, and a well-defined change that you want to trigger the null resource.
First, build your trigger string. If you expect it to be particularly large, the md5 function can help, but having it be human-readable makes it easier to debug:
locals {
  script_trigger = join(", ", [
    for projKey, projObj in var.MyProj :
    "${projKey}(${join(", ", [
      for key, value in projObj : "${key}=${value}"
    ])})"
  ])
}
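With the default value shown in the question, and given that Terraform iterates map keys and object attributes in lexical order, this should produce a string along these lines:

Proj1(name=Proj1, programme=java), Proj2(name=Proj2, programme=npm)

If that string ever grows unwieldy, you could wrap it, e.g. md5(local.script_trigger), at the cost of readability.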
Then, add that string to your triggers:
resource "null_resource" "nullr" {
...
triggers = { project_object = locals.script_trigger }
}
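With that in place, any change to var.MyProj changes the trigger string, which forces Terraform to plan a replacement of the null_resource on the next apply and therefore re-runs its provisioner. This comparison happens between the stored state and the configuration, so it behaves the same whether the state lives locally or in a remote backend.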
CodePudding user response:
I want to start here by saying that it seems like what you are doing here might be outside of Terraform's typical scope: usually tools like npm belong to the "build" step of a build/deploy pipeline, and Terraform isn't really intended or designed to deal with that family of use-cases. Since it's a general programmable tool you can of course make it do things outside of its scope if you are creative, but it usually comes with caveats and limitations.
I'm going to attempt to answer your question as you wrote it but I'd also suggest thinking more philosophically about whether it would be better to solve whatever problem this is intended to solve using a separate process that runs before Terraform in a pipeline, or if perhaps the entire problem would be better solved by some other system that's explicitly designed to support build and deploy processes.
The null_resource resource type is a special "escape hatch" in Terraform that intentionally does nothing, so that you can attach provisioners to it that wouldn't otherwise naturally belong to a resource. The intended purpose of provisioners (although as a last resort) is to implement additional steps that a particular object needs carried out in order to become fully operational, such as writing some needed data into a virtual machine's filesystem.
Normal resource types have inherent rules, defined by the remote system, about what sorts of changes can be applied in-place and what sorts of changes require re-creating the object. Because null_resource represents no particular remote object type, it doesn't have any such inherent rules, but it does have the more artificial concept of a triggers argument, which does nothing except force replacing the object whenever its value differs from the previous run.
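As a minimal illustration of that mechanism (the resource name and trigger value here are arbitrary):

resource "null_resource" "example" {
  # Changing "1" to any other value forces Terraform to plan a
  # destroy-and-recreate of this resource on the next apply, which
  # is what causes any attached provisioners to run again.
  triggers = {
    version = "1"
  }
}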
In order to force your null_resource to be replaced, and therefore to force its associated provisioners to re-run, you will need to populate the triggers argument with values that normally stay unchanged but that should, if changed, cause the provisioner to re-run.
In your case it seems like the values within your var.MyProj might be a reasonable thing to use as triggers, though because triggers is a map of strings we'd need to encode each value to a string first. JSON encoding could be a reasonable answer in that case:
resource "null_resource" "nullr" {
for_each = var.MyProj
triggers = {
settings = jsonencode(each.value)
}
provisioner "local-exec" {
command = "bash /home/myscript.sh ${each.value.name} ${each.value.progrmme}
}
}
Because the triggers for each instance of the resource refer only to the current each.value, adding a new entry to var.MyProj should not affect any of the existing instances of the resource, and thus should not re-run any provisioners. However, if you edit one of the existing entries then its instance will be replaced and will re-run with the new settings. For example, with the defaults above, changing Proj2's programme from "npm" to "yarn" would replace only null_resource.nullr["Proj2"].
If you expect to need to re-run a particular resource's provisioner even though neither its name nor its programme has changed, then you may wish to add a third attribute to the element type of var.MyProj: an integer or some special string identifier that you change each time the script needs to re-run. Changing the value of that attribute forces a replacement even though the provisioner will run the same command as before, as sketched below.
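For example, a sketch of that approach using a hypothetical revision attribute (the attribute name is arbitrary; because the settings trigger above JSON-encodes the whole object, bumping revision is enough to force a replacement):

variable "MyProj" {
  type = map(object({
    name      = string
    programme = string
    revision  = number # hypothetical attribute: increment to force a re-run
  }))
  default = {
    "Proj1" = {
      name      = "Proj1"
      programme = "java"
      revision  = 1
    }
  }
}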
Another situation to consider is what should happen if the content of this myscript.sh changes. If you want to re-run the script each time it changes, then you can add another triggers entry to capture a checksum of the file, which will therefore change each time the file changes:
resource "null_resource" "nullr" {
for_each = var.MyProj
triggers = {
settings = jsonencode(each.value)
script_checksum = filesha256("/home/myscript.sh")
}
provisioner "local-exec" {
command = "bash /home/myscript.sh ${each.value.name} ${each.value.progrmme}
}
}
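As a side note, if the script lives alongside your Terraform configuration rather than at a fixed absolute path, you could refer to it as "${path.module}/myscript.sh" in both the checksum and the command, which keeps the configuration portable between machines; adjust to wherever your script actually lives.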