My end goal is to create a three-node kubeadm Kubernetes cluster with Terraform and/or Ansible.
As of now I am provisioning three identical instances with Terraform.
Then, with remote-exec and inline, I install the packages that all instances share.
Now I want to install certain packages on only one of those three instances, and I am trying to achieve this using local-exec.
I am struggling to connect to only one instance with local-exec. I know how to connect to all of them and execute a playbook against all three instances, but the end goal is to connect to one instance only.
The code snippet:
resource "aws_instance" "r100c96" {
  count         = 3
  ami           = "ami-0b9064170e32bde34"
  instance_type = "t2.micro"
  key_name      = local.key_name

  tags = {
    Name = "terra-ans${count.index}"
  }

  provisioner "remote-exec" {
    connection {
      host        = self.public_ip
      type        = "ssh"
      user        = local.ssh_user
      private_key = file(local.private_key_path)
    }

    inline = ["sudo hostnamectl set-hostname test"]
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i ${element((aws_instance.r100c96.*.public_ip),0)}, --private-key ${local.private_key_path} helm.yaml"
  }
  ...
}
Thanks,
CodePudding user response:
I think instead of the `*` splat, use `count.index`; on each loop iteration it will pass that specific VM's IP.
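A minimal sketch of that suggestion, assuming the provisioner stays inside the `aws_instance.r100c96` resource: `self.public_ip` (equivalent to indexing the IP list with `count.index`) gives each of the three provisioner runs its own instance's address, and a guard on `count.index` limits the playbook to a single instance:

```hcl
provisioner "local-exec" {
  # Each of the three provisioner runs sees its own instance via self /
  # count.index, so the playbook receives that specific VM's IP.
  # The ternary restricts helm.yaml to the first instance only.
  command = count.index == 0 ? "ansible-playbook -i ${self.public_ip}, --private-key ${local.private_key_path} helm.yaml" : "echo 'skipping helm on instance ${count.index}'"
}
```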
Also, there are multiple ways to provision a VM using Ansible. Consider whether you can dynamically build your hosts file and provision the instances in parallel instead of one at a time.
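One way to sketch the dynamic-inventory idea, assuming the `hashicorp/local` provider is available and a hypothetical `site.yaml` playbook that targets all hosts: render every public IP into a single inventory file, then run the playbook once against it, letting Ansible handle the hosts in parallel.

```hcl
# Write all three public IPs into a flat Ansible inventory file
# (one host per line is a valid INI-style inventory).
resource "local_file" "inventory" {
  filename = "${path.module}/hosts.ini"
  content  = join("\n", aws_instance.r100c96[*].public_ip)
}

# Run one playbook against the whole generated inventory.
resource "null_resource" "provision_all" {
  provisioner "local-exec" {
    command = "ansible-playbook -i ${local_file.inventory.filename} --private-key ${local.private_key_path} site.yaml"
  }
}
```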
CodePudding user response:
You can use a `null_resource` and run your provisioner for the selected instance only, once all three instances in `aws_instance.r100c96` are provisioned.
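A minimal sketch of that approach, adapted to the `ansible-playbook` call from the question: the `null_resource` targets the first instance by index, and `depends_on` makes it wait until all three instances exist.

```hcl
resource "null_resource" "helm_on_first" {
  # Wait for every instance in the count, not just the one we target.
  depends_on = [aws_instance.r100c96]

  # Re-run the provisioner if the target instance is ever replaced.
  triggers = {
    instance_id = aws_instance.r100c96[0].id
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i ${aws_instance.r100c96[0].public_ip}, --private-key ${local.private_key_path} helm.yaml"
  }
}
```

This also avoids the self-reference problem of calling `aws_instance.r100c96.*.public_ip` from inside the resource's own block.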