I am trying to connect Terraform to a Kubernetes cluster, but the Terraform documentation is not clear about which client certificates to use for the TLS connection. Since I am new to both Kubernetes and Terraform, I could not figure it out:
provider "kubernetes" {
  host                   = "https://xxx.xxx.xxx.xxx"
  client_certificate     = "${file("~/.kube/client-cert.pem")}"
  client_key             = "${file("~/.kube/client-key.pem")}"
  cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
}
In /etc/kubernetes/pki there is more than one certificate and key (front-proxy-client, api-server-client, api-server-kubelet-client). Which one should I use to allow Terraform to connect to my cluster?
Edit: here is the Kubernetes version (output of kubectl version):
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
CodePudding user response:
Do you have kubectl configured on the client side where you run Terraform? In that case, you can use the same configuration you use for kubectl, like this:
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-context"
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "my-first-namespace"
  }
}
More details: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#statically-defined-credentials
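If you prefer the statically defined credentials described there, you don't necessarily need separate PEM files: an existing kubeconfig already embeds them base64-encoded. A sketch (untested, and assuming a single-cluster kubeconfig at the default path) that decodes them directly in Terraform:

```hcl
# Sketch: pull host, CA, and client credentials out of an existing kubeconfig.
# Assumes one cluster/user entry; the hyphenated kubeconfig keys must be
# accessed with index syntax in HCL.
locals {
  kubeconfig = yamldecode(file("~/.kube/config"))
}

provider "kubernetes" {
  host                   = local.kubeconfig.clusters[0].cluster.server
  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
  client_certificate     = base64decode(local.kubeconfig.users[0].user["client-certificate-data"])
  client_key             = base64decode(local.kubeconfig.users[0].user["client-key-data"])
}
```

That way the provider uses exactly the same client certificate kubectl uses, rather than any of the control-plane certificates under /etc/kubernetes/pki (those identify cluster components, not an admin user).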
If not, configuring kubectl is quite easy. You can refer to the docs for your flavor/version of Kubernetes. You may also reuse the kubeconfig file from an existing kubectl-enabled client. Make sure to handle the keys securely.
A few different methods are mentioned here: https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/use-case#configure-the-provider
If you are using a cloud provider flavor like EKS or GKE, you can explore the cloud-specific plugins too.
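For example, with EKS the provider can fetch short-lived tokens instead of static client certificates. A sketch (the cluster name "my-cluster" is a placeholder, and it assumes the aws CLI is installed and the AWS provider is configured):

```hcl
# Sketch: EKS auth via exec plugin instead of client certificates.
data "aws_eks_cluster" "example" {
  name = "my-cluster" # hypothetical cluster name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)

  # Ask the aws CLI for a fresh token on every Terraform run.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", "my-cluster"]
  }
}
```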
CodePudding user response:
I found out the reason. It was not related to Terraform. The problem was that when I set up my cluster I had used the option --apiserver-advertise-address=<MASTER_NODE_PRIVATE_IP> in the kubeadm init command; when I used --control-plane-endpoint=<MASTER_NODE_PUBLIC_IP> instead, it worked.
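For context: kubeadm adds the value of --control-plane-endpoint to the API server certificate's subject alternative names, so TLS verification succeeds for clients (like the Terraform provider) that connect via that address. A sketch of the working setup and a way to check the certificate (placeholders as in the answer above):

```shell
# Sketch: initialize the control plane with the public endpoint so it is
# included in the API server certificate's SANs.
kubeadm init --control-plane-endpoint=<MASTER_NODE_PUBLIC_IP>

# Verify which names/IPs the API server certificate actually covers:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
  | grep -A1 'Subject Alternative Name'
```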