Terraform kubectl provider error: failed to created kubernetes rest client for read of resource


I have a Terraform config that (among other resources) creates a Google Kubernetes Engine cluster on Google Cloud. I'm using the kubectl provider to add YAML manifests for a ManagedCertificate and a FrontendConfig, since these are not part of the kubernetes or google providers. This works as expected when applying the Terraform config from my local machine, but when I try to execute it in our CI pipeline, I get the following error for both of the kubectl_manifest resources:

Error: failed to create kubernetes rest client for read of resource: Get "http://localhost/api?timeout=32s": dial tcp 127.0.0.1:80: connect: connection refused

Since I'm only facing this issue during CI, my first guess is that the service account is missing the right scopes, but as far as I can tell, all scopes are present. Any suggestions and ideas are greatly appreciated!
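
For reference, the two kubectl_manifest resources look roughly like this (file names are illustrative):

resource "kubectl_manifest" "managed_certificate" {
  yaml_body = file("${path.module}/managed-certificate.yaml")
}

resource "kubectl_manifest" "frontend_config" {
  yaml_body = file("${path.module}/frontend-config.yaml")
}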

CodePudding user response:

The provider is trying to connect to localhost, which means you either need to provide a proper kube-config file or configure the cluster connection dynamically in Terraform.

You didn't mention how you're setting up authentication, but here are two ways:

Poor way

resource "null_resource" "deploy-app" {
  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
    kubectl apply -f myapp.yaml ./temp/kube-config.yaml;
    EOT
  }
 # will run always, its bad
  triggers = {
    always_run = "${timestamp()}"
  }
  depends_on = [
    local_file.kube_config
  ]
}
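
This works, but the manifests applied through local-exec are never tracked in Terraform state, so there is no drift detection and nothing is cleaned up on destroy, which is why this is the poor way.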


resource "local_file" "kube_config" {
  content  = var.my_kube_config # pass the config file from ci variable
  filename = "${path.module}/temp/kube-config.yaml"
}

Proper way

data "google_container_cluster" "cluster" {
  name = "your_cluster_name"
}
data "google_client_config" "current" {
}
  provider "kubernetes" {
    host  = data.google_container_cluster.cluster.endpoint
    token = data.google_client_config.current.access_token
    cluster_ca_certificate = base64decode(
      data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
    )
  }

data "kubectl_file_documents" "app_yaml" {
  content = file("myapp.yaml")
}

resource "kubectl_manifest" "app_installer" {
  for_each  = data.kubectl_file_documents.app_yaml.manifests
  yaml_body = each.value
}
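
Note that kubectl_manifest belongs to the kubectl provider, not the kubernetes provider, so the kubectl provider needs the same connection details. A minimal sketch, reusing the data sources above:

provider "kubectl" {
  host  = "https://${data.google_container_cluster.cluster.endpoint}"
  token = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
  )
  load_config_file = false
}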

If the cluster is created in the same module, the provider configuration should instead reference the resource directly:

provider "kubernetes" {
  load_config_file = "false"
  host     = google_container_cluster.my_cluster.endpoint
  client_certificate     = google_container_cluster.my_cluster.master_auth.0.client_certificate
  client_key             = google_container_cluster.my_cluster.master_auth.0.client_key
  cluster_ca_certificate = google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate
}

CodePudding user response:

Fixed the issue by adding load_config_file = false to the kubectl provider config. My provider config now looks like this:

data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${endpoint from GKE}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(CA certificate from GKE)
}

provider "kubectl" {
  host                   = "https://${endpoint from GKE}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(CA certificate from GKE)
  load_config_file       = false
}
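
For completeness, here is one way those placeholders might be filled in when the cluster is managed in the same configuration (the resource name google_container_cluster.my_cluster is an assumption):

data "google_client_config" "default" {}

provider "kubectl" {
  host                   = "https://${google_container_cluster.my_cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
  load_config_file       = false
}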