Terraform kubectl provider manifest to specific namespace

Time: 07-29

I am learning how to translate kubectl deployments to Terraform. I am currently facing issues getting services to work as intended with the Terraform kubectl provider once I specify a namespace.

I have confirmed that the Terraform script works when doing the equivalent kubectl apply to the default namespace.

What is the proper methodology in Terraform, using the kubectl provider, to do the equivalent of apply -n namespace?

The two different approaches I have tried are:

resource "kubectl_manifest" "example" {
  override_namespace = kubernetes_namespace_v1.namespace.metadata[0].name
  yaml_body          = file("${path.cwd}/deploy.yaml")
}
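For the override_namespace approach to work, the referenced namespace resource has to exist in the same configuration. A minimal sketch, with illustrative resource names, assuming the gavinbunney/kubectl provider alongside the kubernetes provider:

```hcl
# Hypothetical namespace resource that override_namespace references.
resource "kubernetes_namespace_v1" "namespace" {
  metadata {
    name = "namespace"
  }
}

resource "kubectl_manifest" "example" {
  # Overrides any namespace set inside the YAML document itself.
  override_namespace = kubernetes_namespace_v1.namespace.metadata[0].name
  yaml_body          = file("${path.cwd}/deploy.yaml")
}
```

Referencing the namespace resource (rather than hard-coding the string) also gives Terraform an implicit dependency, so the namespace is created before the manifest is applied.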

And also:

resource "kubectl_manifest" "example" {
  yaml_body = file("${path.cwd}/deploy.yaml")
}

with adding the namespace to the deploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  namespace: namespace
...
---
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: namespace
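One thing to watch: kubectl_manifest applies a single manifest document, so a multi-document file like this one (Deployment plus Service separated by ---) needs to be split into separate resources. Recent versions of the gavinbunney/kubectl provider ship a kubectl_file_documents data source for this; a sketch, assuming that provider:

```hcl
# Split the multi-document YAML into individual manifests.
data "kubectl_file_documents" "deploy" {
  content = file("${path.cwd}/deploy.yaml")
}

# One kubectl_manifest resource per document (Deployment, Service, ...).
resource "kubectl_manifest" "example" {
  for_each  = data.kubectl_file_documents.deploy.manifests
  yaml_body = each.value
}
```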

Then when I try to confirm that the service is functioning as intended via:

kubectl logs example-6878fd468-9vgkm

Error from server (NotFound): pods "example-6878fd468-9vgkm" not found

CodePudding user response:

Your Terraform configuration is fine. You need to pass the -n switch to your kubectl logs command as well, so that it looks for the pod in the right namespace.

kubectl logs example-6878fd468-9vgkm -n namespace
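Since the pod name suffix is generated by the ReplicaSet, it can help to list the pods in the target namespace first and then tail the logs. For example, with the namespace name taken from the manifest above:

```shell
# List pods in the target namespace to find the generated pod name.
kubectl get pods -n namespace

# Then fetch the logs from the right namespace.
kubectl logs example-6878fd468-9vgkm -n namespace
```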