Error: Waiting for rollout to finish: 3 replicas wanted; 0 replicas Ready

I've been trying to apply a Terraform project, but I keep receiving this weird error that I cannot find a solution to. The deployment section:

############# the deployment section #############

 resource "kubernetes_deployment" "dep_apps" {
  for_each = var.apps
  metadata {
    name      = each.value.appName
    namespace = each.value.appName
    labels = {
      name = each.value.labels.name
      tier = each.value.labels.tier
    }
  }
 spec {
    replicas = 3
strategy {
               type = "RollingUpdate"

               rolling_update {
                   max_surge       = "25%" 
                   max_unavailable = "25%" 
                }
            }
    selector {
      match_labels = {
        name = each.value.labels.name
        tier = each.value.labels.tier
      }
    }
    template {
  metadata {
    name      = each.value.appName
    namespace = each.value.appName
    labels = {
      name = each.value.labels.name
      tier = each.value.labels.tier
    }
  }

  spec {
    container {
      name  = each.value.appName
      image = each.value.image
      resources {
        limits = {
          cpu    = "500m"
          memory = "512Mi"
        }
        requests = {
          cpu    = "200m"
          memory = "256Mi"
        }
      }
    }
  }
    }
  }
}
############# the replica section #############
resource "kubernetes_horizontal_pod_autoscaler_v1" "autoscaler" {
  for_each = var.apps
  metadata {
    name = "${each.value.appName}-as"
    namespace = each.value.appName
    labels = {
      name = each.value.labels.name
      tier = each.value.labels.tier
    }
  }
  spec {
    scale_target_ref {
      api_version = "apps/v1"
      kind = "ReplicaSet"
      name = each.value.appName
    }
    min_replicas = 1
    max_replicas = 10
  }
}
############# the loadbalancer section #############
resource "kubernetes_service" "load_balancer" {
  for_each = toset([for app in var.apps : app if app.appName == ["app1", "app2"]])
  metadata {
    name = "${each.value.appName}-lb"
    labels = {
      name = each.value.labels.name
      tier = each.value.labels.tier
    }
  }
  spec {
    selector = {
      app = each.value.appName
    }
    port {
      name = "http"
      port = 80
      target_port = 8080
    }
    type = "LoadBalancer"
  }
}

the variables section:

variable "apps" {
  type = map(object({
    appName    = string
    team        = string
    labels      = map(string)
    annotations = map(string)
    data = map(string)
    image       = string

  }))
  default = {
    "app1" = {
      appName = "app1"
      team = "frontend"
      image = "nxinx"

      labels = {
        "name"  = "stream-frontend"
        "tier"  = "web"
        "owner" = "product"
      }

      annotations = {
        "serviceClass"       = "web-frontend"
        "loadBalancer_and_class" = "external"
      }
      data = {
        "aclName" = "acl_frontend"
        "ingress"  = "stream-frontend"
        "egress"   = "0.0.0.0/0"
        "port"     = "8080"
        "protocol" = "TCP"
      }


    }
    "app2" = {
      appName = "app2"
      team = "backend"
      image = "nginx:dev"

      labels = {
        "name"  = "stream-frontend"
        "tier"  = "web"
        "owner" = "product"
      }

      annotations = {
        "serviceClass"       = "web-frontend"
        "loadBalancer_and_class" = "external"
      }
      data = {
        "aclName"   = "acl_backend"
        "ingress"    = "stream-backend"
        "egress"     = "0.0.0.0/0"
        "port"       = "8080"
        "protocol" = "TCP"
    }
    }

    "app3" = {
      appName = "app3"
      team = "database"
      image = "Mongo"
      labels = {
        "name"  = "stream-database"
        "tier"  = "shared"
        "owner" = "product"
      }

      annotations = {
        "serviceClass"       = "disabled"
        "loadBalancer_and_class" = "disabled"
      }
      data = {
        "aclName"   = "acl_database"
        "ingress"  = "stream-database"
        "egress"   = "172.17.0.0/24"
        "port"     = "27017"
        "protocol" = "TCP"
      }
    }
    }
  }

I've also been trying to add more, maybe CPU utilization, but it didn't work (before I added the block below, the error was on all 3 apps - app1, app2, and app3):

type = "RollingUpdate"
  rolling_update {
  max_surge       = "25%" 
  max_unavailable = "25%" 
 }
}

Some errors I've received, along with kubectl data:

kubectl get deployments ->
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
app1   0/3     3            0           34m

kubectl rollout status deployment app1 ->
error: deployment "app1" exceeded its progress deadline
 
kubectl get namespace ->
NAME              STATUS   AGE  
app1              Active   6h16m
app2              Active   6h16m
app3              Active   6h16m
default           Active   4d15h
kube-node-lease   Active   4d15h
kube-public       Active   4d15h
kube-system       Active   4d15h


kubectl get pods  ->
NAME                    READY   STATUS             RESTARTS   AGE
app1-7f8657489c-579lm   0/1     Pending            0          37m
app1-7f8657489c-jjppq   0/1     ImagePullBackOff   0          37m
app1-7f8657489c-lv49l   0/1     ImagePullBackOff   0          37m

kubectl get pods --namespace=app2 ->
NAME                    READY   STATUS             RESTARTS   AGE
app2-68b6b59584-86dt8   0/1     ImagePullBackOff   0          38m
app2-68b6b59584-8kr2p   0/1     Pending            0          38m
app2-68b6b59584-jzzxt   0/1     Pending            0          38m

kubectl get pods --namespace=app3 ->
NAME                    READY   STATUS             RESTARTS   AGE
app3-5f589dc88d-gwn2n   0/1     InvalidImageName   0          39m
app3-5f589dc88d-pzhzw   0/1     InvalidImageName   0          39m
app3-5f589dc88d-vx452   0/1     InvalidImageName   0          39m

CodePudding user response:

The errors are obvious: ImagePullBackOff and InvalidImageName. Unless you copy-pasted and then edited the Terraform variable values for this post, you have a typo in the image name of all three apps:

  • app1: image = "nxinx"
  • app2: image = "nginx:dev"
  • app3: image = "Mongo"

The first one should sort itself out once you fix nxinx to nginx. The second one will not work as-is, because there is no dev tag for the nginx image. Last but not least, the Mongo image should be mongo: image names must be lowercase, which is why that Pod reports InvalidImageName instead of a pull failure.

It is important to understand that you cannot use arbitrary image names. If you do not specify a container registry, most runtimes default to Docker Hub, so you have to follow the naming conventions of the images published there; a bare nginx, for example, is typically resolved as docker.io/library/nginx:latest.

It is also worth learning how to debug this in Kubernetes yourself. Describing any one of the Pods (though you should check all of them) would have shown you the image error in its events. For example:

kubectl describe pod app1-7f8657489c-579lm -n app1
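For reference, a sketch of corrected image values, assuming the stock Docker Hub nginx and mongo images are what you want (nginx:stable is just one example of a tag that actually exists; pick whatever tag you need):

image = "nginx"        # app1: fixes the "nxinx" typo
image = "nginx:stable" # app2: swap the nonexistent "dev" tag for a real one
image = "mongo"        # app3: must be lowercase, otherwise InvalidImageName

After updating the variables, re-run terraform apply and check the rollout with kubectl rollout status deployment app1 -n app1; the Deployment should start progressing once the images can be pulled.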