Application pod status remains Pending


I have deployed an application, but the pod always stays in the Pending state.

$ kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
server1       Ready    control-plane   8d    v1.24.9
server2       Ready    worker1         8d    v1.24.9
server3       Ready    worker2         8d    v1.24.9
server4       Ready    worker3         8d    v1.24.9
$ kubectl get all -n jenkins
NAME                          READY   STATUS    RESTARTS   AGE
pod/jenkins-6dc9f97c7-ttp64   0/1     Pending   0          7m42s
$ kubectl describe pods jenkins-6dc9f97c7-ttp64 -n jenkins

Events:
Type     Reason            Age    From               Message
----     ------            ----   ----               -------
Warning  FailedScheduling  5m42s  default-scheduler  0/4 nodes are available: 3 node(s) had volume node affinity conflict, 4 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.

The event history confirms that FailedScheduling is the reason: all 4 nodes fail the Pod's node affinity/selector, and 3 of them (the workers) additionally have a volume node affinity conflict.

My deployment.yml forces the pod to be assigned to the master node:

    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists

Since Kubernetes version 1.20, node-role.kubernetes.io/master is deprecated in favor of node-role.kubernetes.io/control-plane, so I updated the spec as below. However, the pod is still shown as Pending.

    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
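A quick way to see which selector can actually match is to list nodes by each label (a sketch, assuming kubectl access to this cluster):

    # old label: on this 1.24 cluster it should match no nodes
    kubectl get nodes -l node-role.kubernetes.io/master=
    # new label: this should list server1
    kubectl get nodes -l node-role.kubernetes.io/control-plane=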

On the PersistentVolume.yml side, I have the content below.

...
.....
..........
  local:
    path: /ksdata/apps/nodejs/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - server1
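Because of this nodeAffinity, any pod that mounts a PVC bound to this PV can only run on server1, which is exactly why the three workers report a volume node affinity conflict. For reference, a complete local PV of this shape would look roughly as follows; the metadata, capacity, and storage class are assumptions, since the file is truncated above:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-pv                  # assumed name
    spec:
      capacity:
        storage: 10Gi                   # assumed size
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage   # assumed class
      local:
        path: /ksdata/apps/nodejs/
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - server1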

Label details:

$ kubectl get nodes --show-labels
NAME      STATUS   ROLES           AGE   VERSION   LABELS
server1   Ready    control-plane   9d    v1.24.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
server2   Ready    worker1         9d    v1.24.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/worker1=worker
server3   Ready    worker2         9d    v1.24.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux,node-role.kubernetes.io/worker2=worker
server4   Ready    worker3         9d    v1.24.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux,node-role.kubernetes.io/worker3=worker
$ kubectl describe node | egrep -i taint
Taints:             key=value:NoSchedule
Taints:             <none>
Taints:             <none>
Taints:             <none>
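The describe output above doesn't say which node owns which taint; a jsonpath query (a sketch) pairs node names with their taints and should show the key=value:NoSchedule taint sitting on server1:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'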

CodePudding user response:

You have 4 nodes in the cluster; one is generally the master node, on which application pods are not scheduled by default, so 3 worker nodes are left.

On the worker nodes, meanwhile, your deployment's node affinity/selector does not match, so the pod cannot be scheduled there either and is stuck in a Pending state.

Also check the PVC; most likely it is not getting bound or created.
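For example (a sketch; the claim name is assumed, use kubectl get pvc to find the real one):

    kubectl get pvc -n jenkins
    kubectl describe pvc jenkins-pvc -n jenkins   # assumed claim name

A PVC stuck in Pending, or one bound to the PV that is pinned to server1, would both line up with the scheduling events above.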

Update

Remove the taint from the master/control-plane node:

kubectl taint node server1 key=value:NoSchedule-
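Then verify it is gone (a sketch); the Taints line for server1 should now read <none>:

    kubectl describe node server1 | grep -i taints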

Set a nodeSelector pinning the pod onto the master node:

  spec:
    nodeSelector:
      kubernetes.io/hostname: "server1"

# If the taint is still there and was not removed, also add a toleration; otherwise the nodeSelector alone is fine.

    tolerations:
    - key: "key"            # must match the actual taint on server1: key=value:NoSchedule
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"
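After updating the spec, restart the deployment and watch where the pod lands (the deployment name jenkins is assumed from the pod name):

    kubectl -n jenkins rollout restart deployment jenkins
    kubectl -n jenkins get pods -o wide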