Kubernetes registry pod is not deployed

I want to create a pod that will be used as a Docker registry. I tried this configuration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 256Mi # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kubernetes1
  accessModes:
    - ReadWriteOnce # only 1 node will read/write on the path.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1-claim
spec: # should match the specs added in the PersistentVolume
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 256Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository-k8s
  labels:
    app: private-repository-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository-k8s
  template:
    metadata:
      labels:
        app: private-repository-k8s
    spec:
      volumes:
       - name: task-pv-storage
         persistentVolumeClaim:
           claimName: pv1-claim # the PVC created above; the PVC and the Deployment must be in the same namespace
      containers:
        - image: registry:2
          name: private-repository-k8s
          imagePullPolicy: IfNotPresent
          env:
          - name: REGISTRY_HTTP_TLS_CERTIFICATE
            value: "/certs/registry.crt"
          - name: REGISTRY_HTTP_TLS_KEY
            value: "/certs/registry.key"
          ports:
            - containerPort: 5000
          volumeMounts:
          - name: task-pv-storage
            mountPath: /opt/registry

Deploy:

kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
persistentvolumeclaim/pv1-claim created
deployment.apps/private-repository-k8s created
kubernetes@kubernetes1:/opt/registry$

But it's stuck in Pending:

kubernetes@kubernetes1:/opt/registry$ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
private-repository-k8s-6cc59cbcb-p8wn6   0/1     Pending   0          10m

I get this output:

kubernetes@kubernetes1:/opt/registry$ kubectl describe po private-repository-k8s
Name:             private-repository-k8s-6cc59cbcb-p8wn6
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=private-repository-k8s
                  pod-template-hash=6cc59cbcb
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/private-repository-k8s-6cc59cbcb
Containers:
  private-repository-k8s:
    Image:      registry:2
    Port:       5000/TCP
    Host Port:  0/TCP
    Environment:
      REGISTRY_HTTP_TLS_CERTIFICATE:  /certs/registry.crt
      REGISTRY_HTTP_TLS_KEY:          /certs/registry.key
    Mounts:
      /opt/registry from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5chp4 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv1-claim
    ReadOnly:   false
  kube-api-access-5chp4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  11m   default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) had volume node affinity conflict. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  11m   default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) had volume node affinity conflict. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

kubernetes@kubernetes1:/opt/registry$

Do you know what might be the issue and how to fix it?

EDIT:

kubernetes@kubernetes1:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
kubernetes@kubernetes1:~$

EDIT 2:

kubernetes@kubernetes1:~$ kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
kubernetes1   Ready    control-plane   3d11h   v1.25.0
kubernetes2   Ready    <none>          3d11h   v1.25.0
kubernetes@kubernetes1:~$ kubectl describe node kubernetes1
Name:               kubernetes1
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kubernetes1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.1.126/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 172.16.230.0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 31 Aug 2022 23:19:51 +0000
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  kubernetes1
  AcquireTime:     <unset>
  RenewTime:       Sun, 04 Sep 2022 10:55:43 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sun, 04 Sep 2022 10:47:33 +0000   Sun, 04 Sep 2022 10:47:33 +0000   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Sun, 04 Sep 2022 10:52:27 +0000   Wed, 31 Aug 2022 23:19:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sun, 04 Sep 2022 10:52:27 +0000   Wed, 31 Aug 2022 23:19:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sun, 04 Sep 2022 10:52:27 +0000   Wed, 31 Aug 2022 23:19:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sun, 04 Sep 2022 10:52:27 +0000   Wed, 31 Aug 2022 23:31:22 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.1.126
  Hostname:    kubernetes1
Capacity:
  cpu:                2
  ephemeral-storage:  19430032Ki
  hugepages-2Mi:      0
  memory:             4018212Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17906717462
  hugepages-2Mi:      0
  memory:             3915812Ki
  pods:               110
System Info:
  Machine ID:                 afe0726ff4054fc9af9dde6c42fd6879
  System UUID:                de433702-5bac-3843-9151-39a631ae0ea5
  Boot ID:                    edb1be5a-b382-48e8-b5bf-9a5096acc086
  Kernel Version:             5.15.0-46-generic
  OS Image:                   Ubuntu 22.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.8
  Kubelet Version:            v1.25.0
  Kube-Proxy Version:         v1.25.0
Non-terminated Pods:          (6 in total)
  Namespace                   Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                   ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-node-q4lxz                      250m (12%)    0 (0%)      0 (0%)           0 (0%)         3d11h
  kube-system                 etcd-kubernetes1                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3d11h
  kube-system                 kube-apiserver-kubernetes1             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3d11h
  kube-system                 kube-controller-manager-kubernetes1    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3d11h
  kube-system                 kube-proxy-97djs                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d11h
  kube-system                 kube-scheduler-kubernetes1             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3d11h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                900m (45%)  0 (0%)
  memory             100Mi (2%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                    From             Message
  ----     ------                   ----                   ----             -------
  Normal   Starting                 3d11h                  kube-proxy
  Normal   Starting                 8m26s                  kube-proxy
  Normal   Starting                 3d11h                  kube-proxy
  Normal   Starting                 3d11h                  kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      3d11h                  kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  3d11h                  kubelet          Node kubernetes1 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    3d11h                  kubelet          Node kubernetes1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     3d11h                  kubelet          Node kubernetes1 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  3d11h                  kubelet          Updated Node Allocatable limit across pods
  Normal   RegisteredNode           3d11h                  node-controller  Node kubernetes1 event: Registered Node kubernetes1 in Controller
  Normal   NodeAllocatableEnforced  3d11h                  kubelet          Updated Node Allocatable limit across pods
  Normal   Starting                 3d11h                  kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      3d11h                  kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  3d11h (x8 over 3d11h)  kubelet          Node kubernetes1 status is now: NodeHasSufficientMemory
  Normal   NodeHasSufficientPID     3d11h (x7 over 3d11h)  kubelet          Node kubernetes1 status is now: NodeHasSufficientPID
  Normal   NodeHasNoDiskPressure    3d11h (x7 over 3d11h)  kubelet          Node kubernetes1 status is now: NodeHasNoDiskPressure
  Normal   RegisteredNode           3d11h                  node-controller  Node kubernetes1 event: Registered Node kubernetes1 in Controller
  Normal   Starting                 8m36s                  kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      8m36s                  kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  8m36s (x8 over 8m36s)  kubelet          Node kubernetes1 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    8m36s (x7 over 8m36s)  kubelet          Node kubernetes1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     8m36s (x7 over 8m36s)  kubelet          Node kubernetes1 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  8m36s                  kubelet          Updated Node Allocatable limit across pods
  Normal   RegisteredNode           8m9s                   node-controller  Node kubernetes1 event: Registered Node kubernetes1 in Controller
kubernetes@kubernetes1:~$ kubectl describe node kubernetes2
Name:               kubernetes2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kubernetes2
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.1.138/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 172.16.249.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 31 Aug 2022 23:30:00 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kubernetes2
  AcquireTime:     <unset>
  RenewTime:       Sun, 04 Sep 2022 10:55:52 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sun, 04 Sep 2022 10:47:33 +0000   Sun, 04 Sep 2022 10:47:33 +0000   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Sun, 04 Sep 2022 10:52:27 +0000   Wed, 31 Aug 2022 23:30:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sun, 04 Sep 2022 10:52:27 +0000   Wed, 31 Aug 2022 23:30:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sun, 04 Sep 2022 10:52:27 +0000   Wed, 31 Aug 2022 23:30:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sun, 04 Sep 2022 10:52:27 +0000   Wed, 31 Aug 2022 23:31:22 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.1.138
  Hostname:    kubernetes2
Capacity:
  cpu:                2
  ephemeral-storage:  19430032Ki
  hugepages-2Mi:      0
  memory:             4018212Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17906717462
  hugepages-2Mi:      0
  memory:             3915812Ki
  pods:               110
System Info:
  Machine ID:                 afe0726ff4054fc9af9dde6c42fd6879
  System UUID:                02c9bf04-27ce-5f4d-8a2a-03b8d58572bf
  Boot ID:                    3c7604e9-a0ad-4aff-89b5-d525b508ff0e
  Kernel Version:             5.15.0-46-generic
  OS Image:                   Ubuntu 22.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.8
  Kubelet Version:            v1.25.0
  Kube-Proxy Version:         v1.25.0
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-kube-controllers-58dbc876ff-dgs77    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d11h
  kube-system                 calico-node-czmzc                           250m (12%)    0 (0%)      0 (0%)           0 (0%)         3d11h
  kube-system                 coredns-565d847f94-k94z2                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3d11h
  kube-system                 coredns-565d847f94-nt27m                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3d11h
  kube-system                 kube-proxy-d8bzs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d11h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                450m (22%)  0 (0%)
  memory             140Mi (3%)  340Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                    From             Message
  ----     ------                   ----                   ----             -------
  Normal   Starting                 3d11h                  kube-proxy
  Normal   Starting                 8m36s                  kube-proxy
  Normal   NodeHasSufficientMemory  3d11h (x8 over 3d11h)  kubelet          Node kubernetes2 status is now: NodeHasSufficientMemory
  Normal   RegisteredNode           3d11h                  node-controller  Node kubernetes2 event: Registered Node kubernetes2 in Controller
  Normal   Starting                 8m58s                  kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      8m58s                  kubelet          invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  8m58s                  kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasNoDiskPressure    8m52s (x7 over 8m58s)  kubelet          Node kubernetes2 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     8m52s (x7 over 8m58s)  kubelet          Node kubernetes2 status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  8m46s (x8 over 8m58s)  kubelet          Node kubernetes2 status is now: NodeHasSufficientMemory
  Normal   RegisteredNode           8m21s                  node-controller  Node kubernetes2 event: Registered Node kubernetes2 in Controller
kubernetes@kubernetes1:~$

EDIT 3:

kubernetes@kubernetes1:~$ kubectl taint nodes kubernetes1  node-role.kubernetes.io/control-plane:NoSchedule-
node/kubernetes1 untainted
kubernetes@kubernetes1:~$

EDIT 4:

kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
persistentvolumeclaim/pv1-claim created
deployment.apps/private-repository-k8s created

kubernetes@kubernetes1:/opt/registry$ kubectl get pods
NAME                                     READY   STATUS             RESTARTS     AGE
private-repository-k8s-6cc59cbcb-v289g   0/1     CrashLoopBackOff   1 (4s ago)   12s

kubernetes@kubernetes1:/opt/registry$ kubectl describe po private-repository-k8s-6cc59cbcb-v289g
Name:             private-repository-k8s-6cc59cbcb-v289g
Namespace:        default
Priority:         0
Service Account:  default
Node:             kubernetes1/192.168.1.126
Start Time:       Sun, 04 Sep 2022 11:08:12 +0000
Labels:           app=private-repository-k8s
                  pod-template-hash=6cc59cbcb
Annotations:      cni.projectcalico.org/containerID: 094bc57e3f966c9bb639f175adc319061ef0922d0d78528fe5ce25cf966f4e2b
                  cni.projectcalico.org/podIP: 172.16.230.1/32
                  cni.projectcalico.org/podIPs: 172.16.230.1/32
Status:           Running
IP:               172.16.230.1
IPs:
  IP:           172.16.230.1
Controlled By:  ReplicaSet/private-repository-k8s-6cc59cbcb
Containers:
  private-repository-k8s:
    Container ID:   containerd://307604dd9792dc7b3a88b27bbd508215bca07ed17e80077b72bd8679975cbba5
    Image:          registry:2
    Image ID:       docker.io/library/registry@sha256:83bb78d7b28f1ac99c68133af32c93e9a1c149bcd3cb6e683a3ee56e312f1c96
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 04 Sep 2022 11:09:50 +0000
      Finished:     Sun, 04 Sep 2022 11:09:50 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 04 Sep 2022 11:09:02 +0000
      Finished:     Sun, 04 Sep 2022 11:09:02 +0000
    Ready:          False
    Restart Count:  4
    Environment:
      REGISTRY_HTTP_TLS_CERTIFICATE:  /certs/registry.crt
      REGISTRY_HTTP_TLS_KEY:          /certs/registry.key
    Mounts:
      /opt/registry from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsx8j (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv1-claim
    ReadOnly:   false
  kube-api-access-dsx8j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  102s              default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Normal   Scheduled         100s              default-scheduler  Successfully assigned default/private-repository-k8s-6cc59cbcb-v289g to kubernetes1
  Normal   Pulling           101s              kubelet            Pulling image "registry:2"
  Normal   Pulled            96s               kubelet            Successfully pulled image "registry:2" in 5.005590173s
  Normal   Created           3s (x5 over 96s)  kubelet            Created container private-repository-k8s
  Normal   Started           3s (x5 over 95s)  kubelet            Started container private-repository-k8s
  Normal   Pulled            3s (x4 over 95s)  kubelet            Container image "registry:2" already present on machine
  Warning  BackOff           2s (x9 over 94s)  kubelet            Back-off restarting failed container

Answer:

It seems like your node has taints, hence the pods are not getting scheduled. The scheduler event spells it out: the PersistentVolume is pinned to kubernetes1 via nodeAffinity, kubernetes1 carries the control-plane taint, and kubernetes2 fails the volume node affinity check, so neither node is eligible. Can you try one of these commands to remove the taint from your node?

kubectl taint nodes <node-name> node-role.kubernetes.io/master-

or

kubectl taint nodes --all node-role.kubernetes.io/master-

To get the node name, use kubectl get nodes.
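
Once removed, the Taints field in kubectl describe node should read <none>. A quick way to verify (the grep pipe is just shell convenience, not a kubectl flag):

kubectl describe node kubernetes1 | grep Taints
# expected after removal:
# Taints:             <none>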

The user was able to get the pod scheduled after running the command below (on v1.25 the control-plane taint is node-role.kubernetes.io/control-plane rather than node-role.kubernetes.io/master, which is why the earlier commands reported "not found"):

kubectl taint nodes kubernetes1 node-role.kubernetes.io/control-plane:NoSchedule-

Now the pod is failing with CrashLoopBackOff, which implies it has been scheduled.
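
To see why the container keeps exiting, its logs are the first place to look; for a CrashLoopBackOff, the --previous flag prints the output of the last failed run:

kubectl logs private-repository-k8s-6cc59cbcb-v289g
kubectl logs private-repository-k8s-6cc59cbcb-v289g --previous

Judging from the manifest alone, a likely culprit is that REGISTRY_HTTP_TLS_CERTIFICATE and REGISTRY_HTTP_TLS_KEY point at files under /certs, but no volume is mounted at /certs, so the registry cannot read its certificate and exits immediately.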

Can you please check whether this pod gets scheduled and runs properly?

apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  namespace: test
spec:
  containers:
  - name: webserver
    image: nginx:alpine
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "200m"
      limits:
        memory: "128Mi"
        cpu: "350m"