Unable to attach or mount volumes: timed out waiting for the condition


One of the pods in my local cluster won't start because I get the following error: Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-data-volume nats-initdb-volume kube-api-access-5b5cz]: timed out waiting for the condition.

$ kubectl get pods
NAME                                        READY   STATUS              RESTARTS   AGE
deployment-nats-db-5f5f9fd6d5-wrcpk         0/1     ContainerCreating   0          19m
deployment-nats-server-57bbc76d44-tz5zj     1/1     Running             0          19m

$ kubectl describe pods deployment-nats-db-5f5f9fd6d5-wrcpk
Name:           deployment-nats-db-5f5f9fd6d5-wrcpk
Namespace:      default
Priority:       0
Node:           docker-desktop/192.168.65.4
Start Time:     Tue, 12 Oct 2021 21:42:23 +0600
Labels:         app=nats-db
                pod-template-hash=5f5f9fd6d5
                skaffold.dev/run-id=1f5421ae-6e0a-44d6-aa09-706a1d1aa011
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/deployment-nats-db-5f5f9fd6d5
Containers:
  nats-db:
    Container ID:
    Image:          postgres:latest
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  256Mi
    Requests:
      cpu:     250m
      memory:  128Mi
    Environment Variables from:
      nats-db-secrets  Secret  Optional: false
    Environment:       <none>
    Mounts:
      /docker-entrypoint-initdb.d from nats-initdb-volume (rw)
      /var/lib/postgresql/data from nats-data-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5b5cz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nats-data-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nats-pvc
    ReadOnly:   false
  nats-initdb-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nats-pvc
    ReadOnly:   false
  kube-api-access-5b5cz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                 From               Message
  ----     ------       ----                ----               -------
  Normal   Scheduled    19m                 default-scheduler  Successfully assigned default/deployment-nats-db-5f5f9fd6d5-wrcpk to docker-desktop
  Warning  FailedMount  4m9s (x2 over 17m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-initdb-volume kube-api-access-5b5cz nats-data-volume]: timed out waiting for the condition
  Warning  FailedMount  112s (x6 over 15m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-data-volume nats-initdb-volume kube-api-access-5b5cz]: timed out waiting for the condition

I don't know where the issue is. The PVs and PVCs all seem to have been applied successfully.

$ kubectl get pv,pvc
NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS             REASON   AGE
persistentvolume/nats-pv        50Mi       RWO            Retain           Bound    default/nats-pvc        local-hostpath-storage            21m

NAME                                  STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS             AGE
persistentvolumeclaim/nats-pvc        Bound    nats-pv        50Mi       RWO            local-hostpath-storage   21m

Following are the configs for the StorageClass, PV, PVC and Deployment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nats-pv
spec:
  capacity:
    storage: 50Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: local-hostpath-storage
  hostPath:
    path: /mnt/wsl/nats-pv
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nats-pvc
spec:
  volumeName: nats-pv
  resources:
    requests:
      storage: 50Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: local-hostpath-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nats-db
spec:
  selector:
    matchLabels:
      app: nats-db
  template:
    metadata:
      labels:
        app: nats-db
    spec:
      containers:
        - name: nats-db
          image: postgres:latest
          envFrom:
            - secretRef:
                name: nats-db-secrets
          volumeMounts:
            - name: nats-data-volume
              mountPath: /var/lib/postgresql/data
            - name: nats-initdb-volume
              mountPath: /docker-entrypoint-initdb.d
          resources:
            requests:
              cpu: 250m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 256Mi
      volumes:
        - name: nats-data-volume
          persistentVolumeClaim:
            claimName: nats-pvc
        - name: nats-initdb-volume
          persistentVolumeClaim:
            claimName: nats-pvc

The pod starts successfully if I comment out the volumeMounts and volumes keys. The problem seems to be specific to the /var/lib/postgresql/data path: if I remove nats-data-volume and keep nats-initdb-volume, the pod starts successfully.

Can anyone help me figure out exactly where I'm going wrong? Thanks in advance and best regards.

CodePudding user response:

...if I remove nats-data-volume and keep nats-initdb-volume, it's started successfully.

The same PVC cannot be mounted twice in this way; that is the condition that cannot be met.
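If you do want to keep the PVC, one workaround is to declare the claim as a single volume and mount it at both paths. This is only a sketch: the subPath fields and the pgdata/initdb subdirectory names are examples, used here so the two mount points don't end up sharing the same directory on the PV.

      containers:
        - name: nats-db
          volumeMounts:
            - name: nats-data-volume
              mountPath: /var/lib/postgresql/data
              subPath: pgdata        # example subdirectory on the PV
            - name: nats-data-volume
              mountPath: /docker-entrypoint-initdb.d
              subPath: initdb        # example subdirectory on the PV
      volumes:
        - name: nats-data-volume     # single volume entry, referenced by both mounts
          persistentVolumeClaim:
            claimName: nats-pvc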

Looking at your spec, it seems you don't mind which worker node runs your postgres pod. In that case you don't need a PV/PVC at all; you can mount the hostPath directly, like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nats-db
spec:
  selector:
    matchLabels:
      app: nats-db
  template:
    metadata:
      labels:
        app: nats-db
    spec:
      containers:
        - name: nats-db
          image: postgres:latest
          envFrom:
            - secretRef:
                name: nats-db-secrets
          volumeMounts:
            - name: nats-data-volume
              mountPath: /var/lib/postgresql/data
            - name: nats-data-volume
              mountPath: /docker-entrypoint-initdb.d
          resources:
            requests:
              cpu: 250m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 256Mi
      volumes:
        - name: nats-data-volume
          hostPath:
            path: /mnt/wsl/nats-pv
            type: DirectoryOrCreate
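
After applying the updated Deployment, the pod should start without further FailedMount events. For example (the manifest file name is just an assumption, adjust it to your setup):

$ kubectl apply -f deployment-nats-db.yaml   # file name is an example
$ kubectl get pods -l app=nats-db
$ kubectl describe pods -l app=nats-db       # Events should no longer show FailedMount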