Kubernetes pods are stuck after scale up AWS. Multi-Attach error for volume

Time: 03-10

I am experiencing some issues when scaling the EC2 instances of my k8s cluster down or up. Sometimes new nodes come up and the old ones are terminated. The k8s version is 1.22.

Sometimes some pods get stuck in the ContainerCreating state. When I describe such a pod, I see something like this:

Warning FailedAttachVolume 29m attachdetach-controller Multi-Attach error for volume
Warning FailedMount 33s (x13 over 27m) kubelet....

I checked that the PV exists, and the PVC exists as well. However, on the PVC I see the annotation volume.kubernetes.io/selected-node, and its value refers to a node that no longer exists.
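
For reference, one way to check this (assuming a PVC named my-pvc in the default namespace; adjust the name and namespace to yours) is to print the annotation and compare its value against the nodes that still exist:

kubectl get pvc my-pvc -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}'
kubectl get nodes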

When I edit the PVC and delete this annotation, everything continues to work. Another thing: it doesn't happen every time, and I don't understand why.

I tried to search for information and found a couple of links:

https://github.com/kubernetes/kubernetes/issues/100485 and https://github.com/kubernetes/kubernetes/issues/89953, but I am not sure I properly understand them.

Could you please help me out with this?

CodePudding user response:

Well, as you found out in "volume.kubernetes.io/selected-node never cleared for non-existent nodes on PVC without PVs" (#100485), this is a known issue with no available fix yet.

Until the issue is fixed, the workaround is to remove the volume.kubernetes.io/selected-node annotation from the affected PVC manually.
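
For example, assuming the affected PVC is named my-pvc in namespace my-ns (placeholder names), the annotation can be removed with kubectl annotate, where the trailing dash means "remove this annotation":

kubectl annotate pvc my-pvc -n my-ns volume.kubernetes.io/selected-node-

If you want to spot affected claims before pods get stuck, a rough sketch is to list every PVC's selected-node annotation and compare it against the current node list:

# List PVCs with their selected-node annotation (empty if not set)
kubectl get pvc -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,SELECTED-NODE:.metadata.annotations.volume\.kubernetes\.io/selected-node'
# Nodes that currently exist in the cluster
kubectl get nodes -o name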
