"Pod is blocking scale down because it has local storage"

Time: 02-11

I have a Kubernetes cluster in GCP with the Docker container runtime. I am trying to change the container runtime from Docker to containerd. The following steps show what I did:

  1. Added a new node pool (nodes with containerd)
  2. Drained the old nodes

Once I performed the above steps, I got the warning message "Pod is blocking scale down because it has local storage".
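The two steps above can be sketched as shell commands. This is a minimal sketch, assuming hypothetical names `my-cluster`, `new-containerd-pool`, and `old-docker-pool`; adjust them to your environment.

```shell
# 1. Create a new node pool that uses the containerd image type
gcloud container node-pools create new-containerd-pool \
  --cluster=my-cluster \
  --image-type=COS_CONTAINERD

# 2. Cordon and drain each node in the old pool so workloads
#    get rescheduled onto the new containerd nodes
for node in $(kubectl get nodes \
    -l cloud.google.com/gke-nodepool=old-docker-pool -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
```

Note that `--delete-emptydir-data` is needed precisely because pods with `emptyDir` volumes count as having local storage; on older kubectl versions the flag was called `--delete-local-data`.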

CodePudding user response:

You need to add the following annotation to the Pod so that the cluster autoscaler considers it safe to evict:

"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"
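As a quick sketch, the annotation can be applied to a running pod with `kubectl annotate` (the pod name `test-single-pod` here is taken from the NoScaleDown example quoted below and is just a placeholder):

```shell
# Mark the pod as safe for the cluster autoscaler to evict
kubectl annotate pod test-single-pod \
  cluster-autoscaler.kubernetes.io/safe-to-evict="true"
```

Keep in mind that if the pod is managed by a controller (Deployment, StatefulSet, etc.), an annotation applied this way is lost when the pod is recreated; in that case add it to the controller's pod template instead.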

You can read more at : https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler-visibility#cluster-not-scalingdown

NoScaleDown example: You found a noScaleDown event that contains a per-node reason for your node. The message ID is "no.scale.down.node.pod.has.local.storage" and there is a single parameter: "test-single-pod". After consulting the list of error messages, you discover this means that the "Pod is blocking scale down because it requests local storage". You consult the Kubernetes Cluster Autoscaler FAQ and find out that the solution is to add a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation to the Pod. After applying the annotation, cluster autoscaler scales down the cluster correctly.

CodePudding user response:

Types of pods that prevent Cluster Autoscaler from removing a node:

  • Pods with restrictive PodDisruptionBudget.

  • Kube-system pods that:

    - are not run on the node by default, or
    - don't have a pod disruption budget set, or their PDB is too restrictive (since CA 0.6).
    
  • Pods that are not backed by a controller object (so not created by a Deployment, ReplicaSet, Job, StatefulSet, etc.). *

  • Pods with local storage. *

  • Pods that cannot be moved elsewhere due to various constraints (lack of resources, non-matching node selectors or affinity, matching anti-affinity, etc.).

  • Pods that have the following annotation set:

"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"

So, your Pod is blocking scale down because it requests local storage. You should add the annotation (supported in CA 1.0.3 or later) to the Pod:

"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"

For more information refer to the Cluster Autoscaler FAQs.
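For a controller-managed workload, the annotation belongs in the pod template so it survives pod recreation. A minimal sketch, assuming a hypothetical Deployment named `example-app`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        # Allows the cluster autoscaler to evict this pod during scale down
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    spec:
      containers:
        - name: app
          image: nginx
```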

You can also refer to the following Stack Overflow case, which is similar to yours.

  • Related