How long does it take for Kubernetes to detect and delete excess nodes


I am running a Kubernetes cluster on AWS EKS and I set up the Cluster Autoscaler. I tested it and it worked: when the number of pods on a node exceeded 110, new nodes were automatically added to the cluster and the pending pods entered the Running state.

After that, I deleted the deployment. It's been about 10 minutes and I see that all the new nodes created by the autoscaler are still there and in Ready state!

How long does it take for Kubernetes to delete them automatically? Does it scale the cluster down automatically at all?

CodePudding user response:

Scaling down is a slower process than scaling up. If you are using the Cluster Autoscaler to scale the nodes in EKS, the default scan interval is 10 seconds, but by default a node is only removed after it has been unneeded for 10 minutes (scale-down-unneeded-time), and scale-down is also paused for 10 minutes after the most recent scale-up (scale-down-delay-after-add). So roughly 10 minutes of an idle node is expected before it disappears.
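For reference, here is a minimal sketch of how those timing flags appear in the cluster-autoscaler container args. The image version, the cluster name in the auto-discovery tag (my-cluster), and the exact flag values are placeholders; adjust them to your install.

    # Excerpt of the cluster-autoscaler container spec showing the scale-down timing flags
    spec:
      containers:
      - name: cluster-autoscaler
        image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.27.2   # example version
        command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster
        - --scan-interval=10s                  # how often the cluster is re-evaluated
        - --scale-down-unneeded-time=10m       # a node must be unneeded this long before removal
        - --scale-down-delay-after-add=10m     # no scale-down for this long after a scale-up
        - --scale-down-utilization-threshold=0.5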

You can check the status of the autoscaler and its scale-down decisions through the status ConfigMap it writes in the kube-system namespace.
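For example (the label selector in the second command assumes the common app=cluster-autoscaler label from the example deployment; yours may differ):

    # Inspect the autoscaler's status ConfigMap (written automatically in kube-system)
    kubectl -n kube-system get configmap cluster-autoscaler-status -o yaml

    # The autoscaler logs also explain why a node is or is not a scale-down candidate
    kubectl -n kube-system logs -l app=cluster-autoscaler --tail=100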

There is also a possibility that the new nodes cannot be scaled down because something running on them blocks eviction. Common reasons include:

  • A system pod (e.g. from kube-system) is running on the node, so EKS is not able to drain it.
  • A PDB (PodDisruptionBudget) set for one of your deployments would be violated by evicting its pods.
  • A pod on the node has the annotation "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" (see the sketch below).
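As an illustration, this is roughly what the safe-to-evict annotation looks like on a Deployment's pod template, and what a PDB that can block a node drain might look like. The names my-app and my-app-pdb and the image are placeholders.

    # Pod template annotation that tells the Cluster Autoscaler never to evict this pod
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                     # placeholder name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
        spec:
          containers:
          - name: my-app
            image: nginx:1.25          # placeholder image
    ---
    # A PodDisruptionBudget that blocks eviction if it would drop below minAvailable
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: my-app-pdb
    spec:
      minAvailable: 2
      selector:
        matchLabels:
          app: my-app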

Read more about EKS autoscaling: https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html
