I have a deployment (called moon2) with 2 replicas (let's say moon2-pod1 and moon2-pod2), deployed on Azure Kubernetes Service (AKS) with the cluster autoscaler enabled (min=2, max=10 nodes).
When the cluster scales down, sometimes the worker nodes hosting the pods of this deployment get killed, and those pods are then rescheduled on other workers.
My question: how can I avoid the killing of pods moon2-pod1 and moon2-pod2? I.e., can I tell AKS: "when you scale down, do not delete the worker(s) hosting these 2 pods"? If the answer is yes, how can I do that? Or is there another way?
Thank you in advance for your help!
CodePudding user response:
I think a Pod Disruption Budget (PDB) is probably what you're looking for. You can set a PDB for the pods in your deployment with maxUnavailable: 0, which means "do not evict any pods of this deployment".
Please check this doc for more details and how to set it: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
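As a minimal sketch, such a PDB could look like this. It assumes the deployment's pod template carries the label app: moon2 and uses the hypothetical name moon2-pdb; adjust the selector to match whatever labels your pods actually have:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: moon2-pdb
spec:
  maxUnavailable: 0          # never allow a voluntary eviction of these pods
  selector:
    matchLabels:
      app: moon2             # must match the pod labels of the moon2 deployment
```

With this in place, the cluster autoscaler will not drain a node whose removal would evict either of the two pods, so it picks other nodes to scale down.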
One thing to note: with AKS, with a PDB like that, you will need to remove the PDB temporarily before you do a version upgrade, node-image update, etc. Otherwise those operations get stuck, because nodes running such pods cannot be drained for the upgrade/update.
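For example, a workflow around an upgrade might look like the following sketch. The PDB name moon2-pdb, the manifest filename, the resource group, cluster name, and version are all placeholders to be replaced with your own values:

```
# Temporarily delete the blocking PDB so nodes can be drained
kubectl delete pdb moon2-pdb

# Run the AKS upgrade (fill in your own resource group, cluster, and version)
az aks upgrade --resource-group <myResourceGroup> --name <myAKSCluster> --kubernetes-version <version>

# Re-apply the PDB once the upgrade completes
kubectl apply -f moon2-pdb.yaml
```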