Is there a way to wait for all pods from a k8s deployment to be stopped?

Time:02-17

When deleting a pod manually, kubectl delete waits for the pod to be deleted, and one can include a kubectl wait --for=... condition in a script to wait for the pod to be deleted.

I would like to perform the same wait, but when scaling a deployment down to replicas: 0.

In the deployment JSON, the available/unavailable replica counts don't include "terminating" pods, and, as expected, kubectl wait --for=delete deployment test-dep doesn't wait for pod termination but for deployment deletion.

So in my script I would like to do something like:

kubectl scale --replicas=0 deployment foo-bar
kubectl wait --for=deletion-of-pod-from-deployment=foo-bar

Is there a way to do that?

Remark: I would like the code to be as generic as possible, so no hard-coding the deployment's labels.

CodePudding user response:

The easiest way would be to use labels and issue kubectl wait based on them:

kubectl wait --for=delete pod --selector=<label>=<value>
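If the objection to labels is only the hard-coding, the selector can also be derived from the Deployment's own spec at runtime. A minimal sketch (the deployment name foo-bar is illustrative):

```shell
# Derive the label selector from the Deployment's spec instead of hard-coding it.
# jq turns {"app":"foo-bar"} into "app=foo-bar" (comma-joined if there are several labels).
selector=$(kubectl get deployment foo-bar -o=json \
  | jq -r '.spec.selector.matchLabels | to_entries | map("\(.key)=\(.value)") | join(",")')

kubectl scale --replicas=0 deployment foo-bar
kubectl wait --for=delete pod --selector="$selector" --timeout=60s
```

This keeps the script generic across deployments while still using the simple label-based wait.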

But since you don't want to rely on labels at all, you can use the script below:

#!/bin/bash
# Wait for all Pods of the Deployment named in $1 to be deleted.

# UID of the Deployment passed as the first argument
deployment_uid=$(kubectl get deployments "${1}" -o=jsonpath='{.metadata.uid}')

# UIDs of the ReplicaSets owned by that Deployment
rs_uids=$(kubectl get rs -o=json | jq -r '.items[] | select(.metadata.ownerReferences[].uid=='\"${deployment_uid}\"') | .metadata.uid')

PODS=""

# Collect "pod/<name>" entries for every Pod owned by one of those ReplicaSets
for i in $rs_uids; do
    PODS="$PODS $(kubectl get pods -o=json | jq -r '.items[] | select(.metadata.ownerReferences[].uid=='\"$i\"') | "pod/" + .metadata.name')"
done

# Normalize whitespace so the emptiness check is reliable
PODS=$(echo $PODS)

[ -z "$PODS" ] && echo "Pods not found" || kubectl wait --for=delete --timeout=-1s ${PODS}

This uses the Deployment name (as the first argument) to get ownerReferences UIDs, chaining down to the Pod names.
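To illustrate the ownership filter in isolation, here is the jq step run against a trimmed-down sample of kubectl get rs -o=json output (the UIDs are made up):

```shell
# Trimmed sample of `kubectl get rs -o=json`; only the fields the filter reads
rs_json='{"items":[{"metadata":{"uid":"rs-1","ownerReferences":[{"uid":"dep-1"}]}},{"metadata":{"uid":"rs-2","ownerReferences":[{"uid":"dep-other"}]}}]}'
deployment_uid="dep-1"

# Keep only the ReplicaSets whose ownerReferences point at the Deployment's UID
echo "$rs_json" | jq -r '.items[] | select(.metadata.ownerReferences[].uid=='\"${deployment_uid}\"') | .metadata.uid'
# prints: rs-1
```

The Pod loop applies the same select() pattern a second time, with each ReplicaSet UID in place of the Deployment UID.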

It's much more complicated, and more prone to failure, than just using labels.
