I created a Job and reran it several times. When I delete the Job, only the latest pod is deleted.
How can I delete all of these pods?
CodePudding user response:
For CronJob
You can use successfulJobsHistoryLimit to manage the pod count; if you set it to 0, the pod is removed as soon as it completes its execution successfully.
successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 0
Read more at: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits
GCP ref: https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs#history-limit
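Putting the two limits together, a minimal CronJob sketch could look like this (the name, schedule, and container are made up for illustration):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-demo            # hypothetical name
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 0 # remove successful Jobs (and their pods) immediately
  failedJobsHistoryLimit: 0     # same for failed runs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: main
              image: busybox
              command: ["sh", "-c", "echo done"]
```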
For Job
If you are using a Job rather than a CronJob, you can use ttlSecondsAfterFinished
- it deletes the Job's pods automatically a set number of seconds after the Job finishes; set it to suit your needs, keeping some buffer.
ttlSecondsAfterFinished: 100
will solve your issue.
Example: https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically
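As a sketch, a Job spec with the TTL set might look like this (name and container are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo                # hypothetical name
spec:
  ttlSecondsAfterFinished: 100  # Job and its pods are deleted 100s after completion
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox
          command: ["sh", "-c", "echo done"]
```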
Extra:
As a one-time fix, you can also delete those pods with a single command, matching on a label set on the pods or on the Job:
kubectl delete pods -l <labels> -n <namespace>
CodePudding user response:
You can create a label, or you may already have one that matches the targeted group of pods, and then delete them all based on that label as follows:
kubectl delete pods -l app=my-app
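For the label to match, it has to be present on the pods themselves; for a Job, that means setting it in the pod template. A sketch, with app=my-app matching the command above and the other names made up:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: labeled-job   # hypothetical name
spec:
  template:
    metadata:
      labels:
        app: my-app   # kubectl delete pods -l app=my-app matches this
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox
          command: ["sh", "-c", "echo done"]
```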
CodePudding user response:
I assume you have a number of pods from the same image, you want to clean them up, and then have only one pod running? If so, you need to delete the Deployment:
kubectl -n <namespace> get deploy
kubectl -n <namespace> delete deploy <deployname>
Or you can scale to 0 replicas:
kubectl scale deploy <deploy-name> --replicas=0
which will kill all of these pods. Then apply the manifest anew, so it creates one pod (assuming you are not scaling to more than one active pod):
kubectl -n <namespace> apply -f <manifest-for-that-deploy.yaml>
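For reference, a minimal sketch of what such a Deployment manifest could contain — all names and the image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app        # hypothetical name
spec:
  replicas: 1         # one active pod after re-applying
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: main
          image: my-app:latest   # hypothetical image
```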