I have a CronJob that cleans up old Jobs, because my Kubernetes cluster is on an older version and cannot use ttlSecondsAfterFinished. How can I fetch the namespaces that have this job deployed and pass the namespace names dynamically, instead of repeating the same command several times?
This is my cronjob:
kind: CronJob
metadata:
  name: jobs-cleanup
spec:
  schedule: "*/30 * * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cleanup-operator
          containers:
          - name: jobs-cleanup
            image: google.io/test/job:1.0
            args:
            - -c
            - |
              for i in $(seq 9); do
                kubectl get jobs --sort-by=.metadata.creationTimestamp -n dev$i | grep "job-init" | cut -d' ' -f1 | tac | tail -n 2 | xargs -I % kubectl delete jobs % -n dev$i
              done
              kubectl get jobs --sort-by=.metadata.creationTimestamp -n stg1 | grep "fusionauth-job" | cut -d' ' -f1 | tac | tail -n 2 | xargs -I % kubectl delete jobs % -n stg1
              kubectl get jobs --sort-by=.metadata.creationTimestamp -n pt1 | grep "fusionauth-job" | cut -d' ' -f1 | tac | tail -n 2 | xargs -I % kubectl delete jobs % -n pt1
              kubectl get jobs --sort-by=.metadata.creationTimestamp -n sit1 | grep "fusionauth-job" | cut -d' ' -f1 | tac | tail -n 2 | xargs -I % kubectl delete jobs % -n sit1
            command:
            - /bin/sh
          restartPolicy: Never
          imagePullSecrets:
          - name: secret
CodePudding user response:
You can use the Downward API to propagate the Pod's namespace into an environment variable. That env var can then be used inside your job/pod.
kind: CronJob
metadata:
  name: jobs-cleanup
spec:
  schedule: "*/30 * * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cleanup-operator
          containers:
          - name: jobs-cleanup
            image: google.io/test/job:1.0
            env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            args:
            - -c
            - |
              for i in $(seq 9); do
                kubectl get jobs --sort-by=.metadata.creationTimestamp -n dev$i | grep "job-init" | cut -d' ' -f1 | tac | tail -n 2 | xargs -I % kubectl delete jobs % -n dev$i
              done
              kubectl get jobs --sort-by=.metadata.creationTimestamp -n stg1 | grep "fusionauth-job" | cut -d' ' -f1 | tac | tail -n 2 | xargs -I % kubectl delete jobs % -n stg1
              kubectl get jobs --sort-by=.metadata.creationTimestamp -n pt1 | grep "fusionauth-job" | cut -d' ' -f1 | tac | tail -n 2 | xargs -I % kubectl delete jobs % -n pt1
              kubectl get jobs --sort-by=.metadata.creationTimestamp -n sit1 | grep "fusionauth-job" | cut -d' ' -f1 | tac | tail -n 2 | xargs -I % kubectl delete jobs % -n sit1
            command:
            - /bin/sh
          restartPolicy: Never
          imagePullSecrets:
          - name: secret
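Once the env var is available, the script inside the container can use it instead of a hard-coded namespace. A minimal sketch, assuming the cleanup CronJob is deployed into each namespace it should clean and MY_POD_NAMESPACE is set as above:
kubectl get jobs --sort-by=.metadata.creationTimestamp -n "$MY_POD_NAMESPACE" \
  | grep "job-init" \
  | cut -d' ' -f1 | tac | tail -n 2 \
  | xargs -I % kubectl delete jobs % -n "$MY_POD_NAMESPACE"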
If your cleanup job is running in another namespace, you can also use the --all-namespaces (shorthand -A) flag to get the jobs from all namespaces, and combine it with custom columns and awk. So in your case:
kubectl get jobs -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace -A | grep "fusionauth-job" | awk '{print "kubectl delete job "$1" -n "$2}' | sh
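If you would rather not pipe generated commands into sh, roughly the same thing can be done with xargs. A sketch under the same assumption, i.e. every job whose name contains "fusionauth-job" should be deleted:
kubectl get jobs -A --no-headers -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace \
  | grep "fusionauth-job" \
  | xargs -n 2 sh -c 'kubectl delete job "$0" -n "$1"'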
CodePudding user response:
A more flexible alternative to what you're doing now is to store the list of job names you want to delete in a list/array, then iterate over that list and run the command for each name.
Below is a simpler (IMO) version of your command that uses kubectl's -o=jsonpath support to specify the search criteria.
# The list of jobs you want to delete from any/all namespaces.
jobs_list="job-init fusionauth-job"

for job_name in ${jobs_list}; do
  kubectl get jobs -A --sort-by=.metadata.creationTimestamp \
    -o=jsonpath="{range .items[?(@.metadata.name == '${job_name}')]}{.metadata.namespace} {.metadata.name}{'\n'}{end}" \
    | while read namespace job; do kubectl delete job ${job} -n ${namespace}; done
done