k8s CronJob loop on list of pods

Time:02-17

I want to loop over the pods in a specific namespace, but the trick is to do it in a CronJob. Is it possible inline?

kubectl get pods -n foo

The trick here is that after getting the list of pods, I need to loop over them and delete each one, one by one, with a timeout of 15 seconds between deletions. Is it possible to do this in a CronJob?

apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart
  namespace: foo
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-exterminator
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.22.3
              command:
                - 'kubectl'
                - 'get'
                - 'pods'
                - '--namespace=foo'

The above manifest works, but it gets complicated once you want to run a loop. How can I do it inline?

CodePudding user response:

Here is something similar I did to clean up RabbitMQ instances once our Helm chart was deleted (the hyperkube image can run kubectl commands):

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "rabbitmq-cluster-operator.fullname" . }}-delete-instances
  namespace: {{ .Release.Namespace }}
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
  labels:
    app: {{ template "rabbitmq-cluster-operator.name" . }}
    release: {{ .Release.Name }}
spec:
  template:
    metadata:
      name: {{ template "rabbitmq-cluster-operator.fullname" . }}-delete-instances
      labels:
        app: {{ template "rabbitmq-cluster-operator.name" . }}
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      restartPolicy: Never
      serviceAccountName: {{ template "rabbitmq-cluster-operator.serviceAccountName" . }}
      containers:
      - name: kubectl
        image: "{{ .Values.global.hyperkube.image.repository }}/{{ .Values.global.hyperkube.image.name }}:{{ .Values.global.hyperkube.image.tag }}"
        imagePullPolicy: "{{ .Values.global.hyperkube.image.pullPolicy }}"
        command:
        - /bin/sh
        - -c
        - >
          kubectl get rabbitmqclusters --no-headers | while read -r entry; do
            name=$(echo "$entry" | awk '{print $1}');
            kubectl delete rabbitmqcluster "$name" -n {{ .Release.Namespace }};
          done

Note this is a Job, but something similar can be done in a CronJob. (The `--no-headers` flag keeps the loop from treating the column-header line of `kubectl get` output as a cluster name.)

CodePudding user response:

In your case you can use something like this:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart
  namespace: foo
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-exterminator
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.22.3
              command:
              - /bin/sh
              - -c
              - kubectl get pods -o name | while read -r POD; do kubectl delete "$POD"; sleep 15; done
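The inline command pipes `kubectl get pods -o name` (which prints one `pod/<name>` per line) into a `while read` loop. You can see the shape of that loop without a cluster by faking the output; the pod names below are made up for illustration:

```shell
# Simulate the output of `kubectl get pods -o name` (hypothetical pod names)
list_pods() {
  printf 'pod/web-5d4f7\npod/worker-9k2mq\n'
}

# Same loop shape as the CronJob command: one pod name per iteration,
# with `kubectl delete` replaced by an echo so it runs anywhere
list_pods | while read -r POD; do
  echo "would delete $POD"
done
```

Because `-o name` emits exactly one resource per line with no header, no awk parsing is needed.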

However, do you really need to wait 15 seconds? If you want to be sure that a pod is gone before deleting the next one, you can use --wait=true, so the command becomes:

kubectl get pods -o name | while read -r POD; do kubectl delete "$POD" --wait; done
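Both manifests assume the `pod-exterminator` service account has permission to list and delete pods in the namespace. That RBAC setup is not shown in the answers above; a minimal sketch of what it might look like (the Role and RoleBinding names here are assumptions, not from the original post):

```yaml
# Hypothetical RBAC for the pod-exterminator service account (sketch only)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-exterminator
  namespace: foo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exterminator
  namespace: foo
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exterminator
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-exterminator
subjects:
- kind: ServiceAccount
  name: pod-exterminator
  namespace: foo
```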