How to run a job on each node of Kubernetes instead of a DaemonSet


There is a Kubernetes cluster with 100 nodes, and I have to clean up specific images manually. I know the kubelet garbage collection may help, but it doesn't apply in my case. After browsing the internet, I found a possible solution - docker in docker - to solve my problem.

I just want to remove the image on each node once. Is there any way to run a job once on each node?

I checked Kubernetes labels and pod affinity, but still have no idea. Could anybody help?

Also, I tried to use a DaemonSet to solve the problem, but it turns out that it only removes the image on a part of the nodes instead of all of them. I don't know what the problem might be...

Here is the DaemonSet example:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: test-ds
  labels:
    k8s-app: test
spec:
  selector:
    matchLabels:
      k8s-app: test
  template:
    metadata:
      labels:
        k8s-app: test
    spec:
      containers:
      - name: test
        env:
        - name: DELETE_IMAGE_NAME
          value: "nginx"
        # busybox does not ship curl; an image that includes it is needed, e.g. curlimages/curl
        image: curlimages/curl
        # delete the image through the Docker Engine API on the host's docker.sock;
        # $(DELETE_IMAGE_NAME) is expanded by Kubernetes from the env var defined above
        command: ['sh', '-c', 'curl --unix-socket /var/run/docker.sock -X DELETE http://localhost/v1.39/images/$(DELETE_IMAGE_NAME)']
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock-volume
        ports:
        - containerPort: 80
      volumes:
      - name: docker-sock-volume
        hostPath:
          # location on host
          path: /var/run/docker.sock

CodePudding user response:

If you want to run your job on a single specific node, you can use a nodeSelector in the Pod spec:

apiVersion: batch/v1   # batch/v1beta1 for CronJob was removed in Kubernetes 1.25
kind: CronJob
metadata:
  name: test
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: test
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
          nodeSelector: 
            name: node3
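
The name: node3 selector only matches if you have added a custom name=node3 label to that node. If the nodes have not been labelled manually, a small sketch using the built-in kubernetes.io/hostname label (which Kubernetes sets automatically on every node) would look like this, assuming the target node's hostname is node3:

          nodeSelector:
            # kubernetes.io/hostname is a well-known label set automatically on every node;
            # assumes the target node's hostname is node3
            kubernetes.io/hostname: node3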

A DaemonSet should ideally resolve your issue, as it creates a Pod on each available node in the cluster.

You can read more about affinity here: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the types of constraints you can express. The key enhancement is that the affinity/anti-affinity language is more expressive: it offers more matching rules besides exact matches created with a logical AND operation.

You can use affinity in the Job YAML, something like this:

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
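
The snippet above is a standalone Pod; in a Job, the same nodeAffinity block sits under spec.template.spec. A minimal sketch, reusing the placeholder label key and values from the example above (the Job name, image and command here are illustrative only):

apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-node-affinity
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/e2e-az-name
                operator: In
                values:
                - e2e-az1
                - e2e-az2
      containers:
      - name: job-with-node-affinity
        image: busybox
        command: ['sh', '-c', 'echo Hello from a node in the selected zone']
      restartPolicy: Never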

CodePudding user response:

Also, I tried to use a DaemonSet to solve the problem, but it turns out that it only removes the image on a part of the nodes instead of all of them

At what point in time do you need to clean the images? DaemonSet pods are created when a node joins the cluster (and on all existing nodes when the DaemonSet itself is created). Is it possible that when you created the DaemonSet, it cleaned up the image on the nodes that already existed, but on nodes added later the image was not present yet when the DaemonSet pod started?
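
Another thing worth checking is whether the nodes that were skipped are tainted (control-plane nodes, for example); DaemonSet pods are not scheduled onto tainted nodes unless they tolerate the taints. As a rough sketch, assuming taints are the cause in your cluster, a blanket toleration under spec.template.spec of the DaemonSet would let the pods run on those nodes as well:

      # goes under spec.template.spec of the DaemonSet
      tolerations:
      - operator: Exists   # tolerates any taint, so the pod can be scheduled on every node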
