DaemonSet with two different images, but only use one of them


What I want to achieve: let's say I have a physical cluster consisting of 20 worker nodes, and the customer wants to roll out a new version of a Docker image to a DaemonSet. However, the customer does not want to roll it out to the entire cluster; they want to dedicate the update to just 3 "pilot" nodes. We use keel to update the image automatically. Is there a way to update just these pilots with the new image, and let the other 17 nodes keep using the "old" image?

We have a k8s cluster with a DaemonSet with nodeSelector=worker that "installs" a pod with a specific container on each node. I don't see how I can achieve this without using two different DaemonSets. Is there any solution to this problem?

I don't really know how to tackle this at all and have searched the internet for solutions, but could not find anything.

CodePudding user response:

You could use the OnDelete update strategy for your DaemonSet (docs):

With OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.

So you could delete the pods on the pilot nodes manually and Kubernetes would redeploy them with the new image. This has a few drawbacks: if a "non-pilot" node gets rebooted in the meantime, its pod will also come back with the new image, and there are manual steps involved.
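A minimal sketch of how that could look; the DaemonSet name, the name=my-daemonset pod label, and the node name below are placeholders for your own:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  updateStrategy:
    type: OnDelete   # pods keep running until you delete them yourself
  ...

Then, after updating the template, roll a single pilot node onto the new image by deleting its pod:

kubectl delete pod -l name=my-daemonset --field-selector spec.nodeName=worker18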

Another way would be to deploy two DaemonSets with different images, label the pilot nodes, and let the DaemonSet with the new image be deployed only to the pilot nodes and the one with the old image only to the non-pilot nodes (via nodeAffinity). E.g.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: new-version
spec:
  selector:
    matchLabels:
      name: new-version
  template:
    metadata:
      labels:
        name: new-version
    spec:
      containers:
      - image: image:new-version
        name: ...
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: my-pilot-node-label
                operator: Exists

And the "old" daemonset:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: old-version
spec:
  selector:
    matchLabels:
      name: old-version
  template:
    metadata:
      labels:
        name: old-version
    spec:
      containers:
      - image: image:old-version
        name: ...
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: my-pilot-node-label
                operator: DoesNotExist
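With this setup, moving a node into or out of the pilot group is just a labeling operation, e.g. (node names are placeholders):

kubectl label nodes worker18 worker19 worker20 my-pilot-node-label=true

Removing the label again (kubectl label nodes worker18 my-pilot-node-label-) lets the DaemonSet controller swap that node back to the old image.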

CodePudding user response:

For the case that you need to keep a single DaemonSet, a solution might be to run two containers in each pod of the DaemonSet and decide, based on the node name, which container actually runs the original entry point of its image, while the other simply runs an idle loop.

Even though this is a more or less "hacky" solution and not what sidecar containers were designed for, it might fit the purpose.

Note: one downside of this solution is that you cannot expose the same port on both containers. If you need to expose a port, you would have to expose a different port in each container and run a third container that listens on the original port and forwards traffic on localhost to the port of whichever container (production or pilot) is active on that node.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: canary-daemon-set
spec:
  selector:
    matchLabels:
      uuid: "bd960882-5532-11ed-9bbb-3fa21b1d20ae"
  template:
    metadata:
      labels:
        uuid: "bd960882-5532-11ed-9bbb-3fa21b1d20ae"
    spec:
      containers:
      - name: production
        image: bash:5.1
        command:
        - bash
        - -c
        # run the entry point as defined in the image only if this is a
        # production node (decide based on the node name and the list of
        # pilot nodes whether to run the image's entry point or an idle loop)
        - if echo $PILOT_NODES | grep -w -q $NODE_NAME; then echo not starting, this is a pilot node; while true; do sleep 10; done; else bash --version; echo run original command here; while true; do sleep 10; done; fi
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: PILOT_NODES
          value: "worker18 worker19 worker20"
      - name: pilot
        image: bash:5.2
        command:
        - bash
        - -c
        # run the entry point as defined in the image only if this is a
        # pilot node (decide based on the node name and the list of
        # pilot nodes whether to run the image's entry point or an idle loop)
        - if ! echo $PILOT_NODES | grep -w -q $NODE_NAME; then echo not starting, this is not a pilot node; while true; do sleep 10; done; else bash --version; echo run original command here; while true; do sleep 10; done; fi
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: PILOT_NODES
          value: "worker18 worker19 worker20"