Why does "kubectl apply" merge deployment descriptors after rollback


I am using Azure Kubernetes Service (AKS) with Kubernetes version 1.20.9 and have the following deployment:

Version 1:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test
  name: busybox-deployment
  labels:
    app: busybox
spec:
  replicas: 1
  strategy: 
    type: RollingUpdate
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
        env:
          - name: KEY_1
            value: VALUE_1

I deploy it with kubectl apply and check the value of the KEY_1 environment variable; it is correctly set to VALUE_1.
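
For reference, I apply and verify roughly like this (deployment_v1.yaml is just the local file name I use for the manifest):

kubectl apply -f deployment_v1.yaml
kubectl exec -n test deploy/busybox-deployment -- env | grep KEY_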

Then I deploy two more versions (again via kubectl apply), where I change the key-value pair in the env section as shown below, effectively deleting the old environment variable and creating a new one:

Version 2:

        env:
          - name: KEY_2
            value: VALUE_2

Version 3:

        env:
          - name: KEY_3
            value: VALUE_3

After each deploy I check the environment variables and they are fine - version 2 contains the KEY_2:VALUE_2 pair and version 3 contains the KEY_3:VALUE_3 pair.

Now I roll back to version 2 by invoking

kubectl rollout undo deployment ...

This also works correctly, and the container now has the KEY_2:VALUE_2 pair as its only environment variable.
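
To verify, I also check the rollout history and the container environment after the undo:

kubectl rollout history deployment/busybox-deployment -n test
kubectl exec -n test deploy/busybox-deployment -- env | grep KEY_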

However, if I deploy version 3 again, both the container and the deployment descriptor end up with both the KEY_2:VALUE_2 and the KEY_3:VALUE_3 pairs as environment variables. This matches none of the deployed descriptors, because each of them contains only a single environment variable. Subsequent deployments show the same behavior until I manually edit the deployment and delete the unwanted variable.
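
The merged state is visible directly on the live object, for example:

kubectl get deployment busybox-deployment -n test -o jsonpath='{.spec.template.spec.containers[0].env}'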

I have read some explanations, such as this nice article, where it is explained that kubectl apply removes old properties, while kubectl patch does not.

However, this removal does not happen when the apply follows a rollout undo. Any idea why?

Thank you.

CodePudding user response:

This is a known problem with rollout undo, reported in Kubernetes issues #94698, #25236, and #22512.

In layman's terms: kubectl miscalculates the difference and merges the changes, because rollout undo does not update the kubectl.kubernetes.io/last-applied-configuration annotation that kubectl apply uses for its three-way diff, so apply still compares against the configuration that was applied before the rollback.

More about how K8s calculates differences can be found in the docs.
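
You can confirm this yourself by printing the annotation that apply diffs against; after the undo it still contains the version 3 spec (with KEY_3) rather than the rolled-back version 2 spec, which is why a subsequent apply of version 3 has nothing to remove and leaves KEY_2 in place:

kubectl get deployment busybox-deployment -n test -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'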

The workaround is to manually update the last-applied-configuration annotation before re-applying another deployment from a config file:

kubectl apply set-last-applied -f Deployment_v2.yaml -o yaml
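
After resetting the annotation, re-applying version 3 (file name assumed to be Deployment_v3.yaml) should again leave only the single expected variable:

kubectl apply -f Deployment_v3.yaml
kubectl exec -n test deploy/busybox-deployment -- env | grep KEY_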