Two Kubernetes Deployments with exactly the same pod labels

Time:10-29

Let's say I have two deployments which are exactly the same apart from deployment name:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-d
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-d2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: nginx
        image: nginx

Since these two deployments have the same selectors and the same pod template, I would expect to see three pods. However, six pods are created:

# kubectl get pods --show-labels
NAME                        READY   STATUS    RESTARTS   AGE     LABELS
nginx-d-5b686ccd46-dkpk7    1/1     Running   0          4m16s   app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-nz7wf    1/1     Running   0          4m16s   app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-vdtfr    1/1     Running   0          4m16s   app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-nqmq7   1/1     Running   0          4m16s   app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-nzrlc   1/1     Running   0          4m16s   app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-qgjkn   1/1     Running   0          4m16s   app=mynginx,pod-template-hash=5b686ccd46

Why is that?

CodePudding user response:

Consider this: the pods are not managed directly by a Deployment. Instead, a Deployment manages a ReplicaSet, and the ReplicaSet in turn manages the pods.

This can be verified with:

kubectl get rs
NAME                  DESIRED   CURRENT   READY   AGE
nginx-d-5b686ccd46    3         3         3       74s
nginx-d2-5b686ccd46   3         3         0       74s

You choose which pods a ReplicaSet or Deployment manages by specifying its selector. In addition, each Deployment stamps a pod-template-hash label onto the pods it creates, so that it can tell which pods belong to its own current ReplicaSet and which belong to other ReplicaSets (for example, older ones left over from a rollout).

You can inspect this as well:

kubectl get pods --show-labels  
NAME                        READY   STATUS    RESTARTS   AGE   LABELS
nginx-d-5b686ccd46-7j4md    1/1     Running   0          4m    app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-9j7tx    1/1     Running   0          4m    app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-zt4ls    1/1     Running   0          4m    app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-ddcr2   1/1     Running   0          75s   app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-fhvm7   1/1     Running   0          79s   app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-q99ww   1/1     Running   0          83s   app=mynginx,pod-template-hash=5b686ccd46

This hash is also added to the ReplicaSet's selector as a match label:

spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
      pod-template-hash: 5b686ccd46
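The consequence of both ReplicaSets ending up with the same selector can be sketched with a toy model in plain Python (hypothetical data structures, not Kubernetes API code): a label selector matches any pod that carries all of the selector's key/value pairs, so here each selector matches all six pods, not just its own three.

```python
# Toy model of Kubernetes label selection (not Kubernetes API code):
# a selector matches a pod if every key/value in the selector is
# present in the pod's labels.
def matches(selector: dict, labels: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

# All six pods carry identical labels, as in the kubectl output above.
pods = [
    {"name": f"nginx-d-5b686ccd46-{suffix}",
     "labels": {"app": "mynginx", "pod-template-hash": "5b686ccd46"}}
    for suffix in ("7j4md", "9j7tx", "zt4ls")
] + [
    {"name": f"nginx-d2-5b686ccd46-{suffix}",
     "labels": {"app": "mynginx", "pod-template-hash": "5b686ccd46"}}
    for suffix in ("ddcr2", "fhvm7", "q99ww")
]

# Both ReplicaSets have this same selector, so by labels alone each
# one "sees" all six pods rather than only its own three.
selector = {"app": "mynginx", "pod-template-hash": "5b686ccd46"}
matched = [p["name"] for p in pods if matches(selector, p["labels"])]
print(len(matched))  # 6
```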

In this case even these selectors are identical, because the two pod templates are the same and therefore produce the same hash. What actually tells the pods apart is the ownerReferences field, which you can see by inspecting a pod:

kubectl get pod nginx-d-5b686ccd46-7j4md -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-10-28T14:53:17Z"
  generateName: nginx-d-5b686ccd46-
  labels:
    app: mynginx
    pod-template-hash: 5b686ccd46
  name: nginx-d-5b686ccd46-7j4md
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-d-5b686ccd46
    uid: 7eb8fdaf-bfe7-4647-9180-43148a036184
  resourceVersion: "556"

More information on this can be found here: https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/

So a Deployment (and its ReplicaSet) disambiguates which pods are managed by which through owner references, and each one ensures its own desired number of replicas. That is why you end up with six pods instead of three.
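That ownership-based bookkeeping can be sketched with another toy model in plain Python (hypothetical UIDs, not the actual controller code): each ReplicaSet counts only the pods whose ownerReference UID points back at itself, so neither one tries to scale the other's pods away.

```python
# Toy model of ownerReference-based disambiguation (hypothetical UIDs,
# not Kubernetes controller code): labels are identical, but each pod
# records the UID of the ReplicaSet that created it.
pods = [
    {"name": "nginx-d-5b686ccd46-7j4md",  "owner_uid": "rs-uid-1"},
    {"name": "nginx-d-5b686ccd46-9j7tx",  "owner_uid": "rs-uid-1"},
    {"name": "nginx-d-5b686ccd46-zt4ls",  "owner_uid": "rs-uid-1"},
    {"name": "nginx-d2-5b686ccd46-ddcr2", "owner_uid": "rs-uid-2"},
    {"name": "nginx-d2-5b686ccd46-fhvm7", "owner_uid": "rs-uid-2"},
    {"name": "nginx-d2-5b686ccd46-q99ww", "owner_uid": "rs-uid-2"},
]

def owned_pods(rs_uid: str) -> list:
    """Pods a given ReplicaSet considers its own."""
    return [p["name"] for p in pods if p["owner_uid"] == rs_uid]

# Each ReplicaSet sees exactly its own 3 replicas, so both are
# satisfied and 6 pods coexist.
print(len(owned_pods("rs-uid-1")), len(owned_pods("rs-uid-2")))  # 3 3
```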
