When does a Pod get destroyed?


The Pod lifecycle is managed by the kubelet in the data plane.

As per the documentation: if the liveness probe fails, the kubelet kills the container.

A Pod is essentially a set of containers sharing a dedicated network namespace and IPC namespace, anchored by a sandbox container.


Say the Pod is a single-app-container Pod. Then, upon a liveness failure:

Does the kubelet kill the Pod?

or

Does the kubelet kill only the container within the Pod?

CodePudding user response:

A Pod is indeed the smallest deployable unit in Kubernetes, but that does not mean it is "empty" without a container.

In order to spawn a Pod, and with it the namespaces that further containers attach to, a very small container is created from the pause image. It holds the network namespace and the IP address allocated to the Pod. Afterward, the init containers and the runtime containers declared for the Pod are started.
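
This sandbox is normally invisible to kubectl, but it can be seen on the node itself. A minimal sketch, assuming node access and a CRI runtime managed with crictl (the Pod ID is a placeholder):

# The Pod sandbox (backed by the pause container) is listed here
$ crictl pods --name app-1

# The app containers attached to that sandbox
$ crictl ps --pod <POD_ID>

# Sandbox details, including the Pod IP allocated to it
$ crictl inspectp <POD_ID>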

If the liveness probe fails, the container is restarted; the Pod survives this. That is important: you might want to get the logs of the crashed/restarted container afterwards, which would not be possible if the Pod were destroyed and recreated.
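
That is exactly what kubectl's --previous flag is for. A quick sketch, assuming a Pod app-1 whose container web has been restarted (the names match the example in the next answer):

# Logs of the current container instance
$ kubectl logs app-1 -c web

# Logs of the previous instance that the kubelet killed
$ kubectl logs app-1 -c web --previous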

CodePudding user response:

The kubelet uses liveness probes to know when to restart a container (NOT the entire Pod). If the liveness probe fails, the kubelet kills the container, and the container may then be restarted, depending on the Pod's restart policy.
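
The restart policy is set once per Pod spec and applies to all of its containers. A minimal sketch (the Pod name is hypothetical, for illustration only):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo      # hypothetical name for illustration
spec:
  restartPolicy: Always    # default; alternatives: OnFailure, Never
  containers:
  - image: nginx
    name: web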


I've created a simple example to demonstrate how it works.

First, I've created an app-1 Pod with two containers (web and db). The web container has a liveness probe configured that always fails, because nginx listens on port 80 and nothing serves /healthz on port 8080.

$ cat app-1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: app-1
  name: app-1
spec:
  containers:
  - image: nginx
    name: web
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
  - image: postgres
    name: db
    env:
    - name: POSTGRES_PASSWORD
      value: example

After applying the above manifest and waiting some time, we can describe the app-1 Pod to check that only the web container has been restarted and the db container is running without interruption:
NOTE: Only the relevant parts of the kubectl describe pod app-1 output are shown, not the entire output.

$ kubectl apply -f app-1.yml
pod/app-1 created
    
$ kubectl describe pod app-1
    
Name:         app-1
...
Containers:
  web:
...
    Restart Count:  4   <--- Note that the "web" container was restarted 4 times
    Liveness:       http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
...
  db:
...
    Restart Count:  0   <--- Note that the "db" container works fine
...
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
...
  Normal   Killing    78s (x2 over 108s)   kubelet            Container web failed liveness probe, will be restarted
...
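
A more compact way to read the same information is kubectl's JSONPath output; a small sketch that prints each container's restart count:

$ kubectl get pod app-1 -o jsonpath='{range .status.containerStatuses[*]}{.name}={.restartCount}{"\n"}{end}'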

We can connect to the db container to see if it is running:
NOTE: The db container remains usable even while the web container is being restarted.

$ kubectl exec -it app-1 -c db -- bash
root@app-1:/#
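
Instead of an interactive shell, a one-off check works too. A sketch assuming the stock postgres image, which ships the pg_isready utility; it should report that the server is accepting connections:

$ kubectl exec app-1 -c db -- pg_isready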

In contrast, after connecting to the web container, we can observe that the liveness probe restarts this container: the exec session is killed together with the container (exit code 137 = 128 + SIGKILL):

$ kubectl exec -it app-1 -c web -- bash
root@app-1:/# command terminated with exit code 137
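
As a side note: to see the opposite behaviour, the probe in this example only needs to target something nginx actually serves. A sketch of the changed fragment (assuming the stock nginx image, which serves / on port 80):

    livenessProbe:
      httpGet:
        path: /
        port: 80

With this change the probe succeeds and the restart count of the web container stays at 0.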