Kubernetes StatefulSet Pod CrashLoopBackOff with no error in the logs

Time:01-02

My pod restarts every 30 seconds or so. I've tried `kubectl describe` and checking the logs, to no avail.

describe:


      /home/speechuser/start_worker.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 02 Jan 2023 16:02:31 +0800
      Finished:     Mon, 02 Jan 2023 16:02:36 +0800
    Ready:          False
    Restart Count:  3

  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  85s                default-scheduler  Successfully assigned default/workers-0 to minikube
  Normal   Pulled     30s (x4 over 84s)  kubelet            Container image "workers:b8f7df8648d3e395804485a5b95de0a6d1441394e2e7c151e8b90cd3d92" already present on machine
  Normal   Created    30s (x4 over 84s)  kubelet            Created container workers
  Normal   Started    30s (x4 over 84s)  kubelet            Started container workers
  Warning  BackOff    11s (x5 over 73s)  kubelet            Back-off restarting failed container

I've checked the container level logs, there are no signs of error as well. How can I debug this further?

CodePudding user response:

CrashLoopBackOff is a Kubernetes state that represents a restart loop: a container in the Pod starts, crashes, and is restarted, over and over.

Kubernetes waits an increasing back-off delay between restarts to give you a chance to fix the error. CrashLoopBackOff is therefore not an error in and of itself, but a sign that something is preventing the Pod from starting properly.

Keep in mind that the Pod is restarting because its restartPolicy is set to Always (the default) or OnFailure. The kubelet reads this configuration and restarts the Pod's containers, causing the loop. This behavior is actually useful: it gives missing resources time to become available, and it gives you a chance to identify and debug the issue.
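Note that a StatefulSet's Pod template only permits restartPolicy: Always, so to break the loop while debugging you can run a standalone copy of the Pod with the policy relaxed. A minimal sketch, with the Pod name and command being assumptions (the image is taken from the events above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workers-debug          # hypothetical name for a one-off debug copy
spec:
  restartPolicy: Never         # stop the restart loop; the default is Always
  containers:
    - name: workers
      image: workers:b8f7df8648d3e395804485a5b95de0a6d1441394e2e7c151e8b90cd3d92
      # Assumed entrypoint, based on the describe output above:
      command: ["/home/speechuser/start_worker.sh"]
```

With restartPolicy: Never the container runs once, so you can inspect its final state and logs at leisure instead of racing the back-off timer.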

Refer to the Kubernetes documentation on debugging Pods to resolve your issue; `kubectl logs --previous <pod>` is especially useful here, since it shows the logs of the previous, terminated container instance rather than the current one.

CodePudding user response:

Here, I can see that your container is being terminated with exit code 0. That means it completed its execution normally rather than crashing, so you should work out why the container finishes. Most likely the script running inside the container (start_worker.sh) returns after a few seconds, for example because it launches its real work in the background; once the main process exits, Kubernetes restarts the container.
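As a sketch of that failure mode (the script contents below are assumptions, not your actual start_worker.sh): a wrapper that backgrounds its worker returns immediately with exit code 0, matching the Completed / Exit Code: 0 state above, while a wrapper that waits on the worker keeps PID 1 alive.

```shell
#!/bin/sh
# Hypothetical reconstruction: the "bad" wrapper backgrounds its worker and
# returns, so the container's main process exits 0 and the kubelet restarts it.

cat > /tmp/start_worker_bad.sh <<'EOF'
#!/bin/sh
sleep 60 &        # stand-in for the real worker process
EOF

cat > /tmp/start_worker_good.sh <<'EOF'
#!/bin/sh
sleep 60 &        # stand-in for the real worker process
wait              # block on the worker so the main process stays alive
EOF
chmod +x /tmp/start_worker_bad.sh /tmp/start_worker_good.sh

# The bad wrapper returns immediately with exit code 0 ("Completed"):
/tmp/start_worker_bad.sh; echo "bad wrapper exit: $?"

# The good wrapper would block for 60s; give it 2s to prove it stays up
# (timeout kills it, showing the main process remained in the foreground):
timeout 2 /tmp/start_worker_good.sh; echo "good wrapper exit: $?"
```

Using `exec` to replace the wrapper with the worker binary achieves the same effect as `wait`, and additionally lets container signals reach the worker directly.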
