Containers crashing with "CrashLoopBackOff" status in Minikube


I am new to containerization and am having some difficulties. I have an application that consists of a React frontend, a Python backend using FastAPI, and a PostgreSQL database using SQLAlchemy for object-relational mapping. I decided to put each component inside a Docker container so that I can deploy the application on Azure in the future (I know some people have strong opinions about running the frontend and database in containers, but the project's requirements call for it).

After doing this, I started working with Minikube. However, I am running into a problem: all the containers that should be running inside pods have the status "CrashLoopBackOff". From what I can tell, this means the images are pulled from Docker Hub and the containers start, but they then fail for some reason.

I tried running "kubectl logs" and nothing is returned. In the Events section, "kubectl describe" shows: "Warning BackOff 30s (x140 over 30m) kubelet Back-off restarting failed container."
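
For reference, these are roughly the commands I ran (the pod name placeholder stands for whatever "kubectl get pods" shows):

kubectl get pods
kubectl logs <frontend-pod-name>
kubectl describe pod <frontend-pod-name>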

I have also tried to minimize the complexity by just trying to run the frontend component. Here are my Dockerfile and manifest file:

Dockerfile:

FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]

Manifest file (.yml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: xxxtest/my-first-repo:yyy-frontend
        ports:
        - containerPort: 3000

I do not have a service manifest yet, and I don't think it is related to this issue.
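
For completeness, this is roughly how I build, push, and deploy the image (the manifest file name here is approximate):

docker build -t xxxtest/my-first-repo:yyy-frontend .
docker push xxxtest/my-first-repo:yyy-frontend
kubectl apply -f frontend.yml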

Can anyone provide any help or tips on how to troubleshoot this issue? I would really appreciate any guidance you can offer. Thank you in advance for your time and assistance!

Have a great day!

CodePudding user response:

A CrashLoopBackOff means the container itself is failing. To fix the error, you need to look at the container logs. Here are my tips (a rough command sketch follows the list):

  • The best practice in K8s is to redirect application logs to /dev/stdout or /dev/stderr rather than to a file, so that you can read them with kubectl logs <POD NAME>.

  • Try clearing your local container image cache, then pull and run the exact image and tag you configured in your deployment file, to check whether it also crashes outside the cluster.

  • If you need any environment variables to run the container locally, you will also need to set those variables in your deployment file.

  • Set imagePullPolicy: Always, especially if you keep pushing to the same image tag. EDIT: because the default image pull policy is IfNotPresent, if you fix the container image but reuse the same tag, Kubernetes will not pull the new image version.
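
As a rough sketch of these tips against the deployment from the question (the pod name placeholder, the env var name, and the backend URL are only examples, not something from the question):

kubectl get pods                                   # find the frontend pod name
kubectl logs <frontend-pod-name> --previous        # logs from the last crashed container
kubectl describe pod <frontend-pod-name>           # exit code and events for the crash

# Reproduce locally with the exact image and tag from the deployment file
docker pull xxxtest/my-first-repo:yyy-frontend
docker run --rm -p 3000:3000 xxxtest/my-first-repo:yyy-frontend

# If the app needs environment variables, add them to the deployment too (this variable is hypothetical)
kubectl set env deployment/frontend REACT_APP_API_URL=http://backend:8000

The env entries and imagePullPolicy: Always can also be added directly to the container section of the deployment manifest instead of being set from the command line.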
