Why does my pod error "Back-off restarting failed container" when I have `imagePullPolicy: "Always"`?


Why does my pod show the error "Back-off restarting failed container" when I have imagePullPolicy: "Always"? It worked before, but today I deployed it on another machine and it shows this error.

My YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: couchdb
  labels:
    app: couch
spec:
  replicas: 3
  serviceName: "couch-service"
  selector:
    matchLabels:
      app: couch
  template:
    metadata:
      labels:
        app: couch # pod label
    spec:
      containers:
      - name: couchdb
        image: couchdb:2.3.1
        imagePullPolicy: "Always"
        env:
        - name: NODE_NETBIOS_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NODENAME
          value: $(NODE_NETBIOS_NAME).couch-service # FQDN in vm.args
        - name: COUCHDB_USER
          value: admin
        - name: COUCHDB_PASSWORD
          value: admin
        - name: COUCHDB_SECRET
          value: b1709267
        - name: ERL_FLAGS
          value: "-name couchdb@$(NODENAME)"
        - name: ERL_FLAGS
          value: "-setcookie b1709267" #   the “password” used when nodes connect to each other.
        ports:
        - name: couchdb
          containerPort: 5984
        - name: epmd
          containerPort: 4369
        - containerPort: 9100
        volumeMounts:
          - name: couch-pvc
            mountPath: /opt/couchdb/data
  volumeClaimTemplates:
  - metadata:
      name: couch-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
      selector:
        matchLabels:
          volume: couch-volume      

When I describe the pod:

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  23s                default-scheduler  Successfully assigned default/couchdb-0 to b1709267node1
  Normal   Pulled     17s                kubelet            Successfully pulled image "couchdb:2.3.1" in 4.368553213s
  Normal   Pulling    16s (x2 over 22s)  kubelet            Pulling image "couchdb:2.3.1"
  Normal   Created    10s (x2 over 17s)  kubelet            Created container couchdb
  Normal   Started    10s (x2 over 17s)  kubelet            Started container couchdb
  Normal   Pulled     10s                kubelet            Successfully pulled image "couchdb:2.3.1" in 6.131837401s
  Warning  BackOff    8s (x2 over 9s)    kubelet            Back-off restarting failed container

What should I do? Thanks

CodePudding user response:

imagePullPolicy doesn't really have much to do with container restarts. It only determines when the image should be pulled from the container registry; you can read more in the Kubernetes documentation on images.
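
For reference, a minimal sketch of the field on a container spec and its three accepted values, reusing the image from the question (the comments are mine, not from your config):

      containers:
      - name: couchdb
        image: couchdb:2.3.1
        imagePullPolicy: Always          # pull from the registry every time a container is started
        # imagePullPolicy: IfNotPresent  # pull only if the image is not already on the node (default for tagged images)
        # imagePullPolicy: Never         # never pull; the image must already be present on the node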

If a container in a pod keeps restarting, it's usually because there is some error in the command that is the entrypoint of this container. There are two places where you should be able to find additional information that should point you to the solution (see the example after the list):

  • logs of the pod (check using the kubectl logs _YOUR_POD_NAME_ command)
  • description of the pod (check using the kubectl describe pod _YOUR_POD_NAME_ command)
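
For example, assuming the pod name couchdb-0 from the events above:

kubectl logs couchdb-0                # logs of the current container instance
kubectl logs couchdb-0 --previous     # logs of the previous, crashed container instance
kubectl describe pod couchdb-0        # events, container state and exit code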

CodePudding user response:

The CouchDB k8s sample that you are using is already outdated and contains bugs (e.g. ERL_FLAGS is defined twice). You should use the CouchDB Helm chart instead. A basic CouchDB can be installed with:

helm repo add couchdb https://apache.github.io/couchdb-helm

helm install couchdb couchdb/couchdb --set couchdbConfig.couchdb.uuid=$(curl https://www.uuidgenerator.net/api/version4 2>/dev/null | tr -d -)

kubectl get secret couchdb-couchdb -o go-template='{{ .data.adminPassword }}' | base64 -d
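
Once the release is running you can do a quick check, e.g. (assuming the chart's default naming for a release called couchdb, so the first pod is couchdb-couchdb-0, and the default admin user; adjust if yours differ):

kubectl get pods                                      # the couchdb-couchdb-* pods should become Ready
kubectl port-forward couchdb-couchdb-0 5984:5984      # forward the CouchDB port to localhost
curl http://admin:<adminPassword>@localhost:5984/_up  # should report "status":"ok"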