How do I uncover the reason for "Pending" when my Node.js app fails to deploy to Kubernetes

Time:08-30

I wrote a simple Node.js app that listens on a port and returns HTML. I can docker run the app and, with the port mapping in place, hit it happily:

  docker run -p 7081:7081 split-server

Now I want to run the app in Kubernetes. I am on a Mac and set up Minikube with the VirtualBox driver. I also set up a local Docker registry for my local app, using instructions found here.
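For reference, publishing a local image to such a registry typically looks like the following sketch (the registry address and image name are taken from the question; adjust them for your own setup):

```shell
# Tag the locally built image with the registry's address, then push it.
# (Registry address 192.168.4.26:5000 is assumed from the question.)
docker tag split-server:latest 192.168.4.26:5000/split-server:latest
docker push 192.168.4.26:5000/split-server:latest

# Optionally confirm the registry knows about the image:
curl http://192.168.4.26:5000/v2/_catalog
```

Note that the repository name used here must match the image reference in the Deployment YAML exactly, or the kubelet will fail to pull it.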

No matter what combination of things I try, the pod stays Pending. The describe output is below. I think I'm close, but I just can't get useful debugging output from kubectl:

  kubectl describe pod split-server
Name:           split-server-68fc6cdcd-gpk5m
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=split-server
                pod-template-hash=68fc6cdcd
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/split-server-68fc6cdcd
Containers:
  app:
    Image:      split-server:latest
    Port:       7081/TCP
    Host Port:  0/TCP
    Environment:
      SPLIT_API_KEY:  <API KEY>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f8lzd (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-f8lzd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  3m27s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

My YAML is...

apiVersion: v1
kind: Service
metadata:
  name: split-server
spec:
  selector:
    app: split-server
  ports:
    - port: 7081
      targetPort: 7081
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: split-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: split-server
  template:
    metadata:
      labels:
        app: split-server
    spec:
      containers:
        - name: app 
          image: 192.168.4.26:5000/split-server:latest
          #image: split-server:latest
          ports:
            - containerPort: 7081
          env:
            - name: SPLIT_API_KEY
              value: <API KEY> 
          imagePullPolicy: Always

And here is what docker has for its list of images:

docker images
REPOSITORY                             TAG             IMAGE ID       CREATED             SIZE
split-server                           latest          d2caa2d0c693   45 minutes ago      1.01GB
192.168.4.26:5000/local/split-server   latest          d2caa2d0c693   45 minutes ago      1.01GB

Where should I be hunting? What tools am I missing? kubectl logs comes back empty every time; the app should emit a single line of logging if it had come up properly.

CodePudding user response:

The Minikube node is marked as unschedulable for some reason, either manually (e.g. via kubectl cordon) or because of a problem on the node. You can try to remove the taint:

kubectl taint nodes --all node.kubernetes.io/unschedulable-
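Before and after removing the taint, you can inspect the node's state to confirm what the scheduler is seeing (the node name minikube is assumed here):

```shell
# Is the node cordoned? Prints "true" if .spec.unschedulable is set.
kubectl get node minikube -o jsonpath='{.spec.unschedulable}'

# List the taints currently on the node:
kubectl describe node minikube | grep -A 3 Taints
```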

or add a toleration on your pod:

apiVersion: v1
kind: Pod
metadata:
  name: ...
  ...
spec:
  containers:
  - name: ...
    ...
  tolerations:
  - key: "node.kubernetes.io/unschedulable"
    operator: "Exists"
    effect: "NoSchedule"
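Since the question uses a Deployment rather than a bare Pod, the toleration belongs in the pod template's spec, as a sibling of containers. A sketch based on the YAML from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: split-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: split-server
  template:
    metadata:
      labels:
        app: split-server
    spec:
      tolerations:                # pod-level field, not per-container
        - key: "node.kubernetes.io/unschedulable"
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: app
          image: 192.168.4.26:5000/split-server:latest
          ports:
            - containerPort: 7081
```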

CodePudding user response:

Removing the taint got me past this problem... thanks Hussein!

kubectl patch nodes minikube --patch '{"spec":{"unschedulable": false}}'
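For the record, the same effect can be had with the dedicated subcommand, which is the usual way to clear the unschedulable flag:

```shell
# Equivalent to patching .spec.unschedulable to false:
kubectl uncordon minikube
```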

On to the next one...

CodePudding user response:

The way to troubleshoot a Pending pod is to look at the events shown when you describe the pod. In your case the node is marked unschedulable, hence you are facing the issue.

The command to fix it is as Hussein said; also refer to this page to get an idea of how to troubleshoot a pending pod:

Troubleshooting pending pods
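A typical sequence for digging into a Pending pod looks like this (the pod and node names are taken from the question; the output depends on your cluster). Scheduling failures surface as events, not container logs, which is why kubectl logs stays empty:

```shell
# The Events section at the bottom usually names the scheduling blocker:
kubectl describe pod split-server

# Cluster-wide events, oldest first, for broader context:
kubectl get events --sort-by=.metadata.creationTimestamp

# Check node readiness, cordon status, capacity, and taints:
kubectl get nodes
kubectl describe node minikube
```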
