Master does not recognize accessible workers in k8s


I have created a cluster with 1 master and 3 workers, using VMs in a VLAN, for testing.

When I power on the master's VM and then turn on a worker's VM, the master does not recognize the worker and it stays in "NotReady" status.

Should I restart a specific service, or is there another way to fix this?

Thank you for your attention.

CodePudding user response:

Try this out and reply with the results. First, install the flannel CNI:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
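If a missing CNI plugin is what keeps the worker NotReady, the node should turn Ready once the flannel pods are running. A quick check, assuming the kube-system namespace and app=flannel label used by the older coreos manifest linked above:

# Flannel runs as a DaemonSet; its pods should be Running on every node.
kubectl -n kube-system get pods -l app=flannel -o wide

# The worker should flip from NotReady to Ready once its CNI is up.
kubectl get nodes
kubectl describe node <worker-node> | grep -A5 Conditions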
  • Try removing the taint from your master (replace --all with your node name, or leave it to remove the taint from all nodes):
kubectl taint nodes --all node-role.kubernetes.io/master-
  • Add a worker label to your other nodes (not the master); a quick verification sketch follows this list:
kubectl label nodes <node> node-role.kubernetes.io/worker=
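
A quick way to verify both steps above (the node names are placeholders):

# The master should no longer list the node-role.kubernetes.io/master taint.
kubectl describe node <master-node> | grep Taints

# The worker label should show up in the LABELS column.
kubectl get nodes --show-labels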

CodePudding user response:

Please start the kubelet on the worker node after the restart.

To survive a restart, the kubelet can be enabled as a systemd service on Linux machines, so that it starts automatically after the machine reboots.

systemctl enable kubelet
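
A minimal sketch for enabling and checking the kubelet on the worker, assuming a systemd-based Linux distribution with sudo access:

# Enable the kubelet at boot and start it immediately.
sudo systemctl enable --now kubelet

# Verify it is running, and follow its logs if the node stays NotReady.
systemctl status kubelet
journalctl -u kubelet -f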