I did kubeadm init on one machine. I followed all the instructions on networking etc. and ended up with this:
kubectl get nodes

NAME              STATUS   ROLES           AGE    VERSION
slchvdvcybld001   Ready    control-plane   140m   v1.24.2
slchvdvcydtb001   Ready    <none>          136m   v1.24.2
slchvdvcytst001   Ready    <none>          137m   v1.24.2
As you can see, none of the nodes shows a master or worker role.
I don't have any special setup; all I did was install Kubernetes and run kubeadm init.
There are no errors in the log files, the dashboard is green, and everything reports healthy.
These are the versions of kubectl and the server:
Client Version: v1.24.2
Kustomize Version: v4.5.4
Server Version: v1.24.2
CodePudding user response:
Labelling the control-plane node as "master" is deprecated. That's why kubectl get nodes
shows the role as "control-plane" instead of "control-plane,master".
More details are in the following kubeadm KEP: http://git.k8s.io/enhancements/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md
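To confirm this on your own cluster, you can list the node's labels. This is just a sketch using the control-plane node name from your output:

kubectl get node slchvdvcybld001 --show-labels | tr ',' '\n' | grep node-role
# on kubeadm v1.24+ this should print node-role.kubernetes.io/control-plane=
# and no longer the old node-role.kubernetes.io/master label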
CodePudding user response:
The Kubernetes kube-scheduler doesn't require nodes to carry particular role labels in order to consider them feasible for scheduling. The only role label set automatically is node-role.kubernetes.io/control-plane,
which kubeadm applies to the control-plane node during installation.
In your case, just manually add the worker label with:
kubectl label nodes slchvdvcydtb001 node-role.kubernetes.io/worker=
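The ROLES column in kubectl get nodes is derived from the node-role.kubernetes.io/* labels, so after labelling the node the output should look roughly like this (illustrative, based on your node names):

kubectl get nodes

NAME              STATUS   ROLES           AGE    VERSION
slchvdvcybld001   Ready    control-plane   140m   v1.24.2
slchvdvcydtb001   Ready    worker          136m   v1.24.2
slchvdvcytst001   Ready    <none>          137m   v1.24.2

You can repeat the label command for slchvdvcytst001 if you want it to show as a worker as well.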