Kubernetes Worker Node status NotReady status: node.kubernetes.io/unreachable:NoSchedule

Time:02-18

I have a Kubernetes cluster set up using kubeadm, with one master node and one worker node. I have a few pods deployed to the worker node. The worker node has no taints, and the pods have no tolerations.

However, after some time I noticed the pods were in Pending and Terminating states. When I ran kubectl get node worker-1, the worker-1 node was in the NotReady state.

When I ran kubectl describe node worker-1, I saw that three taints had been added to the worker node automatically:

node.kubernetes.io/unreachable:NoExecute
node.cloudprovider.kubernetes.io/shutdown:NoSchedule
node.kubernetes.io/unreachable:NoSchedule

I am not sure what these automatically assigned taints mean or how they were added. After a while, the taints were removed automatically, and the pods were rescheduled and started running again.

Since the pods didn't have any tolerations and these three taints were suddenly added to the worker node, the pods started terminating - that part I understand.

But why were these three taints added to the worker node in the first place, and under what conditions are they added automatically?
Does anyone know the answer to this?

CodePudding user response:

Two of those taints are applied by Kubernetes itself: the node lifecycle controller taints a node with node.kubernetes.io/unreachable (both NoSchedule and NoExecute) when the kubelet stops posting heartbeats and the node's Ready condition becomes Unknown. The third, node.cloudprovider.kubernetes.io/shutdown:NoSchedule, is added by the cloud-controller-manager when your cloud provider reports the underlying VM instance as shut down. In other words, the VM backing worker-1 was stopped or unreachable for a while; check the instance state and activity log in your cloud provider console for the reason. Once the instance came back and the kubelet resumed heartbeats, the controllers removed the taints and your pods were rescheduled.
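As for why the pods terminated even though you never defined tolerations: the DefaultTolerationSeconds admission plugin adds a pair of tolerations to every pod, so pods survive a brief outage but are evicted if the node stays NotReady. A sketch of what gets injected into a pod spec (the 300-second value is the upstream default; your cluster's apiserver may be configured differently):

```yaml
# Tolerations auto-added to each pod by the
# DefaultTolerationSeconds admission plugin.
# 300 seconds is the default; it is configurable via
# apiserver flags, so your cluster may differ.
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300   # tolerate a not-ready node for 5 minutes
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300   # after that, the pod is evicted
```

You can confirm these on any running pod with kubectl get pod <name> -o yaml and looking at .spec.tolerations.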
