I'm getting the error below when I try to run kubectl logs: dial tcp :10250: i/o timeout
Running kubectl get pods and kubectl describe works fine; it is only when I try to get logs that I receive this error. I have tried restarting the cluster, the tunnelfront pod, etc., to no avail. Does anyone know what could be causing this, or what I can check in the cluster to help resolve it?
Thanks,
CodePudding user response:
Your issue may be caused by the problem described in this troubleshooting guide, but you can also try the solution below, which comes from the Microsoft documentation:
These timeouts may be related to internal traffic between nodes being blocked. Verify that this traffic is not being blocked, such as by network security groups on the subnet for your cluster's nodes.
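One quick way to verify whether that traffic is blocked is to probe the kubelet port (10250) on the node directly from a machine on the cluster network. A minimal sketch, assuming a bash shell with the /dev/tcp pseudo-device; the probe_port helper and the node IP are illustrative, so substitute your own node's internal IP:

```shell
# Check whether a TCP port on a node is reachable within a timeout.
probe_port() {
  local host=$1 port=$2
  # bash's /dev/tcp opens a raw TCP connection; timeout bounds the wait
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "blocked"
  fi
}

# Example (replace with your node's internal IP):
# probe_port <Node_IP> 10250
```

If this prints "blocked" while the node is otherwise up, the NSG or firewall rules on the node subnet are the likely culprit.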
CodePudding user response:
I am having the same error, but on an Ubuntu master with 2 worker nodes on OpenStack. As soon as I run kubectl exec -it (pod), I get the error Error from server: error dialing backend: dial tcp 10.0.3.123:10250: i/o timeout, where 10.0.3.123 is the internal IP of the master.
I have tried this troubleshooting step:
"tcp <Node_IP>:10250: i/o timeout. These timeouts may be related to internal traffic between nodes being blocked. Verify that this traffic is not being blocked, such as by network security groups on the subnet for your cluster's nodes."
I changed my security group to default, but with NO success. I also allowed all TCP connections to and from my CIDR, but still nothing changed. I have tried different stable versions of both Docker and Kubernetes, still without success. I would be grateful if someone could point me to a resource for troubleshooting this issue. Thank you.
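In case it helps narrow things down: besides the security-group side, it is worth confirming that the kubelet is actually bound to 10250 on the node that times out. A rough sketch, assuming the node runs Linux with iproute2's ss available (the IP and the systemctl hint are just illustrative):

```shell
# Run directly on the unreachable node (the master, 10.0.3.123, in this case).
# If nothing is bound to 10250, the problem is the kubelet itself, not the network.
if ss -tln 2>/dev/null | grep -q ':10250'; then
  echo "kubelet is listening on 10250"
else
  echo "nothing listening on 10250 - check the kubelet service (e.g. systemctl status kubelet)"
fi
```

If the kubelet is listening locally but the probe from another node fails, the block is somewhere in between (security groups, iptables, or the OpenStack port configuration).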