I have created a Kubernetes cluster with 2 nodes, one Master node and one Worker node (2 different VMs).
The worker node has successfully joined the cluster, so when I run the command
kubectl get nodes
on my master node, it shows that both nodes exist in the cluster!
However, when I run the command kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
from my worker node's terminal, in order to create a deployment on the worker node, I get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Any idea what is going on here?
CodePudding user response:
The easy way to do it is to copy the config from the master node, usually found at /etc/kubernetes/admin.conf, to whatever node you want to configure kubectl on (even the master node itself). The location to copy it to is: $HOME/.kube/config
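For example, to pull the config over SSH from the master onto the worker node (a minimal sketch; the hostname master-node and the root user are assumptions, adjust them for your setup):
mkdir -p $HOME/.kube
scp root@master-node:/etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config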
Alternatively, you can run this command from the master node and target the worker node by specifying a nodeSelector or a label, as in the sketch below.
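A minimal sketch (the node name worker-node and the label role=frontend are assumptions; substitute your actual node name from kubectl get nodes):
# label the worker node
kubectl label nodes worker-node role=frontend
# add a matching nodeSelector to the deployment's pod template
kubectl patch deployment nginx-deployment -p '{"spec":{"template":{"spec":{"nodeSelector":{"role":"frontend"}}}}}'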
CodePudding user response:
It looks like you have an issue with your kubeconfig file, as localhost:8080 is the default server kubectl tries to connect to in the absence of this file. Generally, Kubernetes uses this file to store cluster authentication information and a list of contexts to which kubectl refers when running commands - that's why kubectl can't work properly without this file.
To check the presence of the kubeconfig file, enter this command: kubectl config view.
Or just check for a file named config in the $HOME/.kube directory, which is the default location for the kubeconfig file.
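A quick shell check (minimal sketch):
ls -l $HOME/.kube/config
If the file is missing, ls reports No such file or directory, and you need to copy the config over, as below.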
If it is absent, you will need to copy the config file from the master node to your node, e.g.:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo service kubelet restart
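Once the file is in place, verify that kubectl can reach the API server from the worker node:
kubectl get nodes
If the copy worked, this lists both nodes instead of failing with the localhost:8080 error.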
It is also possible to generate the config file manually instead of copying it, as described here.