I was using kubeadm to create my Kubernetes cluster:
sudo kubeadm init --pod-network-cidr=10.10.0.0/16
The message showed that it worked successfully:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
After the initialization, I typed:
kubectl get pods -A
But it showed:
The connection to the server 127.0.0.1:35583 was refused - did you specify the right host or port?
Does anybody know the answer?
CodePudding user response:
kubectl is failing because your user has no kubeconfig yet; the admin credentials that kubeadm generated live in /etc/kubernetes/admin.conf, which only root can read. You can interact with kubectl this way (for now):
sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
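If you want to see why plain kubectl can't use that file, check its permissions (on a default kubeadm install it is typically owned by root and not world-readable; size and date will differ on your machine):
ls -l /etc/kubernetes/admin.conf
# -rw------- 1 root root ... /etc/kubernetes/admin.conf   <- readable by root only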
But it's not an efficient way. kubectl looks for its kubeconfig in three places, so we have three options:
- The file passed with the --kubeconfig flag
- The KUBECONFIG environment variable
- The file located at $HOME/.kube/config
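Concretely, with the admin.conf path from the kubeadm output, the three options look like this (options 1 and 2 need root here, since only root can read admin.conf):
# Option 1: pass the flag on every call
sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
# Option 2: set the environment variable for the current shell (as root)
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
# Option 3: rely on the default location, no flag or variable needed
kubectl get nodes   # reads ~/.kube/config automatically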
Option 1 is not efficient because you have to pass the flag every single time. Option 2 is not efficient either, because it only works in the current shell session. But option 3 is the best one, and it is actually the best practice.
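To see why option 2 doesn't stick, note that the variable dies with the shell (a sketch; ~/.bashrc is an assumption about your login shell):
export KUBECONFIG=/etc/kubernetes/admin.conf   # set in this shell
kubectl get nodes                              # works here (as root)
# Open a new terminal: KUBECONFIG is unset again and kubectl fails.
# You could persist it by appending the export to your shell profile:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc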
- Create the .kube folder:
mkdir -p ~/.kube
- Copy admin.conf to this folder:
sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
- Change the owner of this file to yourself:
sudo chown $(id -u):$(id -g) ~/.kube/config
- Now everything is good, and we don't have to use sudo or --kubeconfig:
kubectl get nodes
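If everything is set up correctly, you should see your node listed. It may report NotReady until you deploy the pod network add-on mentioned in the kubeadm output (the name, age, and version below are placeholders):
kubectl get nodes
# NAME          STATUS     ROLES           AGE   VERSION
# <your-node>   NotReady   control-plane   5m    v1.xx.x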