Entering "kubectl config use-context" for two different clusters at the same time causes issues

Time:12-03

Jenkins jobs A and B run on the same machine but target two different clusters. When the "kubectl config use-context" command is run in both jobs at the same time, they fail with the following error. How can this be handled?

It looks like use-context rewrites the kubeconfig file, and doing that from two jobs at the same time corrupts it.

On job A:

kubectl config use-context arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/ABC
error: error loading config file "/home/ubuntu/.kube/config": yaml: line 29: could not find expected ':'

On job B:

kubectl config use-context arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/CBD
error: error loading config file "/home/ubuntu/.kube/config": yaml: line 29: could not find expected ':'

CodePudding user response:

You don't need to issue a "use-context" at all (which, yes, does write to the $KUBECONFIG file); kubectl has a --context argument that lets you specify the context to use per invocation:

# job A
$ kubectl --context "arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/ABC" get nodes
# job B
$ kubectl --context "arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/CBD" get nodes
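If both jobs drive the same deployment script, one way to avoid hard-coding the flag everywhere is to pass the context in through a variable. A minimal sketch, assuming a hypothetical CLUSTER_CONTEXT environment variable that each Jenkins job exports before calling the script:

# deploy.sh -- shared by job A and job B; CLUSTER_CONTEXT is a hypothetical per-job variable
: "${CLUSTER_CONTEXT:?set CLUSTER_CONTEXT to the cluster's context name (e.g. its EKS ARN)}"

# every call pins its context explicitly, so the shared ~/.kube/config is never rewritten
kubectl --context "$CLUSTER_CONTEXT" get nodes
kubectl --context "$CLUSTER_CONTEXT" get pods -A

Job A would export the .../cluster/ABC ARN and job B the .../cluster/CBD one; both can then run concurrently without ever writing to the shared config.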

However, if you have a lot of those commands, that can get tedious. In that case, you may be happier copying the original kubeconfig and setting the KUBECONFIG env var in the job to point at your own, effectively disposable, copy:

cp ${KUBECONFIG:-$HOME/.kube/config} job-X.kubeconfig
export KUBECONFIG=$PWD/job-X.kubeconfig
# some copies of kubectl whine if the permissions are too broad
chmod 0600 $KUBECONFIG
# now your use-context is safe to perform
kubectl config use-context "arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/ABC"
kubectl get nodes
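Since the copy is throwaway, it's also worth deleting it when the job ends so workspaces don't accumulate stale credentials. A sketch of the same approach using mktemp and an EXIT trap (the file name here is just illustrative):

# make a private, auto-cleaned copy of the kubeconfig for this job
TMP_KUBECONFIG=$(mktemp "${TMPDIR:-/tmp}/job-kubeconfig.XXXXXX")
trap 'rm -f "$TMP_KUBECONFIG"' EXIT
cp "${KUBECONFIG:-$HOME/.kube/config}" "$TMP_KUBECONFIG"
chmod 0600 "$TMP_KUBECONFIG"
export KUBECONFIG=$TMP_KUBECONFIG

# the context switch now only affects this job's private copy
kubectl config use-context "arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/ABC"
kubectl get nodes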