K3s kubeconfig authenticate with token instead of client cert


I set up K3s on a server with:

curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --cluster-init --disable=traefik --write-kubeconfig-mode 644" sh -s -

Then I grabbed the kubeconfig from /etc/rancher/k3s/k3s.yaml and copied it to my local machine so I can interact with the cluster from my machine rather than from the server node I installed K3s on. I also had to swap out the references to 127.0.0.1 and change them to the actual hostname of the server I installed K3s on, but other than that it worked.
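Roughly, that step looked like this (the user and hostname are placeholders for my setup, and the sed syntax assumes GNU sed on Linux):

# Copy the kubeconfig off server 1 (placeholder user/hostname)
scp user@my-cluster-hostname:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Point it at the server's actual hostname instead of 127.0.0.1
sed -i 's/127.0.0.1/my-cluster-hostname/g' ~/.kube/config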

I then hooked up 2 more server nodes to the cluster for a High Availability setup using:

curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --server https://{hostname or IP of server 1}:6443 --disable=traefik --write-kubeconfig-mode 644" sh -s -

Now, back on my local machine, I run kubectl get pods (for example) and that works. But I want a highly available setup, so I placed a TCP load balancer (NGINX, actually) in front of my cluster. Now I am trying to connect to the Kubernetes API through that proxy / load balancer and, unfortunately, since my ~/.kube/config uses a client certificate for authentication, this no longer works: the load balancer / proxy that sits in front of the server cannot pass my client certificate on to the K3s API server.

My ~/.kube/config:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {omitted}
    server: https://my-cluster-hostname:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: {omitted}
    client-key-data: {omitted}

I also took the client cert and key from my kubeconfig, exported them to files, and hit the API server with curl. That works when I hit the server nodes directly but NOT when I go through my proxy / load balancer.
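For anyone who wants to reproduce that check, this is roughly what I did (the file names and the load balancer hostname are placeholders; the "default" names in the jsonpath filters match my kubeconfig above):

# Extract the CA, client cert, and client key from the kubeconfig
kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="default")].cluster.certificate-authority-data}' | base64 --decode > ca.crt
kubectl config view --raw -o jsonpath='{.users[?(@.name=="default")].user.client-certificate-data}' | base64 --decode > client.crt
kubectl config view --raw -o jsonpath='{.users[?(@.name=="default")].user.client-key-data}' | base64 --decode > client.key

# Works when hitting a server node directly ...
curl --cacert ca.crt --cert client.crt --key client.key https://my-cluster-hostname:6443/version

# ... but fails when going through the load balancer (placeholder hostname)
curl --cacert ca.crt --cert client.crt --key client.key https://my-load-balancer:6443/version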

What I would like to do instead of the client certificate approach is to use token authentication, as my proxy would not interfere with that. However, I am not sure how to get such a token. I read the Kubernetes Authenticating guide; specifically, I tried creating a new service account and getting the token associated with it, as described in the Service Account Tokens section, but that also did not work. I also dug through the K3s server configuration options to see if there was any mention of a static token file, etc., but didn't find anything that seemed likely.

Is this some limitation of K3s or am I just doing something wrong (likely)?

My kubectl version output:

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7+k3s1", GitCommit:"ac70570999c566ac3507d2cc17369bb0629c1cc0", GitTreeState:"clean", BuildDate:"2021-11-29T16:40:13Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}

Answer:

I figured out an approach that works for me by reading through the Kubernetes Authenticating Guide in more detail. I settled on the Service Account Tokens approach as it says:

Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well.

My use is for outside the cluster.

First, I created a new ServiceAccount called cluster-admin:

kubectl create serviceaccount cluster-admin

I then created a ClusterRoleBinding to assign cluster-wide permissions to my ServiceAccount (I named this cluster-admin-manual because K3s already had created one called cluster-admin that I didn't want to mess with):

kubectl create clusterrolebinding cluster-admin-manual --clusterrole=cluster-admin --serviceaccount=default:cluster-admin
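As a quick sanity check that the binding took effect, impersonating the ServiceAccount with kubectl auth can-i should now report "yes":

# Check that the ServiceAccount can do everything (impersonation requires admin rights on your current credentials)
kubectl auth can-i '*' '*' --as=system:serviceaccount:default:cluster-admin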

Now you have to get the Secret that was created for you when you created your ServiceAccount:

kubectl get serviceaccount cluster-admin -o yaml

You'll see something like this returned:

apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-12-20T15:55:55Z"
  name: cluster-admin
  namespace: default
  resourceVersion: "3973955"
  uid: 66bab124-8d71-4e5f-9886-0bad0ebd30b2
secrets:
- name: cluster-admin-token-67jtw

Get the Secret content with:

kubectl get secret cluster-admin-token-67jtw -o yaml

In that output you will see the data/token property. This is a base64 encoded JWT bearer token. Decode it with:

echo {base64-encoded-token} | base64 --decode
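Equivalently, the extraction and decoding can be done in one step with a jsonpath query (using the Secret name from above):

# Pull just the token field out of the Secret and decode it
kubectl get secret cluster-admin-token-67jtw -o jsonpath='{.data.token}' | base64 --decode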

Now you have your bearer token and you can add a user to your ~/.kube/config with the following command. You can also paste that JWT into jwt.io to take a look at the properties and make sure you base64 decoded it properly.

kubectl config set-credentials my-cluster-admin --token={token}

Then make sure your existing context in your ~/.kube/config has the user set appropriately (I did this manually by editing my kube config file but there's probably a kubectl config command for it). For example:

- context:
    cluster: my-cluster
    user: my-cluster-admin
  name: my-cluster
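It turns out there is a kubectl config command for that part too; either of these should do the same thing as the manual edit (context and user names are the ones from the example above):

# Point the named context at the token-based user
kubectl config set-context my-cluster --user=my-cluster-admin

# Or change whichever context is currently active
kubectl config set-context --current --user=my-cluster-admin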

My user in the kube config looks like this:

- name: my-cluster-admin
  user:
    token: {token}

Now I can authenticate to the cluster using the token instead of relying on a transport-layer mechanism (TLS with mutual auth), and my proxy / load balancer does not interfere with that.
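As a final check, hitting the API server through the load balancer now works with just the bearer token (the load balancer hostname is a placeholder; ca.crt is the cluster CA exported from the kubeconfig):

# Bearer token auth survives the proxy, unlike the client certificate
curl --cacert ca.crt -H "Authorization: Bearer {token}" https://my-load-balancer:6443/api/v1/namespaces/default/pods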
