I have a pod that queries an AWS service using Boto. The pod runs in a Kubernetes cluster on EKS.
When running on a real cluster, we use a ServiceAccount/Role/RoleBinding to give the pod permission to assume an IAM role.
But when I run it locally using kind, I want it to use the credentials in my ~/.aws directory.
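For context, Boto looks for an INI-style credentials file at ~/.aws/credentials. A minimal illustration of that format, parsed with Python's standard configparser (the profile name and key values below are placeholders, not real credentials):

```python
import configparser
import os
import tempfile

# Placeholder contents mimicking ~/.aws/credentials; not real credentials.
creds = """[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = examplesecretkey
"""

# Write to a temp file to stand in for ~/.aws/credentials.
path = os.path.join(tempfile.mkdtemp(), "credentials")
with open(path, "w") as f:
    f.write(creds)

# Boto parses this same INI structure: sections are profiles.
parser = configparser.ConfigParser()
parser.read(path)
print(parser["default"]["aws_access_key_id"])  # AKIAEXAMPLE
```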
So I mount the volumes as follows:
volumes:
- hostPath:
    path: /var/run/datadog
    type: ""
  name: dsdsocket
- hostPath:
    path: /Users/me/.aws
    type: DirectoryOrCreate
  name: aws
And use them in the pod as follows:
volumeMounts:
- mountPath: /var/run/datadog
  name: dsdsocket
  readOnly: true
- mountPath: /root/.aws
  name: aws
  readOnly: true
I have checked that there are credentials in ~/.aws/credentials.
But the directory just shows up as empty inside the pod:
root@the_pod:/app# ls -al /root/.aws
total 8
drwxr-xr-x 2 root root 4096 Apr 12 19:33 .
drwx------ 1 root root 4096 May 9 17:22 ..
NOTE: I have tried mounting the actual credentials file at ~/.aws/credentials too, but it doesn't mount either.
Any ideas what I am doing wrong?
CodePudding user response:
You need to use extraMounts when creating the kind cluster. kind nodes are themselves containers, so a hostPath volume in a pod refers to the node container's filesystem, not your machine's. extraMounts passes storage through from the host to a kind node, for persisting data, sharing code into the cluster, etc.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: /path/to/my/files/
    containerPath: /files
    # optional: if set, the mount is read-only.
    # default false
    readOnly: true
    propagation: HostToContainer
Creating a kind cluster with a custom config file:
$ kind create cluster --config=kind-config.yaml
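Applied to this question, a sketch of how the two configs could fit together; the node-side path /aws-creds is an illustrative choice, not required:

```yaml
# kind-config.yaml: pass the Mac's ~/.aws through to the kind node
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /Users/me/.aws
    containerPath: /aws-creds
    readOnly: true
```

The pod's hostPath volume must then reference the node-side path, not the path on the Mac:

```yaml
volumes:
- hostPath:
    path: /aws-creds
    type: Directory
  name: aws
```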
CodePudding user response:
Your kind cluster is likely a single node, so the pod and the host-mounted files share one filesystem and the mount can appear to work there. On EKS, however, your Pod might be scheduled on a different node than the one where your credentials file exists.