Populating a Container's environment values with a mounted ConfigMap in Kubernetes


I'm currently learning Kubernetes and recently learnt about using ConfigMaps for a Container's environment variables.

Let's say I have the following simple ConfigMap:

apiVersion: v1
data:
  MYSQL_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mycm

I know that a container in a Deployment can consume this as an environment variable via:

kubectl set env deployment mydb --from=configmap/mycm

or by specifying it manually in the manifest like so:

containers:
- env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      configMapKeyRef:
        key: MYSQL_ROOT_PASSWORD
        name: mycm
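
(For completeness: as far as I know, all keys in a ConfigMap can also be injected at once with envFrom, but that is still a one-time snapshot taken when the Pod starts, so it has the same drawback. A minimal sketch, assuming the same mycm ConfigMap:

containers:
- name: mariadb
  image: mariadb
  envFrom:
  - configMapRef:
      name: mycm
)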

However, this isn't what I am after, since I'd have to manually update the environment variables each time the ConfigMap changes.

I am aware that mounting a ConfigMap as a volume in the Pod allows the mounted values to be auto-updated when the ConfigMap changes. I'm currently trying to find a way to set a Container's environment variables to the values stored in the mounted ConfigMap.

So far I have the following YAML manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydb
  name: mydb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      containers:
      - image: mariadb
        name: mariadb
        resources: {}
        args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
        env:
          - name: MYSQL_ROOT_PASSWORD
            value: temp
      volumes:
      - name: config-volume
        configMap:
          name: mycm
status: {}

I'm attempting to set MYSQL_ROOT_PASSWORD to some temporary value and then update it to the mounted value as soon as the container starts, via args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"].

As I somewhat expected, this didn't work, resulting in the following error:

/usr/local/bin/docker-entrypoint.sh: line 539: /export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD): No such file or directory

I assume this is because the volume is mounted after the entrypoint runs. I tried adding a readiness probe to wait for the mount, but this didn't work either:

readinessProbe:
  exec:
    command: ["sh", "-c", "test -f /etc/config/MYSQL_ROOT_PASSWORD"]
  initialDelaySeconds: 5
  periodSeconds: 5

Is there any easy way to achieve what I'm trying to do, or is it impossible?

CodePudding user response:

So I managed to find a solution, with a lot of inspiration from this answer.

Essentially, what I did was create a sidecar container based on the alpine/k8s image that mounts the ConfigMap and constantly watches it for changes, since Kubernetes automatically updates a mounted ConfigMap when the ConfigMap object changes. This required the following script, watch_passwd.sh, which uses inotifywait to watch for changes and then uses kubectl to roll out the changes accordingly:

#!/bin/sh
# (Re)create the Secret from the value currently mounted from the ConfigMap.
update_passwd() {
    kubectl delete secret mysql-root-passwd > /dev/null 2>&1
    kubectl create secret generic mysql-root-passwd --from-file=/etc/config/MYSQL_ROOT_PASSWORD
}

# Create the Secret once on startup.
update_passwd

# Whenever the mounted file changes, refresh the Secret and restart
# the Deployment whose name is passed as the first argument.
while true
do
    inotifywait -e modify "/etc/config/MYSQL_ROOT_PASSWORD"
    update_passwd
    kubectl rollout restart deployment "$1"
done
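
For reference, one quick way to check that the Secret actually picked up a new value (assuming the default namespace) is:

kubectl get secret mysql-root-passwd -o jsonpath='{.data.MYSQL_ROOT_PASSWORD}' | base64 -d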

The Dockerfile is then:

FROM docker.io/alpine/k8s:1.25.6
RUN apk update && apk add inotify-tools
COPY watch_passwd.sh .
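
Building it locally is just something like the following (assuming Docker as the builder and a cluster that can see locally built images):

docker build -t mysidecar .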

After building the image (locally in this case) as mysidecar, I create the ServiceAccount, Role, and RoleBinding outlined here, adding rules for deployments so that they can be restarted by the sidecar.

After this, I piece it all together to create the following YAML manifest (note that imagePullPolicy is set to Never, since I created the image locally):

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydb
  name: mydb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      serviceAccountName: secretmaker
      containers:
      - image: mysidecar
        name: mysidecar
        imagePullPolicy: Never
        command:
          - /bin/sh
          - -c
          - |
            ./watch_passwd.sh $(DEPLOYMENT_NAME)
        env:
          - name: DEPLOYMENT_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.labels['app']
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      - image: mariadb
        name: mariadb
        resources: {}
        envFrom:
        - secretRef:
            name: mysql-root-passwd
      volumes:
      - name: config-volume
        configMap:
          name: mycm
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secretmaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: mydb
  name: secretmaker
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "delete", "list"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: mydb
  name: secretmaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secretmaker
subjects:
- kind: ServiceAccount
  name: secretmaker
  namespace: default
---

It all works as expected! Hopefully this helps someone out in the future. Also, if anybody comes across this and has a better solution, please feel free to let me know :)
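
For anyone trying this out, a rough end-to-end test (assuming the default namespace) is to change the password in the ConfigMap and watch the Deployment roll:

kubectl patch configmap mycm --type merge -p '{"data":{"MYSQL_ROOT_PASSWORD":"newpassword"}}'
kubectl rollout status deployment mydb

Keep in mind that the kubelet can take up to a minute or so to refresh the mounted copy of the ConfigMap, so the restart is not instantaneous.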
