Write Environment Variable to Local File using Kubernetes


Is it possible to write an existing environment variable into a file from a Kubernetes deployment.yaml file?

The background: I've already parsed a json containing secrets. Now, I'd like to store that secret in a local file.

So far, I've tried something like this:

      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo $PRIVATE_KEY > /var/private.key"]

(I've set up /var/ as a writable emptyDir volume.)

Or perhaps there is a completely different way to do this, such as storing the secret in its own, separate Secret?

CodePudding user response:

Rather than using postStart, I'd suggest you use an init container: the postStart hook doesn't guarantee that it will run before the container's ENTRYPOINT.

You can define your environment variables in your deployment manifest by setting static values or by referencing a ConfigMap or Secret. Your init container then runs a shell script that writes the content of each variable to a file on a shared volume.
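A minimal sketch of that approach (the Secret name `my-secret`, the key `private-key`, and the mount paths are assumptions, not values from the question):

```yaml
spec:
  volumes:
  - name: secrets-dir
    emptyDir: {}            # shared scratch volume, lives as long as the pod
  initContainers:
  - name: write-secret
    image: alpine
    env:
    - name: PRIVATE_KEY
      valueFrom:
        secretKeyRef:
          name: my-secret   # assumed Secret name
          key: private-key  # assumed key inside that Secret
    # Write the env var to a file before the main container starts
    command: ["/bin/sh", "-c", 'echo "$PRIVATE_KEY" > /mnt/secrets/private.key']
    volumeMounts:
    - name: secrets-dir
      mountPath: /mnt/secrets
  containers:
  - name: app
    image: alpine           # replace with your application image
    volumeMounts:
    - name: secrets-dir
      mountPath: /mnt/secrets
```

Because init containers must complete before the main containers start, the file is guaranteed to exist when your application reads it.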

A second approach would be to mount a ConfigMap as a volume inside your pod, e.g.:

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never

That would create two files inside /etc/config, named after the keys defined in your ConfigMap, each containing the corresponding value.
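A Secret can be mounted as a volume in exactly the same way, which also covers the idea of storing the key in its own, separate Secret: each key in the Secret becomes a file, with no postStart hook or init container needed. A sketch, assuming a Secret named `my-secret` with a `private.key` entry:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-pod
spec:
  containers:
  - name: app
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/secrets/" ]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret  # assumed Secret containing a private.key entry
```

The kubelet keeps the projected files up to date if the Secret changes, which an init-container copy does not.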

CodePudding user response:

Usually, when we need to read secrets from a secret manager, we use an init container together with an emptyDir volume shared between the containers in the pod: the init container writes the secrets to the volume, and the other containers read them from there. This lets the init container use a different Docker image that bundles the secret-manager dependencies and credentials, so you don't have to install those dependencies, or provide the credentials, in the main container:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  initContainers:
  - name: init-container
    image: alpine
    command:
    - /bin/sh
    - -c
    - 'echo "test_value" > /mnt/volume/var.txt'
    volumeMounts:
    - mountPath: /mnt/volume
      name: shared-storage
  containers:
  - image: alpine
    name: test-container
    command:
    - /bin/sh
    - -c
    - 'READ_VAR=$(cat /mnt/volume/var.txt) && echo "main_container: ${READ_VAR}"'
    volumeMounts:
    - mountPath: /mnt/volume
      name: shared-storage
  volumes:
  - name: shared-storage
    emptyDir: {}

Here is the log:

$ kubectl logs test-pd
main_container: test_value