Reading values from /mnt/secrets-store/ after integrating AKV with AKS using the CSI Driver


I have AKV integrated with AKS using the CSI driver (documentation).

I can access the secrets in the Pod by doing something like:

## show secrets held in secrets-store
kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/

## print a test secret 'ExampleSecret' held in secrets-store
kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/ExampleSecret

I have it working with my PostgreSQL deployment doing the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-prod
  namespace: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
        aadpodidbinding: aks-akv-identity
    spec:
      containers:
        - name: postgres
          image: postgres:13-alpine
          ports:
            - containerPort: 5432
          env: 
            - name: POSTGRES_DB_FILE
              value: /mnt/secrets-store/PG-DATABASE
            - name: POSTGRES_USER_FILE
              value: /mnt/secrets-store/PG-USER
            - name: POSTGRES_PASSWORD_FILE
              value: /mnt/secrets-store/PG-PASSWORD
            - name: POSTGRES_INITDB_ARGS
              value: "-A md5"
            - name: PGDATA
              value: /var/postgresql/data
          volumeMounts:
          - name: postgres-storage-prod
            mountPath: /var/postgresql
          - name: secrets-store01-inline
            mountPath: /mnt/secrets-store
            readOnly: true
      volumes:
        - name: postgres-storage-prod
          persistentVolumeClaim:
            claimName: postgres-storage-prod
        - name: file-storage-prod
          persistentVolumeClaim:
            claimName: file-storage-prod
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aks-akv-secret-provider
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-prod
  namespace: prod
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432

Which works fine.

Figured all I'd need to do is swap out stuff like the following:

- name: PGPASSWORD
  valueFrom:
    secretKeyRef:
      name: app-prod-secrets
      key: PGPASSWORD

For:

- name: POSTGRES_PASSWORD
  value: /mnt/secrets-store/PG-PASSWORD

# or
- name: POSTGRES_PASSWORD_FILE
  value: /mnt/secrets-store/PG-PASSWORD

And I'd be golden, but that does not turn out to be the case.

In the Pods the value is read in as a plain string (the path itself), which leaves me confused about two things:

  1. Why does this work for the PostgreSQL deployment but not my Django API, for example?
  2. Is there a way to add them in env: without turning them into secrets and using secretKeyRef?

CodePudding user response:

The CSI Driver injects the secrets into the pod by placing them as files on the file system. There will be one file per secret, where

  • The filename is the name of the secret (or the alias specified in the secret provider class)
  • The content of the file is the value of the secret.

The CSI driver does not create environment variables from the secrets. The recommended way to expose secrets as environment variables is to let the CSI driver create a Kubernetes secret and then use the native secretKeyRef construct.
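As an illustration (not part of the original answer), a rough sketch of that setup, assuming pod identity (to match the aadpodidbinding label above) and placeholder key vault / tenant values; the exact apiVersion and parameters depend on the driver and Azure provider versions you run. The SecretProviderClass mirrors the mounted object into a regular Kubernetes secret via secretObjects:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aks-akv-secret-provider
  namespace: prod
spec:
  provider: azure
  # secretObjects tells the driver to sync mounted objects into a
  # regular Kubernetes secret once a pod mounts the CSI volume
  secretObjects:
    - secretName: app-prod-secrets
      type: Opaque
      data:
        - objectName: PG-PASSWORD        # file name under /mnt/secrets-store
          key: PGPASSWORD                # key in the Kubernetes secret
  parameters:
    usePodIdentity: "true"               # assumes aad-pod-identity is in use
    keyvaultName: "<your-keyvault-name>" # placeholder
    tenantId: "<your-tenant-id>"         # placeholder
    objects: |
      array:
        - |
          objectName: PG-PASSWORD
          objectType: secret

The Deployment can then use the familiar construct:

env:
  - name: PGPASSWORD
    valueFrom:
      secretKeyRef:
        name: app-prod-secrets
        key: PGPASSWORD

Note that the synced Kubernetes secret is only created while at least one pod actually mounts the CSI volume.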

Why does this work for the PostgreSQL deployment but not my Django API, for example?

In your Django API app you set an environment variable POSTGRES_PASSWORD to the value /mnt/secrets-store/PG-PASSWORD, i.e. you simply say that a certain variable should contain a certain string, nothing more. Thus the variable will contain the path, not the secret value itself.

The same is true for the Postgres deployment: it is just a path in an environment variable. The difference lies in how the Postgres image interprets the value. When an environment variable ending in _FILE is used, Postgres does not expect the variable itself to contain the secret, but rather a path to a file that does. From the docs of the Postgres image:

As an alternative to passing sensitive information via environment variables, _FILE may be appended to some of the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in /run/secrets/<secret_name> files. For example:

$ docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres

Currently, this is only supported for POSTGRES_INITDB_ARGS, POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.

Is there a way to add them in env: without turning them into secrets and using secretKeyRef? No, not out of the box. What you could do is have an entrypoint script in your image that reads all the files in the secrets folder and sets them as environment variables (the name of each variable being the filename and the value being the file content) before it starts the main application. That way the application can access the secrets as environment variables; a minimal sketch of such a wrapper follows.
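A rough sketch of such a wrapper entrypoint (an illustration, not from the original answer), assuming the secrets are mounted at /mnt/secrets-store and the image has a POSIX shell; hyphens in the file names are translated to underscores so the resulting variable names are valid:

#!/bin/sh
# Export every file under the secrets mount as an environment variable,
# then hand over to the real command (e.g. gunicorn, manage.py, ...).
for f in /mnt/secrets-store/*; do
  [ -f "$f" ] || continue
  name=$(basename "$f" | tr '-' '_')   # e.g. PG-PASSWORD -> PG_PASSWORD
  export "$name"="$(cat "$f")"
done

exec "$@"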
