Secret is not being created in AKS after fetching it with the CSI Driver


Following this document, https://docs.microsoft.com/en-us/azure/aks/csi-secrets-store-nginx-tls, I'm trying to fetch TLS secrets from AKV into AKS pods. Initially I created and configured the CSI driver using a user-assigned managed identity.

I have performed the following steps:

  • Created an AKS cluster with one node pool.
  • Created an Azure Key Vault (AKV).
  • Created a user-assigned managed identity and assigned it to the node pool, i.e. to the VMSS created for AKS.
  • Installed the Secrets Store CSI driver Helm chart in the cluster's "kube-system" namespace and completed all the prerequisites for this operation.
  • Created a TLS certificate and key.
  • Created a .pfx file from the TLS certificate and key.
  • Uploaded that .pfx file to the AKV certificates under the name "ingresscert". (A rough command sketch of these preparation steps is shown after the SecretProviderClass manifest below.)
  • Created a new namespace in AKS named "ingress-test".
  • Deployed the following SecretProviderClass in that namespace:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-tls
spec:
  provider: azure
  secretObjects:                            # secretObjects defines the desired state of synced K8s secret objects
  - secretName: ingress-tls-csi
    type: kubernetes.io/tls
    data: 
    - objectName: ingresscert
      key: tls.key
    - objectName: ingresscert
      key: tls.crt
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "7*******-****-****-****-***********1"
    keyvaultName: "*****-*****-kv"                 # the name of the AKV instance
    objects: |
      array:
        - |
          objectName: ingresscert
          objectType: secret
    tenantId: "e*******-****-****-****-***********f"                    # the tenant ID of the AKV instance
  • Deployed the nginx-ingress-controller Helm chart in the same namespace, so the certificate can be bound to the application (a sketch of such an install is shown after the BusyBox manifest below).
  • Deployed the following BusyBox deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-one
  labels:
    app: busybox-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-one
  template:
    metadata:
      labels:
        app: busybox-one
    spec:
      containers:
        - name: busybox
          image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
          command:
            - "/bin/sleep"
            - "10000"
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-tls"
---
apiVersion: v1
kind: Service
metadata:
  name: busybox-one
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: busybox-one
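The ingress controller installation mentioned above is not shown in the post. A minimal sketch, assuming the community ingress-nginx chart and that the controller should mount the same SecretProviderClass so the certificate is pulled from Key Vault:

# Chart repo and release name are assumptions, not from the original post
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-test \
  --set controller.extraVolumes[0].name=secrets-store-inline \
  --set controller.extraVolumes[0].csi.driver=secrets-store.csi.k8s.io \
  --set controller.extraVolumes[0].csi.readOnly=true \
  --set controller.extraVolumes[0].csi.volumeAttributes.secretProviderClass=azure-tls \
  --set controller.extraVolumeMounts[0].name=secrets-store-inline \
  --set controller.extraVolumeMounts[0].mountPath=/mnt/secrets-store \
  --set controller.extraVolumeMounts[0].readOnly=true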
  • Checked whether the secret was created by using the command:
kubectl get secret -n <namespaceName>

One thing to note here: if I attach a shell to the BusyBox pod and go to the mount path I configured for the secrets, I can see that the secrets have been fetched successfully. But these secrets do not appear in the cluster's secret list.
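For completeness, the two checks look roughly like this (namespace and names taken from the manifests above):

# Files mounted by the CSI driver - visible as soon as the pod is running
kubectl exec -n ingress-test deploy/busybox-one -- ls /mnt/secrets-store

# The synced Kubernetes secret declared under secretObjects; it is only
# created after a pod that mounts the SecretProviderClass has started
kubectl get secret ingress-tls-csi -n ingress-test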

I have troubleshot the AKS cluster, the Key Vault, and the manifest files but have not found anything. If there is anything I have missed, or if anyone has a solution for this, please let me know.

Thanks in advance!

CodePudding user response:

Your config looks good to me. One thing to consider: the user-assigned managed identity should not be the one you created for AKS; it should be the managed identity of your node pool (kubelet), and it also needs permission on the AKV.

I had the same issue while using the wrong managed identity.

userAssignedIdentityID = kubelet client ID (node pool managed identity)

AZ CLI

export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export AKV_ID=$(az keyvault show -g <resource group> -n <akv name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "Key Vault Secrets Officer" --scope $AKV_ID
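Note that the role assignment above uses the kubelet identity's object ID, while userAssignedIdentityID in the SecretProviderClass expects its client ID. A way to look that up (sketch, same placeholders as above):

# Client ID of the kubelet (node pool) identity - use this value for
# userAssignedIdentityID in the SecretProviderClass
az aks show -g <resource group> -n <aks cluster name> \
  --query identityProfile.kubeletidentity.clientId -o tsv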

CodePudding user response:

I added this as a new answer because the formatting was bad in the comments:

As you are using the Helm chart, you have to enable secret syncing in the chart's values.yaml:

  syncSecret:
    enabled: true
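Equivalently, the setting can be applied on the command line. This is a sketch and assumes a release named "csi"; if the driver was installed via the Azure provider chart, the value is nested under the secrets-store-csi-driver sub-chart key:

helm upgrade csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure \
  --namespace kube-system \
  --set secrets-store-csi-driver.syncSecret.enabled=true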