How can I set a local volume using the MongoDB chart in k8s?

Time:09-21

I want to deploy a MongoDB chart using Helm in my local dev environment. I found all the possible values on Bitnami, but it's overwhelming! How can I configure something like this:

template:
  metadata:
    labels:
      app: mongodb
  spec:
    containers:
    - name: mongodb
      image: mongo
      ports:
      - containerPort: 27017
      volumeMounts:
      - name: mongo-data
        mountPath: /data/db/
    volumes:
    - name: mongo-data
      hostPath:
        path: /app/db

using a values.yaml configuration file?

CodePudding user response:

The best approach here is to deploy something like the Bitnami MongoDB chart that you reference in the question with its default options:

helm install mongodb bitnami/mongodb

The chart will create a PersistentVolumeClaim for you, and a standard piece of Kubernetes called the persistent volume provisioner will create the corresponding PersistentVolume. The actual storage will be "somewhere inside Kubernetes", but for database storage there's little you can do with the actual files directly, so this isn't usually a practical problem.

If you can't use this approach, then you need to create the storage manually and tell the chart to use it. Create a matched PersistentVolume and PersistentVolumeClaim pair, for example as shown at the start of Kubernetes Persistent Volume and hostpath, and submit them with kubectl apply -f pv-pvc.yaml. Then tell the Bitnami chart about that PersistentVolumeClaim:

helm install mongodb bitnami/mongodb \
  --set persistence.existingClaim=your-pvc-name
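
The manually created pv-pvc.yaml mentioned above might look like the following sketch. The names, the 8Gi size, and the `manual` storage class are assumptions; the claim name matches the `--set` flag above:

```yaml
# pv-pvc.yaml -- a hostPath-backed PV and a claim that binds to it (sketch)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-data-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual   # keeps the dynamic provisioner from interfering
  hostPath:
    path: /app/db
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: your-pvc-name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual   # must match the PV so the claim binds to it
  resources:
    requests:
      storage: 8Gi
```

The storage class, access modes, and requested size all have to be compatible between the two objects for the claim to bind.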

I'd avoid this sequence in a non-development environment. The cluster should normally have a persistent volume provisioner set up and so you shouldn't need to manually create PersistentVolumes, and host-path volumes are unreliable in multi-node environments (they refer to a fixed path on whichever node the pod happens to be running on, so data can get misplaced if a pod is rescheduled on a different node).
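
Since the question asks for a values.yaml-based setup: the same `--set` flag can be expressed in a values file instead, using the chart's `persistence.existingClaim` key (a sketch; the claim name is whatever you created above):

```yaml
# values.yaml -- equivalent to --set persistence.existingClaim=your-pvc-name
persistence:
  existingClaim: your-pvc-name
```

Then install with `helm install mongodb bitnami/mongodb -f values.yaml`.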

CodePudding user response:

You first need to create a PersistentVolumeClaim; a PersistentVolume will then be provisioned only when it is needed by a specific deployment (here, your MongoDB Helm chart):

kubectl -n $NAMESPACE apply -f persistent-volume-claim.yaml

For example (or see https://kubernetes.io/docs/concepts/storage/persistent-volumes/):

#persistent-volume-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: default
  resources:
    requests:
      storage: 10Gi

Check that your volume was created:

kubectl -n $NAMESPACE get pv

Now, even if you delete your MongoDB deployment, the volume will persist and can be accessed by any other deployment.
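
To complete the steps above, point the Bitnami chart at this claim so it reuses it instead of creating its own; the `persistence.existingClaim` key is documented in the chart's values (a sketch):

```yaml
# values.yaml -- reuse the claim created above
persistence:
  existingClaim: mongo-data
```

Install with `helm install -n $NAMESPACE mongodb bitnami/mongodb -f values.yaml`.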
