This is the MongoDB YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db/auth"
              name: auth-db-storage
      volumes:
        - name: auth-db-storage
          persistentVolumeClaim:
            claimName: mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
And this is the persistent volume file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: mongo
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/db"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: mongo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
I'm running this on Ubuntu using kubectl and minikube v1.25.1.
When I run kubectl describe pod, I see this on the MongoDB pod.
Volumes:
  auth-db-storage:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   mongo-pvc
    ReadOnly:    false
I have a similar setup for other pods to store files, and it's working fine. But with MongoDB, every time I restart the pods, the data is lost. Can someone help me?
EDIT: I noticed that if I change the MongoDB mountPath to "/data/db", it works fine. But if I have multiple MongoDB pods running on "/data/db", they don't work. So I need to have one persistent volume claim for EACH MongoDB pod?
CodePudding user response:
I noticed that if I change the MongoDB mountPath to "/data/db", it works fine. But if I have multiple MongoDB pods running on "/data/db", they don't work. So I need to have one persistent volume claim for EACH MongoDB pod?
- When you scale a Deployment to more than one replica, all the replicas use the same PVC. So, since you are using a Deployment, you do not need a separate PVC for each replica.
- When you are using a PVC with the "ReadWriteOnce" access mode, all the replicas must run on the same node; only then will all of your pods be able to access the PVC. You can achieve this with a nodeSelector (simple solution) in the Deployment, or with node/pod affinity/anti-affinity (more flexible solution), so that all the replicas get scheduled on the same node; see the sketch below.
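For example, a minimal sketch of pinning the Deployment's pods to one node with a nodeSelector. The node label value "minikube" is an assumption (it is the default node name on minikube); check yours with kubectl get nodes --show-labels:

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: minikube   # assumed node name; replace with your node's hostname label
      containers:
        - name: auth-mongo
          image: mongo

With this in place, every replica of the Deployment lands on the same node, so a ReadWriteOnce hostPath-backed PVC can be mounted by all of them.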
CodePudding user response:
When using these YAML files, you are mounting the /data/db dir on the minikube node to /data/db/auth in the auth-mongo pods. First, you should change /data/db/auth to /data/db in your k8s Deployment so that your MongoDB can read the database from the default db location.
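A minimal sketch of the corrected container spec (only the mountPath changes; everything else stays as in your Deployment):

      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db"   # MongoDB's default dbPath
              name: auth-db-storage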
Even if you delete the Deployment, the db will stay in the '/data/db' dir on the minikube node, and after a new pod from this Deployment starts, MongoDB will open this existing db (all data preserved).
Second, you can't run multiple MongoDB pods like this by just scaling the replicas in the Deployment, because the MongoDB instance in the second pod can't open a db directory that is already in use by the first pod. MongoDB will throw this error:
Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory
So, the solution is either to use only 1 replica in your Deployment or, for example, to use the MongoDB packaged by Bitnami Helm chart: https://github.com/bitnami/charts/tree/master/bitnami/mongodb
This chart bootstraps a MongoDB(®) deployment on a Kubernetes cluster using the Helm package manager.
$ helm install my-release bitnami/mongodb --set architecture=replicaset --set replicaCount=2
Understand MongoDB Architecture Options.
Also, check this link: MongoDB Community Kubernetes Operator.
This is a Kubernetes Operator which deploys MongoDB Community into Kubernetes clusters.
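As a rough sketch of what that looks like (assuming the operator and its MongoDBCommunity CRD are already installed; the resource name, MongoDB version, and secret name below are placeholders, not values from your setup):

apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: example-mongodb        # placeholder name
spec:
  members: 3                   # replica set members; each gets its own volume
  type: ReplicaSet
  version: "6.0.5"             # placeholder MongoDB version
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: my-user
      db: admin
      passwordSecretRef:
        name: my-user-password # Secret you create with the user's password
      roles:
        - name: clusterAdmin
          db: admin
      scramCredentialsSecretName: my-scram

Under the hood the operator manages a StatefulSet, so each replica set member gets its own PersistentVolumeClaim instead of all pods sharing one, which is what makes multiple MongoDB pods work where scaling a plain Deployment does not.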