I created a PersistentVolumeClaim on Kubernetes to persist MongoDB data, but after restarting the deployment I found that the data is gone, even though the PVC is still in the Bound state.
Here is my YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
I also created a ClusterIP service for the deployment.
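It is roughly the following (the service name below is just illustrative):

apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv     # illustrative name
spec:
  type: ClusterIP
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017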
CodePudding user response:
First off, if the PVC status is still Bound and the desired pod happens to start on another node, it will fail because the PV can't be mounted into the pod. This happens because of the reclaimPolicy: Retain of the StorageClass (it can also be set on the PV directly via persistentVolumeReclaimPolicy: Retain). To fix this, you have to manually overwrite/delete the claimRef of the PV. Use kubectl patch pv PV_NAME -p '{"spec":{"claimRef": null}}' to do this; after doing so the PV's status should be Available.
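For reference, a PV that behaves like this typically looks something like the sketch below (the name, capacity and hostPath are assumptions for illustration); the claimRef block is what keeps the volume tied to the old claim and what the patch above clears:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-mongo-pv                    # assumed name
spec:
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # keeps the PV (and its data) after the claim is gone
  hostPath:                              # assuming a hostPath volume; it is node-local, hence the scheduling issue
    path: /mnt/data/auth-mongo
  claimRef:                              # clearing this with the patch above makes the PV Available again
    namespace: default
    name: auth-mongo-pvc

After patching, kubectl get pv should show the PV's status as Available.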
To see whether your application writes any data to the desired path, run your application, exec into it (kubectl -n NAMESPACE exec -it POD_NAME -- /bin/sh), and check /data/db. You could also create a file with some random text, restart your application, and check again.
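Roughly like this (namespace and pod names are placeholders):

# open a shell in the running pod
kubectl -n NAMESPACE exec -it POD_NAME -- /bin/sh

# inside the pod: check the mounted path and leave a marker file
ls /data/db
echo "persistence test" > /data/db/persistence-test.txt
exit

# restart the deployment and check whether the marker file survived
kubectl -n NAMESPACE rollout restart deployment auth-mongo-depl
kubectl -n NAMESPACE exec -it NEW_POD_NAME -- ls /data/db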
I'm fairly certain that if your PV isn't being recreated every time your application starts (which shouldn't be the case, because of Retain), then it's highly likely that your application isn't writing to the path specified. But you could also share your PersistentVolume config with us, as there might be some misconfiguration there as well.
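You can dump the PV and PVC config with something like:

kubectl get pv                                    # find the PV bound to your claim
kubectl get pv PV_NAME -o yaml                    # full PV spec, including reclaim policy and claimRef
kubectl -n NAMESPACE describe pvc auth-mongo-pvc  # events often show binding/provisioning problems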