Something happened to my MongoDB replica set in a Kubernetes cluster after the primary restarted. I deployed the replica set with the Bitnami Helm chart a month ago and it worked fine until now. I have the following setup:
- mongo-rs-0 (primary)
- mongo-rs-1 (secondary)
- arbiter
Authentication is enabled in this installation. But today something happened: the configuration changed on one of the replicas and the arbiter, and the secondary has somehow been detached from my replica set. I checked the data path on the secondary and it holds 505 MB, while the primary has 25 GB of data. Can I just re-add the detached mongo instance to the replica set?
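To see exactly which members the set still knows about, you can run `rs.status()` from the primary. A minimal sketch (the pod name, the `root` user, and the `$MONGODB_ROOT_PASSWORD` variable are assumptions based on the setup described; adjust to your release):

```shell
# List each replica set member and its state (PRIMARY, SECONDARY,
# ARBITER, or an error state) from a mongosh session on the primary
kubectl exec -it mongo-rs-0 -- mongosh -u root -p "$MONGODB_ROOT_PASSWORD" \
  --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'
```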
CodePudding user response:
Since you have this deployed on Kubernetes you can do the following:
- Delete the PVC claimed by `mongo-rs-1`. You can find it by describing the pod (`kubectl describe po mongo-rs-1`); the volume name is listed under Volumes with type PersistentVolumeClaim.
- Delete the pod `mongo-rs-1` and wait a little for the PVC to be recreated. When the pod is rescheduled it may not find the PVC, because the PVC was created after the pod was scheduled; in that case delete the pod once more so it binds to the new claim.

With an empty data directory the secondary will perform an initial sync from the primary and rejoin the replica set.
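The steps above can be sketched as shell commands. The PVC name `datadir-mongo-rs-1` is an assumption (the Bitnami chart derives it from the volume claim template and pod name); confirm it first with `kubectl get pvc` or in the Volumes section of the describe output:

```shell
# Find the PVC bound to the broken secondary; the claim name appears
# under Volumes with type PersistentVolumeClaim
kubectl describe po mongo-rs-1

# Delete the claim and the pod; the StatefulSet recreates both
kubectl delete pvc datadir-mongo-rs-1
kubectl delete po mongo-rs-1

# If the rescheduled pod cannot bind the freshly created PVC,
# delete it once more so it picks up the new claim
kubectl delete po mongo-rs-1
```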