I'm starting out with K8s and I'm stuck setting up MongoDB in replica set mode with a local persistent volume. I'm using a StorageClass, a PersistentVolume and a PersistentVolumeClaim.
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
mongo-pv   1Gi        RWO            Retain           Available           mongo-storage            24m
but when I inspect the pod I get
Type     Reason            Age   From               Message
----     ------            ----  ----               -------
Warning  FailedScheduling  2m    default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
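To dig into why binding fails, these are the kubectl commands I've been using to inspect the claim and the volume (resource names are from my setup):

kubectl get pvc mongo-pvc        # stays Pending until a pod using it is scheduled (WaitForFirstConsumer)
kubectl describe pvc mongo-pvc   # the Events section usually says why binding is not happening
kubectl describe pv mongo-pv     # shows the Node Affinity the scheduler evaluates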
This post's answer https://stackoverflow.com/a/70069138/2704032 confirmed my suspicion that I might be using the wrong label. So I had a look at the PV and saw that, with nodeAffinity set as
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernets.io/hostname
            operator: In
            values:
              - docker-desktop
it's looking for
Node Affinity:
  Required Terms:
    Term 0:  kubernets.io/hostname in [docker-desktop]
I checked the nodes with kubectl get nodes --show-labels
and the node does have that label, as the output shows
NAME STATUS ROLES AGE VERSION LABELS
docker-desktop Ready control-plane 7d9h v1.24.1 beta.kubernetes.io/arch=arm64,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm64,kubernetes.io/hostname=docker-desktop,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
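A quicker way to check just that one label (same information, less output):

kubectl get nodes -L kubernetes.io/hostname   # adds a column with that label's value for each node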
I tried using volumeClaimTemplates in the StatefulSet, as
volumeClaimTemplates:
  - metadata:
      name: mongo-vctemplate
    spec:
      storageClassName: mongo-storage
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
but it didn't make a difference. I also tried specifying the PVC in the PV with the claimRef parameter, but the same insidious error still comes up at pod creation.
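For reference, the claimRef I tried was roughly this (namespace assumed to be default, since I haven't set one):

claimRef:
  name: mongo-pvc
  namespace: default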
What else can I check, or what else do I need to set up? Many thanks as usual. Here are my YAML files:
StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongo-storage
provisioner: kubernetes.io/no-provisioner
# volumeBindingMode: Immediate
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
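(As I understand it, with WaitForFirstConsumer the PVC should sit in Pending until a pod that uses it is scheduled; I've been verifying what's actually applied with:)

kubectl get storageclass mongo-storage   # should list kubernetes.io/no-provisioner as the provisioner and WaitForFirstConsumer as the binding mode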
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  # persistentVolumeReclaimPolicy: Retain # prod
  persistentVolumeReclaimPolicy: Delete # local tests
  storageClassName: mongo-storage
  # claimRef:
  #   name: mongo-pvc
  accessModes:
    - ReadWriteOnce
  # volumeMode: Filesystem # default if omitted
  # hostPath:
  #   path: /mnt/data
  local:
    path: /mnt/disk/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernets.io/hostname
              operator: In
              values:
                - docker-desktop
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: mongo-storage
  # volumeName: mongo-pv # this will make it unbindable???
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-statefulset
spec:
  selector:
    matchLabels:
      app: mongo-pod # has to match .spec.template.metadata.labels
  serviceName: mongo-clusterip-service
  replicas: 1 # 3
  template:
    metadata:
      labels:
        app: mongo-pod # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo-container
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-pv-cont
              mountPath: /data/db # /mnt/data
      volumes:
        - name: mongo-pv-cont
          persistentVolumeClaim:
            claimName: mongo-pvc
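(Side note on the volumeClaimTemplates attempt above: as I understand it, a template makes the StatefulSet create its own claim named <template>-<statefulset>-<ordinal>, so in my case the generated claim would be mongo-vctemplate-mongo-statefulset-0, not the standalone mongo-pvc:)

kubectl get pvc   # the template-generated claim shows up as mongo-vctemplate-mongo-statefulset-0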
CodePudding user response:
It is a typo in kubernets.io/hostname. It should be kubernetes.io/hostname in the PV definition.
Similar to this one: Error while using local persistent volumes in statefulset pod
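For reference, the corrected stanza in the PV would be:

nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - docker-desktop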