I am trying to make a local volume to save logs from a pod to its node in an isolated environment. To do that, I want to create a PV and a PVC on the specific node that has the tier=production tag. I have labeled the node with the tag:
$ k get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
xxx Ready Worker 12d v1.25.2 <lots of labels>,tier=production
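For reference, the label itself was applied with a command along these lines (same k alias, node name as shown above):
$ k label node xxx tier=production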
Following the Local Volume and the Storage Class docs, I have created the following YAML to deploy the volume, the claim, and my pod:
---
# A storage class to define local storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# Volume using a local filesystem
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/nginx/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: tier
              operator: In
              values:
                - production
---
# Request a claim on the file system volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  selector:
    matchExpressions:
      - key: tier
        operator: In
        values:
          - production
---
# Make a pod that uses the volume
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: tier
                operator: In
                values:
                  - production
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: volume-claim
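Everything above is deployed with a plain apply; the file name here is just a placeholder:
$ k apply -f local-volume.yaml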
My volume is available, but the PVC is waiting for first consumer to be created before binding, which is expected since the StorageClass is set to WaitForFirstConsumer. However, my pod is never scheduled; it gives the following warning:
Warning FailedScheduling 8m25s default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
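That event is from describing the pod; the claim can be checked the same way (resource names as in the manifests above):
$ k describe pod nginx
$ k get pvc volume-claim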
If I remove the volume information from the YAML above, it deploys just fine, so I don't know if it is a problem with the pod or something else. How do I get the pod to use the volumes?
CodePudding user response:
Your PVC has a selector on tier=production, but the PV itself has no labels, so no PV can satisfy the claim. Try:
...
# Volume using a local filesystem
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume-pv
  labels: # <-- add for your PVC selector to match
    tier: production
...
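With the label in place, re-apply the PV and the pod should schedule; once it does, the claim should bind. Something like the following should then show both as Bound (names from the manifests above):
$ k get pv volume-pv
$ k get pvc volume-claim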