Kubernetes Minio Volume Node Affinity Conflict

Time:10-16

I have setup a testing k3d cluster with 4 agents and a server.

I have a storage class defined thus:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate

with a pv defined thus:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: basic-minio-storage
  labels:
    storage-type: object-store-path
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/basic_minio
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k3d-test-agent-0

the pvc that I have defined is like:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio-pv-claim
  labels:
    app: basic-minio
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 500Gi
  selector:
    matchLabels:
      storage-type: object-store-path


my deployment is like:


# Create a simple single node Minio linked to root drive
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: basic-minio
  namespace: minio
spec:
  selector:
    matchLabels:
      app: basic-minio
  serviceName: "basic-minio"
  template:
    metadata:
      labels:
        app: basic-minio
    spec:
      containers:
      - name: basic-minio
        image: minio/minio:RELEASE.2021-10-10T16-53-30Z
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /data
        env:
        - name: MINIO_ROOT_USER
          valueFrom:
            secretKeyRef:
              name: minio-secret
              key: minio-root-user
        - name: MINIO_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: minio-secret
              key: minio-root-password 
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: "/data"
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: minio-pv-claim
        


In my Kubernetes dashboard, I can see that the PV is provisioned and ready, and that the PVC has bound to it.

But my pod shows the error: 0/5 nodes are available: 5 node(s) had volume node affinity conflict.

What is causing this issue and how can I debug it?

CodePudding user response:

Your (local) volume is pinned to the worker node k3d-test-agent-0, but the scheduler is not placing your pod on that node. Pinning a workload to a single host is not a good approach in general, but if you must run this way, you can direct the pod onto that host with a nodeSelector in the pod template:

...
spec:
  nodeSelector:
    kubernetes.io/hostname: k3d-test-agent-0
  containers:
  - name: basic-minio
...
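A cleaner long-term fix for local volumes is to delay binding until a pod is scheduled, so the scheduler takes the PV's node affinity into account when placing the pod. A minimal sketch of the same StorageClass with that mode (note that volumeBindingMode is immutable, so the class must be deleted and recreated):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# Delay PVC binding until a consuming pod is scheduled, so the
# scheduler can pick a node that satisfies the PV's nodeAffinity.
volumeBindingMode: WaitForFirstConsumer
```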
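Since the error says all five nodes had a conflict, including the node named in the PV, it is also worth verifying that the hostname value in the PV's nodeAffinity exactly matches a real node label. A few kubectl commands (using the PV, pod, and namespace names from the question) can confirm this:

```shell
# List node names alongside their kubernetes.io/hostname labels;
# the value in the PV's nodeAffinity must match one of them exactly.
kubectl get nodes -L kubernetes.io/hostname

# Inspect the PV's node affinity and binding status.
kubectl describe pv basic-minio-storage

# The scheduler's reason for the conflict appears in the pod events.
kubectl describe pod basic-minio-0 -n minio
```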