Why local persistent volumes not visible in EKS?

To test whether I can get self-written software deployed on Amazon using Docker images, I have a test EKS cluster. I have written a small test script that reads and writes a file, to check that I understand how to deploy. I have successfully deployed it in minikube, using three replicas. The replicas all use a shared directory on my local file system, and in minikube that directory is mounted into the pods with a volume.

The next step was to deploy that in the EKS cluster. However, I cannot get it working in EKS: the pods don't see the contents of the mounted directory.

This does not completely surprise me, since in minikube I first had to create a mount to a local directory on the server, and I have not done anything similar on the EKS side. My question is what I should do to make this work (if it is possible at all).

I use this YAML file to create a pod in EKS:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-volume"
spec:
  storageClassName: local-storage
  capacity:
    storage: "1Gi"
  accessModes:
   - "ReadWriteOnce"
  hostPath:
    path: /data/k8s
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: local-storage
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim

So what I expect is that I have a local directory, /data/k8s, that is visible in the pods as the path /config. When I apply this YAML, I get a pod that gives an error message making clear that the data in the /data/k8s directory is not visible to the pod.

kubectl gives me this info after creating the volume and the claim:

[rdgon@NL013-PPDAPP015 probeer]$ kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pv-volume                                  1Gi        RWO            Retain           Available                                              15s
persistentvolume/pvc-156edfef-d272-4df6-ae16-09b12e1c2f03   1Gi        RWO            Delete           Bound       default/pv-claim   gp2                     9s

NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv-claim   Bound    pvc-156edfef-d272-4df6-ae16-09b12e1c2f03   1Gi        RWO            gp2            15s

This seems to indicate that everything is OK. But apparently the filesystem of the master node, on which I run the YAML file to create the volume, is not the location the pods look at when they access the /config dir.

CodePudding user response:

On EKS, there's no storage class named 'local-storage' by default.

There is only a 'gp2' storage class, which is also used when you don't specify a storageClassName.
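
You can check which storage classes your cluster offers (and which one is the default) with kubectl; on a stock EKS cluster the output typically looks something like this (exact columns and age will vary):

kubectl get storageclass

NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   5d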

The 'gp2' storage class creates a dedicated EBS volume and attaches it to your Kubernetes node when required, so it doesn't use a local folder. You also don't need to create the PV manually, just the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: gp2
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim

If you want a folder on the node itself, you can use a 'hostPath' volume, and you don't need a PV or PVC for that:

apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s

This is usually a bad idea, though, since the data lives on one particular node and will be lost if your pod is rescheduled onto another node (for example when a new node starts up).
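
If you decide to use hostPath anyway, you would typically also have to pin the pod to the one node that holds the data, for example with a nodeSelector on the well-known kubernetes.io/hostname label. A minimal sketch (the node hostname below is just a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  # Placeholder node name: replace with the node that actually has /data/k8s
  nodeSelector:
    kubernetes.io/hostname: ip-10-0-0-1.eu-west-1.compute.internal
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
      type: DirectoryOrCreate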

If it's for configuration only, you can also use a ConfigMap and put the files directly in your Kubernetes manifest files:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ruud-config
data:
  ruud.properties: |
    my ruud.properties file content...
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    configMap:
      name: ruud-config

CodePudding user response:

Please check whether the PV got created and is bound to the PVC by running the commands below:

kubectl get pv

kubectl get pvc

This will show whether the objects were created and bound properly.
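
If the claim is not bound to the PV you expected (in the output above, pv-volume stays Available while the claim bound to a dynamically provisioned gp2 volume), a kubectl describe on both objects usually explains why, for example:

kubectl describe pv pv-volume
kubectl describe pvc pv-claim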

CodePudding user response:

The local path you refer to is not valid. Try:

apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: /config
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
      type: DirectoryOrCreate  # <-- You need this since the directory may not exist on the node.