Can't mount local host path in local kind cluster

Time: 10-29

Below is my Kubernetes manifest. I need to do two things:

  1. mount a folder containing a file
  2. mount a file containing a startup script

I have both files in the /tmp/zoo folder on my local machine, but the files from my zoo folder never appear in /bitnami/zookeeper inside the pod.

kubernetes.yaml

apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: zookeeper
  spec:
    containers:
    - image: bitnami/zookeeper:3
      name: zookeeper
      ports:
      - containerPort: 2181
      env:
      - name: ALLOW_ANONYMOUS_LOGIN
        value: "yes"
      resources: {}
      volumeMounts:
      - mountPath: /bitnami/zookeeper
        name: bitnamidockerzookeeper-zookeeper-data
    restartPolicy: Always
    volumes:
    - name: bitnamidockerzookeeper-zookeeper-data
      persistentVolumeClaim:
        claimName: bitnamidockerzookeeper-zookeeper-data
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: bitnamidockerzookeeper-zookeeper-data
      type: local
    name: bitnamidockerzookeeper-zookeeper-data
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: foo
  spec:
    storageClassName: manual
    claimRef:
      name: bitnamidockerzookeeper-zookeeper-data
    capacity:
      storage: 100Mi
    accessModes:
      - ReadWriteMany
    hostPath:
      path: "/tmp/zoo"
  status: {}
kind: List
metadata: {}

CodePudding user response:

A Service cannot be assigned a volume. On line 4 of your YAML you specify `kind: Service` where it should be `kind: Pod`, and every Kubernetes resource must have a name, which you can add under `metadata`. That should fix the immediate problem:

apiVersion: v1
items:
- apiVersion: v1
  kind: Pod  #POD
  metadata:
    name: my-pod  #A RESOURCE NEEDS A NAME
    creationTimestamp: null
    labels:
      io.kompose.service: zookeeper
  spec:
    containers:
    - image: bitnami/zookeeper:3
      name: zookeeper
      ports:
      - containerPort: 2181
      env:
      - name: ALLOW_ANONYMOUS_LOGIN
        value: "yes"
      resources: {}
      volumeMounts:
      - mountPath: /bitnami/zookeeper
        name: bitnamidockerzookeeper-zookeeper-data
    restartPolicy: Always
    volumes:
    - name: bitnamidockerzookeeper-zookeeper-data
      persistentVolumeClaim:
        claimName: bitnamidockerzookeeper-zookeeper-data
  status: {}

Now, I don't know what you're using, but keep in mind that a `hostPath` volume refers to the filesystem of the *node*, so it is really only practical on a local cluster like Minikube or kind; in production things change drastically. If everything is local, the directory "/tmp/zoo" must exist inside the node, NOT on your local PC. For example, if you use Minikube, run `minikube ssh` to enter the node and copy the files into "/tmp/zoo" there. An excellent guide to this is in the official Kubernetes documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
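Since the question is about kind rather than Minikube, a sketch of one option: mount the host directory into the kind node at cluster-creation time with `extraMounts` in the kind cluster config, so the node's `/tmp/zoo` mirrors the one on your machine and the `hostPath` volume can see it (this assumes a default single-node cluster; the file name `kind-config.yaml` is just an example):

```yaml
# kind-config.yaml -- create the cluster with:
#   kind create cluster --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /tmp/zoo        # directory on your local machine
    containerPath: /tmp/zoo   # where it appears inside the kind node
```

For an already-running cluster you can instead copy the files into the node container, e.g. `docker cp /tmp/zoo kind-control-plane:/tmp/` (where `kind-control-plane` is the default node container name).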

CodePudding user response:

A little confusing. If you want to use a file path on the node as a volume for a pod, you should do it like this:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory

but you need to make sure your pod will be scheduled onto the same node that has that file path.
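For a single-node cluster this is automatic; on a multi-node cluster one way to pin the pod is `nodeName` in the pod spec (a sketch; the node name here is the default for a single-node kind cluster and is an assumption, check yours with `kubectl get nodes`):

```yaml
spec:
  # pin the pod to the node that actually has /data on its filesystem
  nodeName: kind-control-plane
```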
