Kubernetes YAML file error validating data


Hi there, I am trying to do a Kubernetes lab but I am stuck on a step where I need to deploy a YAML file.

"5. Create a job that creates a pod, and issues the etcdctl snapshot save command to back up the cluster:"

I think my YAML file has some indentation errors (I am new to YAML files). I have checked the documentation but I cannot find the mistake.

This is the content of the file:

apiVersion: batch/v1
kind: Job
metadata:
  name: backup
  namespace: management
spec:
  template:
    spec:
      containers:
      # Use etcdctl snapshot save to create a snapshot in the /snapshot directory
      - command:
        - /bin/sh
        args:
        - -ec
        - etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key snapshot save /snapshots/backup.db
        # The same image used by the etcd pod
        image: k8s.gcr.io/etcd-amd64:3.1.12
        name: etcdctl
        env:
        # Set the etcdctl API version to 3 (to match the version of etcd installed by kubeadm)
        - name: ETCDCTL_API
          value: '3'
        volumeMounts:
        - mountPath: /etc/kubernetes/pki/etcd
          name: etcd-certs
          readOnly: true
        - mountPath: /snapshots
          name: snapshots
      # Use the host network where the etcd port is accessible (etcd pod uses hostnetwork)
      # This allows the etcdctl to connect to etcd that is listening on the host network
      hostNetwork: true
      affinity:
        # Use node affinity to schedule the pod on the master (where the etcd pod is)
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      restartPolicy: OnFailure
      tolerations:
      # tolerate the master's NoSchedule taint to allow scheduling on the master
      - effect: NoSchedule
        operator: Exists
      volumes:
      # Volume storing the etcd PKI keys and certificates
      - hostPath:
        path: /etc/kubernetes/pki/etcd
        type: DirectoryOrCreate
        name: etcd-certs
      # A volume to store the backup snapshot
      - hostPath:
        path: /snapshots
        type: DirectoryOrCreate
        name: snapshots

This is the error I am getting:

johsttin@umasternode:~$ kubectl create -f snapshot.yaml
error: error validating "snapshot.yaml": error validating data: [ValidationError(Job.spec.template.spec.volumes[0]): unknown field "path" in io.k8s.api.core.v1.Volume, ValidationError(Job.spec.template.spec.volumes[0]): unknown field "type" in io.k8s.api.core.v1.Volume, ValidationError(Job.spec.template.spec.volumes[1]): unknown field "path" in io.k8s.api.core.v1.Volume, ValidationError(Job.spec.template.spec.volumes[1]): unknown field "type" in io.k8s.api.core.v1.Volume]; if you choose to ignore these errors, turn validation off with --validate=false

Can someone help me with this? Thanks in advance.

CodePudding user response:

The `hostPath` indentation is incorrect in the volumes section: `path` and `type` must be nested one level under `hostPath`, like this:

...
volumes:
# Volume storing the etcd PKI keys and certificates
- name: etcd-certs
  hostPath:
    path: /etc/kubernetes/pki/etcd
    type: DirectoryOrCreate
    
# A volume to store the backup snapshot
- name: snapshots
  hostPath:
    path: /snapshots
    type: DirectoryOrCreate
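
Once the indentation is fixed, you can check the manifest against the API schema without actually creating the Job (a quick sanity check, assuming a reasonably recent kubectl; older versions take a bare `--dry-run` flag instead):

    kubectl create -f snapshot.yaml --dry-run=client
    # on kubectl older than 1.18 the flag is boolean:
    kubectl create -f snapshot.yaml --dry-run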

CodePudding user response:

Your volume configuration:

      - hostPath:
        path: /snapshots
        type: DirectoryOrCreate
        name: snapshots

Example from Kubernetes Volume documentation:

    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        # this field is optional
        type: Directory

Compared to your configuration, the documentation example has an extra level of indentation under `hostPath`: `path` and `type` are nested inside it, not alongside it. In your file they sit at the Volume level, which is why the API server reports them as unknown fields on `io.k8s.api.core.v1.Volume`.
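
To see why the placement matters, here is a minimal side-by-side of the two indentations, using only the fields from the manifest above:

    # Broken: path, type and name align with hostPath, so YAML parses
    # them as fields of the Volume itself (hostPath ends up null).
    # name is a valid Volume field, but path and type are not,
    # hence the "unknown field" errors
    - hostPath:
      path: /snapshots
      type: DirectoryOrCreate
      name: snapshots

    # Fixed: path and type are indented one level further, so they
    # become fields of the hostPath object
    - name: snapshots
      hostPath:
        path: /snapshots
        type: DirectoryOrCreate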