I'm trying to create persistent storage to share with all of my applications in the K8s cluster.
storageClass.yaml file:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
persistentVolume.yaml file:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /base-xapp/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - juniper-ric
persistentVolumeClaim.yaml file:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      name: my
And finally, this is the deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}-deployment
  labels:
    app: {{ .Values.appName }}
    xappRelease: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
        xappRelease: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "{{ .Values.image }}:{{ .Values.tag }}"
          imagePullPolicy: IfNotPresent
          ports:
            - name: rmr
              containerPort: {{ .Values.rmrPort }}
              protocol: TCP
            - name: rtg
              containerPort: {{ .Values.rtgPort }}
              protocol: TCP
          volumeMounts:
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.routingTableFile }}
              subPath: {{ .Values.routingTableFile }}
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.vlevelFile }}
              subPath: {{ .Values.vlevelFile }}
            - name: {{ .Values.appName }}-persistent-storage
              mountPath: {{ .Values.appName }}/data
          envFrom:
            - configMapRef:
                name: {{ .Values.appName }}-configmap
      volumes:
        - name: app-cfg
          configMap:
            name: {{ .Values.appName }}-configmap
            items:
              - key: {{ .Values.routingTableFile }}
                path: {{ .Values.routingTableFile }}
              - key: {{ .Values.vlevelFile }}
                path: {{ .Values.vlevelFile }}
        - name: {{ .Values.appName }}-persistent-storage
          persistentVolumeClaim:
            claimName: my-claim
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.appName }}-rmr-service
  labels:
    xappRelease: {{ .Release.Name }}
spec:
  selector:
    app: {{ .Values.appName }}
  type: NodePort
  ports:
    - name: rmr
      protocol: TCP
      port: {{ .Values.rmrPort }}
      targetPort: {{ .Values.rmrPort }}
    - name: rtg
      protocol: TCP
      port: {{ .Values.rtgPort }}
      targetPort: {{ .Values.rtgPort }}
When I deploy the container, its status stays Pending:
base-xapp-deployment-6799d6cbf6-lgjks 0/1 Pending 0 3m25s
This is the output of kubectl describe:
Name: base-xapp-deployment-6799d6cbf6-lgjks
Namespace: near-rt-ric
Priority: 0
Node: <none>
Labels: app=base-xapp
pod-template-hash=6799d6cbf6
xappRelease=base-xapp
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/base-xapp-deployment-6799d6cbf6
Containers:
base-xapp:
Image: base-xapp:0.1.0
Ports: 4565/TCP, 4561/TCP
Host Ports: 0/TCP, 0/TCP
Environment Variables from:
base-xapp-configmap ConfigMap Optional: false
Environment: <none>
Mounts:
/rmr_route from app-cfg (rw,path="rmr_route")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxmwm (ro)
/vlevel from app-cfg (rw,path="vlevel")
base-xapp/data from base-xapp-persistent-storage (rw)
Conditions:
Type Status
PodScheduled False
Volumes:
app-cfg:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: base-xapp-configmap
Optional: false
base-xapp-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-claim
ReadOnly: false
kube-api-access-rxmwm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10s (x6 over 4m22s) default-scheduler 0/1 nodes are available: 1 persistentvolumeclaim "my-claim" not found.
This is the output of kubectl for the relevant resources:
get pv:
dan@linux$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-local-pv 50Mi RWO Retain Available my-local-storage 6m2s
get pvc:
dan@linux$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-claim Pending my-local-storage 36m
CodePudding user response:
You're missing spec.volumeName in your PVC manifest.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  volumeName: my-local-pv   # this line was missing
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      name: my
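Note that a PVC's spec is mostly immutable after creation, so you'll need to delete the pending claim and re-create it with the corrected manifest. A quick check afterwards (assuming the file name from your question):

kubectl delete pvc my-claim
kubectl apply -f persistentVolumeClaim.yaml
kubectl get pv,pvc   # the PV should now show STATUS Bound and CLAIM my-claim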
CodePudding user response:
I can see your deployment has the namespace near-rt-ric, but your PVC doesn't specify a namespace, so it was probably placed in the default namespace.
Use this command to check: kubectl get pvc -A
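If the claim did land in default, a minimal fix (a sketch, assuming you apply the manifest with kubectl) is to set the namespace explicitly in the PVC metadata so it lives where the pod does:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
  namespace: near-rt-ric   # must match the pod's namespace; a pod can only reference PVCs in its own namespace
spec:
  ...                      # rest of the spec unchanged

A namespace can't be changed on an existing object, so delete the old claim (kubectl delete pvc my-claim) and re-apply the corrected file.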