I am trying to run a go-ethereum node on AWS EKS. For that I have used a StatefulSet defined in a statefulset.yaml file.
Running kubectl apply -f statefulset.yaml creates 2 pods, out of which 1 is running and 1 is in the CrashLoopBackOff state.
After checking the logs of the second pod, the error I am getting is:
Fatal: Failed to create the protocol stack: datadir already used by another process
The problem is mainly that the pods use the same directory on the persistent volume to write their geth data (i.e. all pods write to '/data'). If I use a subPathExpr and mount each pod's data in a sub-directory named after the pod (for example '/data/geth-0'), it works fine. But my requirement is that all three pods' data is written directly at the '/data' directory.
CodePudding user response:
The same data directory cannot be reused by multiple instances of go-ethereum, so you have the following options:
- Use the same persistent volume for all pods and give each pod its own subdirectory on it (a sketch of this follows the list).
- Use a separate persistent volume for each pod; then each pod can use the same /data path.
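A minimal sketch of the first option, assuming the same subPathExpr approach the question mentions; the volume name geth-data, the image, and the POD_NAME environment variable are illustrative and not taken from the original statefulset.yaml:

# Illustrative pod-template fragment (not the asker's actual manifest).
# The downward API exposes the pod name as POD_NAME, and subPathExpr uses it,
# so geth-0 writes under <volume root>/geth-0, geth-1 under <volume root>/geth-1, etc.
containers:
  - name: geth
    image: ethereum/client-go        # image is an assumption
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - name: geth-data              # volume name is illustrative
        mountPath: /data
        subPathExpr: $(POD_NAME)

Inside each container the datadir is still /data, but on the shared volume the data lands in a per-pod subdirectory, so this only fits if the requirement that data sit at '/data' on the volume itself can be relaxed.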
CodePudding user response:
You need to dynamically provision an EFS access point for each of your stateful pods. First, create an EFS StorageClass that supports dynamic provisioning:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-dyn-sc
provisioner: efs.csi.aws.com
reclaimPolicy: Retain
parameters:
  provisioningMode: efs-ap
  directoryPerms: "700"
  fileSystemId: <get the ID from the EFS console>
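This assumes the AWS EFS CSI driver is already installed in the cluster (efs.csi.aws.com is its provisioner name); with provisioningMode: efs-ap the driver creates a separate EFS access point for each claim it provisions.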
Then update your StatefulSet spec to use a volume claim template:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: geth
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: geth
          ...
          volumeMounts:
            - name: geth
              mountPath: /data
          ...
  volumeClaimTemplates:
    - metadata:
        name: geth
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: efs-dyn-sc
        resources:
          requests:
            storage: 5Gi
All pods now write to their own /data.
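Each replica then gets its own claim (geth-geth-0, geth-geth-1, and so on), each backed by its own EFS access point, so the datadir conflict disappears; you can verify the claims with kubectl get pvc.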