Setup: a Linux VM where a Pod (containing 3 containers) is started. Only 1 of the containers needs the NFS mount to the remote NFS server. This "app" container is based on Alpine Linux.
The remote NFS server is up and running. If I create a separate yaml file for a persistent volume with that server info, it's up and available.
In my pod yaml file I define a PersistentVolume (with that remote NFS server info) and a PersistentVolumeClaim, and associate my "app" container's volume with that claim.
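Schematically, the association part looks roughly like this (the claim name, volume name, image and mount path here are illustrative placeholders, not my real file):

  containers:
    - name: app
      image: my-alpine-app:latest   # placeholder for the Alpine-based app image
      volumeMounts:
        - name: nfs-vol
          mountPath: /mnt/masking   # placeholder mount path
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-pvc          # claim bound to the NFS-backed PV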
Everything works like a charm if I install the NFS client package on the hosting Linux VM:

sudo apt install nfs-common
(That's why I don't share my full Kubernetes yaml file - the problem doesn't seem to be there.)
But that's a development environment. I'm not sure how/where these containers would be used in production - for example, they could be used in AWS EKS.
I hoped to install something like

apk add --no-cache nfs-utils

in the "app" container's Dockerfile, i.e. at the container level, not at the pod level - could it work?
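Something along these lines (the base image tag and the app binary are placeholders):

FROM alpine:3.18
# install the NFS client tooling inside the container itself
RUN apk add --no-cache nfs-utils
COPY app /usr/local/bin/app          # hypothetical app binary
ENTRYPOINT ["/usr/local/bin/app"]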
So far I'm getting this pod initialization error:
Events:
Type     Reason            Age               From               Message
----     ------            ----              ----               -------
Warning  FailedScheduling  35s               default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Warning  FailedScheduling  22s               default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal   Scheduled         20s               default-scheduler  Successfully assigned default/delphix-masking-0 to masking-kubernetes
Warning  FailedMount       4s (x6 over 20s)  kubelet            MountVolume.SetUp failed for volume "nfs-pv" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o hard,nfsvers=4.1 maxTestNfs1.dlpxdc.co:/var/tmp/masking-mount /var/snap/microk8s/common/var/lib/kubelet/pods/2e6b7aeb-5d0d-4002-abba-88de032c12dc/volumes/kubernetes.io~nfs/nfs-pv
Output: mount: /var/snap/microk8s/common/var/lib/kubelet/pods/2e6b7aeb-5d0d-4002-abba-88de032c12dc/volumes/kubernetes.io~nfs/nfs-pv: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
And the process is stuck at that step forever. It looks like this happens before the containers are even initialized, so I wonder whether the approach of enabling the NFS client at the container level is valid at all. Thanks in advance for any insights!
CodePudding user response:
TL;DR - no, it is not best practice (and not the right way) to mount NFS volumes from inside the container. There is no use case for it, and it is a huge security risk as well (allowing applications direct access to cluster-wide storage without any security control).
It appears your objective is to provide storage to your container that is backed by NFS, right? Then doing it by creating a PersistentVolume and then using a PersistentVolumeClaim to attach it to your Pod is the correct approach - which you are already doing.
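For reference, a minimal sketch of that pattern for an NFS share (the server, export path and mount options are taken from your error output; the capacity, access mode and resource names are illustrative placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi                # placeholder size
  accessModes:
    - ReadWriteMany             # assumption; choose what fits your app
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    server: maxTestNfs1.dlpxdc.co
    path: /var/tmp/masking-mount
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # empty string: bind to the pre-created PV, skip dynamic provisioning
  resources:
    requests:
      storage: 1Gi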
No, you don't have to worry about how the storage will be provided to the container. Due to the way k8s runs applications, certain conditions MUST be met before a Pod can be scheduled on a Node. One of those conditions is that the volumes the pod mounts MUST be available. If a volume doesn't exist and you mount it in a Pod, that Pod will never get scheduled and will likely be stuck in the Pending state. That's exactly the error you're seeing.
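You can confirm this with standard kubectl commands (the pod name comes from your events output):

kubectl get pv,pvc                       # the PVC must show STATUS 'Bound'
kubectl describe pod delphix-masking-0   # the Events section explains scheduling/mount failures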
You don't have to worry about the NFS connectivity, because in this case the Kubernetes PersistentVolume resource you created will technically act like an NFS client for your NFS server. This provides a uniform storage interface (applications don't have to care where the volume is coming from; the application code will be independent of storage type) as well as better security and permission control.
Another note: when dealing with Kubernetes, it is recommended to consider the Pod, not the container, as your 'smallest unit' of infrastructure. So the recommended way is to use only 1 application container per pod, for simplicity of design and to achieve a microservice architecture in the true sense.
CodePudding user response:
I hoped to install something like apk add --no-cache nfs-utils in the "app" container's Dockerfile. I.e. at the container level, not at the pod level - could it work?
Yes, this could work. This is normally what you would do if you have no control over the node (e.g. you can't be sure the host is ready for NFS calls). You need to ensure your pod can reach the NFS server and that all required ports are open in between. You also need to ensure the required NFS programs (e.g. rpcbind) are started before your own program in the container.
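As a minimal sketch of that last point, an entrypoint script along these lines could start the NFS pieces before the app (assuming nfs-utils is installed in the image, the server and mount options from your question, and that the container is granted the privileges mounting requires, e.g. CAP_SYS_ADMIN or privileged mode; the mount path and app command are placeholders):

#!/bin/sh
# entrypoint.sh - prepare NFS before handing off to the app (illustrative)
set -e
rpcbind                         # start rpcbind first, as noted above
mkdir -p /mnt/masking
mount -t nfs -o hard,nfsvers=4.1 \
  maxTestNfs1.dlpxdc.co:/var/tmp/masking-mount /mnt/masking
exec "$@"                       # replace the shell with the actual app process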
...For example they would be used in AWS EKS.
The EKS-optimized AMIs come with NFS support, so you can leverage K8s PV/PVC support using that image for your worker nodes; there's no need to initialize NFS client support in your container.