I am trying to use an NFS share as a single storage endpoint for a Docker Swarm stack. The NFS share itself works, as I can create files on it, but when I deploy the stack I get a bind source path error. The share is mounted at /container on both machines, so each machine can find it at the same location. Here is the volumes section of my docker-compose file:
volumes:
  - /container/tdarr-server/server:/app/server
  - /container/tdarr-server/configs:/app/configs
  - /container/tdarr-server/logs:/app/logs
  - /container/plex/media:/media
  - /container/tdarr-server/transcode:/transcode
CodePudding user response:
There are two ways to do this:
On each server, mount an NFS share. Assuming you have an NFS server exporting a volume "docker_volumes", you could mount it at "/mnt/volumes".
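For example, the mount on each node might look like this (a sketch only: the server address 10.40.0.199 is taken from the second example further down, and the export path /docker_volumes is an assumption):

# One-off mount on each swarm node:
mount -t nfs 10.40.0.199:/docker_volumes /mnt/volumes

# Or persist it across reboots with an /etc/fstab entry:
# 10.40.0.199:/docker_volumes  /mnt/volumes  nfs  defaults,nolock,soft  0  0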
Then your stack file could look like this:
version: "3.9"
volumes:
prometheus:
driver: local
driver_opts:
o: bind
type: none
device: /mnt/volumes/prometheus-data
services:
prometheus:
image: prom/prometheus:latest
volumes:
- prometheus:/data
NB. Docker will NOT create missing volume directories for you. Each missing directory (e.g. ./prometheus-data) needs to be created manually on the NFS share before Docker will start the service.
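A minimal sketch of those manual steps, assuming the share is mounted at /mnt/volumes on every node and the stack file is saved as stack.yml (the stack name "monitoring" is arbitrary):

# Create the directory that the volume's bind device points at.
# Because it lives on the NFS share, creating it once is enough,
# but it must exist before the service starts on any node.
mkdir -p /mnt/volumes/prometheus-data

# Then deploy the stack from a manager node:
docker stack deploy -c stack.yml monitoring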
Alternatively, rather than pre-mounting the NFS volume at a fixed location, you can give Docker the NFS connection details so it can mount the share on the fly:
volumes:
  data:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/docker/example"
Again, if you reference a path inside the NFS share as part of the device entry, Docker will not create it if it does not exist. The admin must pre-create any subfolders referenced before services or containers will be able to use the volume definition.
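Before deploying, it can help to confirm from a swarm node that the export is reachable and the referenced subfolder exists. A rough check, assuming the same server address and export as above and a temporary mount point of /mnt/test:

# List the exports offered by the NFS server:
showmount -e 10.40.0.199

# Test-mount the export with the same options Docker will use,
# then check that the referenced subfolder is present:
mkdir -p /mnt/test
mount -t nfs -o nolock,soft,rw 10.40.0.199:/docker/example /mnt/test
ls /mnt/test
umount /mnt/test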