How to mount a single volume to multiple /var/lib/docker simultaneously?


Is it possible to share a single Docker volume among multiple Docker containers, mounted at the /var/lib/docker destination in each?

A minimal reproducible example would be like below:

$ docker volume create --name lib
$ docker run --privileged -v lib:/var/lib/docker --name c1 -d docker:dind
$ docker run --privileged -v lib:/var/lib/docker --name c2 -d docker:dind

I want to work with Docker inside the c1 and c2 containers simultaneously. But if you wait a moment, you'll see that this doesn't work: the second container (c2) stops. I've checked the error logs:

$ docker logs -f c2
...
failed to start containerd: timeout waiting for containerd to start

Also, I can't create a separate volume per container, because storage on the host is limited and the images are large.

UPDATE: Maybe I'm facing an XY problem! What I actually want is to share my images. All of the Docker images on my host machine should be available inside every DinD container, AND the containers should be able to build a new Docker image that immediately becomes accessible from the other containers.

CodePudding user response:

To the question in the title: yes, multiple containers can mount the same volume. However, each of your containers runs its own Docker engine, and the second engine fails to start because another running engine already owns the /var/lib/docker directory. This isn't a volume-mounting issue so much as a Docker engine design constraint.

Given your requirements (an image store fed by the host engine and shared with various DinD instances, without exposing the host's own engine via docker.sock or mTLS), I don't believe there's a good answer. You're left with two options:

  1. Run your own local registry server. This keeps the layers from being sent outside your network, and the registry could even run on the same host. However, the layers will be copied into each engine's own storage, and you'll need to manage GC policies on that registry. This gives you the desired isolation without the desired deduplication of image layers (see the first sketch after this list).

  2. Share the docker.sock between the host and trusted containers. The containers then have direct access to the host engine, which is effectively root on the host (unless you have set up the engine as rootless), so only do this in environments you trust. This gives you the layer deduplication, but none of the isolation (see the second sketch below).
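A rough sketch of option 1, assuming a user-defined network and hypothetical names (dind-net, registry, myimage). The docker:dind image forwards extra arguments to dockerd, which is how each engine is told to accept the plain-HTTP registry here:

$ docker network create dind-net
$ docker run -d --net dind-net -p 5000:5000 --name registry registry:2
$ docker run --privileged --net dind-net --name c1 -d docker:dind --insecure-registry=registry:5000
$ docker run --privileged --net dind-net --name c2 -d docker:dind --insecure-registry=registry:5000
$ # Push from the host through the published port...
$ docker tag myimage:latest localhost:5000/myimage:latest
$ docker push localhost:5000/myimage:latest
$ # ...and pull inside either DinD engine; note each engine still
$ # stores its own copy of the layers.
$ docker exec c1 docker pull registry:5000/myimage:latest
$ docker exec c2 docker pull registry:5000/myimage:latest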
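And a rough sketch of option 2. The containers here run only the docker CLI against the host's engine (no --privileged, no nested engine), so an image built from one is instantly visible to the other; the tag shared-image and the build-context path are hypothetical:

$ docker run -v /var/run/docker.sock:/var/run/docker.sock --name c1 -d docker sleep 1d
$ docker run -v /var/run/docker.sock:/var/run/docker.sock --name c2 -d docker sleep 1d
$ # Both CLIs talk to the same (host) engine, so no layer is duplicated.
$ # Assumes a Dockerfile exists at the given path inside c1.
$ docker exec c1 docker build -t shared-image /some/build/context
$ docker exec c2 docker run --rm shared-image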

The reason it's difficult is that Docker is designed to manage its own copy of /var/lib/docker: all state is tracked in memory and periodically flushed to disk as JSON metadata files to survive restarts. Mutexes exist only within the one process, so the engine never has to worry about multiple writers modifying layers, or a reader running while a writer is still creating a layer.

CodePudding user response:

Take a look at this document: https://docs.docker.com/storage/bind-mounts/
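For reference, a bind mount maps a host directory straight into a container (the host path below is illustrative), though it hits the same single-engine limitation on /var/lib/docker described above:

$ docker run -v /home/user/shared:/shared --name c1 -d alpine sleep 1d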
