Is it possible to share a volume with 2 Docker containers?

Time:12-13

I can't run my 2 containers at the same time, whereas I can run each one of them separately.

I have this 1st container/image built from this Dockerfile:

FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test1.py /app/container1/test1.py
WORKDIR /app/
CMD python3 container1/test1.py

I have this 2nd container/image built from this Dockerfile:

FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test2.py /app/container2/test2.py
WORKDIR /app/
CMD python3 container2/test2.py

I have no issues creating the images:

docker image build ./authentif -t test1:latest
docker image build ./authoriz -t test2:latest

When I run the 1st container with this command:

docker container run -it --network my_network --name test1_container\
 --mount type=volume,src=my_volume,dst=/app -e LOG=1\
 --rm test1:latest

it works.

And if I check my volume:

sudo ls /var/lib/docker/volumes/my_volume/_data

I can see data in my volume.

However, when I run the 2nd container:

docker container run -it --network my_network --name test2_container\
 --mount type=volume,src=my_volume,dst=/app -e LOG=1\
 --rm test2:latest

I have this error:

python3: can't open file '/app/container2/test2.py': [Errno 2] No such file or directory

If I delete everything and start over, running the 2nd container first works, but then when I run the 1st container, I get the error again.

Why is that?

In my container1, let's assume that my Python script writes data to a file, for example:

import os
print("test111111111")

if os.environ.get('LOG') == "1":
    print("1111111")
    with open('record.log', 'a') as file:
        file.write("file11111")
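Note that `open('record.log', 'a')` uses a relative path, so the file is created in the container's current working directory — `/app`, per the `WORKDIR` in the Dockerfile — which is exactly where the volume is mounted. A minimal sketch of that path resolution, using a temporary directory to stand in for `/app`:

```python
import os
import tempfile

# A relative path in open() resolves against the current working directory,
# just as record.log lands in /app inside the container (WORKDIR /app).
workdir = tempfile.mkdtemp()
os.chdir(workdir)

with open('record.log', 'a') as file:
    file.write("file11111")

# The file ends up inside the working directory, i.e. the mounted volume.
print(os.path.exists(os.path.join(workdir, 'record.log')))  # True
```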

CodePudding user response:

I can't reproduce your issue. When I start 2 containers using

docker run -d --rm -v myvolume:/app --name container1 debian tail -f /dev/null
docker run -d --rm -v myvolume:/app --name container2 debian tail -f /dev/null

and then do

docker exec container1 /bin/sh -c 'echo hello > /app/hello.txt'
docker exec container2 cat /app/hello.txt

it prints out 'hello' as expected.

CodePudding user response:

You are mounting the volume over /app, the directory that contains your application code. That hides the code and replaces it with something else.

The absolute best approach here, if you can handle it, is to avoid sharing files at all. Keep the data somewhere like a relational database (which may be stateful). Don't mount anything on to your containers. Especially if you're looking ahead to a clustered environment like Kubernetes, sharing files can be surprisingly tricky.

If you can't get rid of the shared directory, then put it somewhere other than /app. You might need to configure the alternate directory using an environment variable.

docker container run ... \
 --mount type=volume,src=my_volume,dst=/data \  # /data, not /app
 ...
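On the application side, the script can then read the data directory from an environment variable instead of hard-coding it. A minimal sketch, assuming a hypothetical `DATA_DIR` variable that you'd pass with `-e DATA_DIR=/data`:

```python
import os

# DATA_DIR is a hypothetical variable name; fall back to the current
# directory so the script still runs when no mount is configured.
data_dir = os.environ.get('DATA_DIR', '.')

if os.environ.get('LOG') == "1":
    # Build the log path under the configured data directory.
    with open(os.path.join(data_dir, 'record.log'), 'a') as file:
        file.write("file11111")
```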

What's actually happening in your setup is that Docker has a feature that copies the contents of the image into an empty named volume on first use. This only happens if the volume is completely empty, it only happens with a named Docker volume and not with bind mounts, and it doesn't happen on other container systems like Kubernetes. (I'd discourage actually relying on this behavior.)

So when you run the first container, it sees that my_volume is empty and copies the contents of the test1 image into it; the container then sees the code it expects in /app and appears to work fine. The second container sees that my_volume is non-empty, so the volume contents (with the first image's code) hide what was in the second image. I'd expect that, if you started from scratch, whichever of the two containers you started first would work, but not the other; and if you change the code in the working image, a new container won't see that change (it will use the code from the volume).
