I'm running into the (apparently common) issue that I'm dockerizing applications that used to run on one machine, and these applications now need to run in different containers (because that's the Docker paradigm and how things should be done). Currently I'm struggling with Postfix and Dovecot: people have found separating them painful enough that there are tons of images running both Dovecot and Postfix in one container. I'm doing my best to do this right, but the lack of examples using the inet protocol (over TCP) makes it too painful to continue, to say nothing of bad logging and things that just don't work. I digress.
The question
Is it correct to have shared docker volumes that have socket files shared across different containers, and expect them to communicate correctly? Are there limitations that I have to be aware of?
Bonus: Out of curiosity, can this be extended to virtual machines?
CodePudding user response:
It is generally not recommended to share socket files across different Docker containers, as this can lead to issues with communication and synchronization between the containers.
Each Docker container runs in its own isolated environment, so two containers can talk through a socket file only if both mount the same filesystem location and run under the same kernel. Even when that condition is met, you have to manage file permissions, startup ordering, and stale socket files yourself, which makes the approach fragile.
If you need to communicate between two containers, it is better to use the built-in networking capabilities of Docker, such as network bridges or overlay networks. This will allow the containers to communicate with each other using the network, rather than relying on shared socket files.
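To make the TCP recommendation concrete, here is a minimal sketch of what socket-level communication between two services looks like. The listener and client run as threads in one process purely for illustration; in a real deployment each side would live in its own container, the server would bind `0.0.0.0`, and the client would connect to the other container's service name on the Docker network. The port number and the toy "LMTP-flavored" exchange are illustrative assumptions, not real Postfix/Dovecot protocol handling.

```python
import socket
import threading

# Set up the listener first so the client can't race the accept call.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # inside a container you would bind 0.0.0.0
port = srv.getsockname()[1]  # ephemeral port for this demo
srv.listen(1)

def server():
    # Stands in for the receiving container (e.g. Dovecot's LMTP listener).
    conn, _ = srv.accept()
    data = conn.recv(1024)
    conn.sendall(b"250 OK: " + data)  # toy reply, not real LMTP
    conn.close()

t = threading.Thread(target=server)
t.start()

# Stands in for the sending container (e.g. Postfix delivering mail).
# Across containers this would be create_connection(("dovecot", 24)).
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"LHLO mail.example")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply.decode())  # → 250 OK: LHLO mail.example
```

The point is that nothing here depends on a shared filesystem: the containers only need to reach each other over the Docker network, which works the same on one host or many.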
Regarding your curiosity about extending this to virtual machines: sharing socket files between virtual machines does not work at all. A Unix socket is an object in the kernel, and each virtual machine runs its own kernel, so communication between virtual machines has to go over the network.
In summary, it is not recommended to share socket files between different Docker containers or virtual machines, as this can lead to communication issues. Instead, use the built-in networking capabilities of Docker or your virtualization software to enable communication between containers or virtual machines.
CodePudding user response:
A Unix socket can't cross VM or physical-host boundaries. If you're thinking about ever deploying this setup in a multi-host setup like Kubernetes, Docker Swarm, or even just having containers running on multiple hosts, you'll need to use some TCP-based setup instead. (Sharing files in these environments is tricky; sharing a Unix socket actually won't work.)
If you're using Docker Desktop, also remember that it runs a hidden Linux virtual machine, even on native Linux. That may limit your options. There are other setups that more directly use a VM; my day-to-day Docker turns out to be Minikube, for example, which runs a single-node Kubernetes cluster with a Docker daemon in a VM.
I'd expect sharing a Unix socket to work only if the two containers are on the same physical system, and inside the same VM if appropriate, and with the same storage mounted into both (not necessarily in the same place). I'd expect putting the socket on a named Docker volume mounted into both containers to work. I'd probably expect a bind-mounted host directory to work only on a native Linux system not running Docker Desktop.
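A small sketch of why those conditions matter: the socket file carries no data itself; it only names a kernel object, so both peers must see the same path through the same kernel. Here a temporary directory stands in for the shared named volume, and the two threads stand in for the two containers; the directory and socket names are made up for the demo.

```python
import os
import socket
import tempfile
import threading

# A temp directory stands in for a named Docker volume mounted into
# both containers (not necessarily at the same path in each).
sock_dir = tempfile.mkdtemp()
sock_path = os.path.join(sock_dir, "app.sock")

# "Container A" creates the socket; binding writes the socket file.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(1)

def server():
    conn, _ = srv.accept()
    conn.sendall(b"hello from container A")
    conn.close()

t = threading.Thread(target=server)
t.start()

# "Container B" connects purely by path -- this only works because
# both sides share the mount and the kernel.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(sock_path)
msg = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(msg.decode())  # → hello from container A
```

Copying `app.sock` to another machine would just copy a dead filesystem entry, which is exactly why this cannot be extended across VM or physical-host boundaries.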