Best Practice for Docker inter-container communication


I have two Docker containers, A and B. Container A runs a Django application. Container B has a WebDAV source mounted. From container A, I want to check whether a folder exists in container B (under the WebDAV mount point).

What is the best way to do something like that? Currently I solve it by mounting the Docker socket into container A so that A can execute commands inside B. I am aware that mounting the Docker socket into a container is a security risk for the host and the whole application stack.

Other possible solutions would be to use SSH, or to share and mount the directory that should be checked. Of course there are further options, such as doing it with HTTP requests.

Because there are so many ways to solve a problem like this, I want to know whether there is a best practice (considering security, implementation effort, and performance) for executing commands from container A in container B.

Thanks in advance

CodePudding user response:

WebDAV provides a filesystem-like interface on top of HTTP, so I'd just use it directly. This requires almost no setup beyond providing the other container's name in configuration (and, if you're using plain docker run, putting both containers on the same network), and the setup is essentially the same in all container environments (Docker Swarm, Kubernetes, Nomad, AWS ECS, ...) as well as in a non-Docker development environment.
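To make this concrete: a directory-existence check is a single WebDAV PROPFIND request. Here is a minimal sketch using Python's requests library; the container name webdav-container, the port, and the /dav/ path are assumptions you'd replace with your own setup:

```python
import requests

# Hypothetical URL: "webdav-container" is the other container's name on the
# shared Docker network; adjust host, port, and path to your setup.
WEBDAV_URL = "http://webdav-container:80/dav/some/folder/"


def folder_exists(url: str) -> bool:
    """Return True if the WebDAV resource at `url` exists.

    PROPFIND with "Depth: 0" asks only about the resource itself.
    A 207 Multi-Status response means it exists; 404 means it does not.
    """
    resp = requests.request(
        "PROPFIND",
        url,
        headers={"Depth": "0"},
        # auth=("user", "password"),  # uncomment if your server needs it
        timeout=5,
    )
    return resp.status_code == 207


if __name__ == "__main__":
    print(folder_exists(WEBDAV_URL))
```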

Of the other options you suggest:

  • Sharing a filesystem is possible, but it leads to permission problems that can be tricky to iron out. There are also security issues if the client container isn't supposed to be able to write the files, and it may not work well in clustered environments like Kubernetes.

  • ssh is very hard to set up securely in a Docker environment. You don't want to hard-code a plain-text password that can be easily recovered from docker history; a best-practice setup would require generating host and user keys outside of Docker and bind-mounting them into both containers (I've never seen a setup like this in an SO question). This also brings the complexity of running multiple processes inside a container.

  • Mounting the Docker socket is complicated, non-portable across environments, and a massive security risk (you can very easily use the Docker socket to root the entire host). You'd need to rewrite that code for each different container environment you might run in. This should be a last resort; I'd consider it only if creating and destroying containers would need to be a key part of this one container's operation.

Is there a best practice to execute commands from container A in container B?

"Don't." Rearchitect your application to have some other way to communicate between the two containers, often over HTTP or using a message queue like RabbitMQ.

CodePudding user response:

One solution would be to mount a filesystem read-only in one container and read-write in the other.

See this answer: Docker, mount volumes as readonly
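As a sketch of that pattern, a docker-compose file can attach the same named volume read-write to one service and read-only to the other. Service, image, and volume names here are assumptions:

```yaml
version: "3.8"

services:
  container-b:
    image: my-webdav-image          # hypothetical image that provides the data
    volumes:
      - shared-data:/mnt/shared     # read-write on the owning container

  container-a:
    image: my-django-image          # hypothetical Django application image
    volumes:
      - shared-data:/mnt/shared:ro  # read-only on the checking container

volumes:
  shared-data:
```

Note that this shares an ordinary directory; if the WebDAV source is FUSE-mounted inside container B at runtime, that mount will not automatically be visible through the shared volume.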
