I've read multiple resources about how to make source code available to Docker containers in dev and in prod.
It's still not really clear to me when and how to use what, since I've read differing opinions on how to do this.
What my goal is:
In development I want to use bind mounts to directly see the source code changes reflected in the container. For production, I want the source code to be immutable.
I read that I can use Docker volumes for that. (I also read that I should not COPY or ADD the source into the container, to keep the image small.) But to me, volumes and bind mounts look almost the same.
The source code directory has to be accessible by two different containers. One of them writes into a cache directory from time to time.
Questions:
Is this the correct way to go (bind mounts in development and volumes in production)?
How can I have one docker-compose which depending on the environment either makes a bind-mount or a volume?
How can I have a staged build which builds the code and puts it into a volume which can then be used by two containers?
CodePudding user response:
It depends on the programming language you use.
For truly interpreted languages, the code is evaluated on every run. In that case you could use volume mapping.
But there are not that many truly interpreted languages around.
A language like Python compiles into *.pyc files, so volume mapping is not advisable.
If you want to try volume-mapping *.pyc files, go ahead, and have fun with a whole lot of weird effects :-)
My advice would be to build the image with the code in it. If you want fast watch/reload behavior during development, try running the application outside a container in development and use containers only for staging and production (if that is even possible with your application).
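If you go that route, one common pattern (a sketch only; the service, image, and port here are just examples) is a Compose file that runs only the backing services, while the application itself runs on the host with its native watch/reload tooling:

# docker-compose.dev.yml (sketch: only the backing services run in containers)
version: '3.8'
services:
  db:
    image: postgres:14
    ports:
      - '5432:5432'    # published so the app running on the host can reach it
    environment:
      POSTGRES_PASSWORD: dev

Start the dependencies with docker-compose -f docker-compose.dev.yml up -d, then run the application directly on the host, where file watching works natively.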
CodePudding user response:
You should keep the application code in the image (assuming it's an interpreted language).
In particular, Docker named volumes are especially hard to manage. You will wind up with a deployment sequence that involves starting a temporary container to get access to the volume, copying data into it, and then restarting your containers, and you'll lose a lot of Docker's capabilities, for example rolling back containers if a deployment fails. You're not even saving any space here, since the code still exists on the target system inside the volume.
Conversely, it's very easy to run two containers off the same image, and you don't have to do anything special for that.
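For reference, a minimal Dockerfile that bakes the code into the image might look like this (a sketch assuming a Node application; file names and the port are placeholders):

# Dockerfile (sketch, assuming a Node application)
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]

Both services below then reference the same image built from it.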
# docker-compose.yml
version: '3.8'
services:
  first:
    image: registry.example.com/application:${APPLICATION_TAG:-latest}
    ports:
      - '8080:8080'
  second:
    image: registry.example.com/application:${APPLICATION_TAG:-latest}
    environment:
      FIRST_URL: 'http://first:8080/'
# no volumes: on either container
Especially for production use, I've written the image tag as an environment-variable reference, so you can run:
APPLICATION_TAG=20220502 docker-compose up -d
curl http://localhost:8080
...
# nope, it's not working, roll back
APPLICATION_TAG=20220429 docker-compose up -d
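This assumes every build is pushed with a unique tag, for example a date stamp assigned by your CI system (the tag value here is just the one used above):

# build and push a uniquely tagged image (sketch)
docker build -t registry.example.com/application:20220502 .
docker push registry.example.com/application:20220502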
If you want to replace the application code with something totally different, maybe what a developer has on their system, you can add a docker-compose.override.yml file, and Compose will merge the two files' settings.
# docker-compose.override.yml
version: '3.8'
services:
  first:
    build: .
    volumes:
      - .:/app
  second:
    build: .
    volumes:
      - .:/app
# no ports:, environment:, etc. on either container
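Compose picks up docker-compose.override.yml automatically when it sits next to docker-compose.yml, so development needs no extra flags; in production you can name the base file explicitly so the override is ignored:

# development: docker-compose.yml + docker-compose.override.yml are merged
docker-compose up -d

# production: only the named file is used, the override is skipped
docker-compose -f docker-compose.yml up -d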
You may also find it easier to ignore Docker during development and use ordinary host tools. There are a number of potential problems with the bind-mount approach: if the Dockerfile does any processing at all, the results won't be present in the bind-mounted content, and in Node in particular the bind mount also hides the application's library tree (there is a popular hack to store the node_modules directory in an anonymous volume). The bind mount can also reintroduce the "works on my system" problem, and I've seen a couple of examples of that go by on SO.
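For reference, that node_modules hack typically looks like this in a service's volumes: section (a sketch, not a recommendation):

volumes:
  - .:/app             # bind mount: host source code replaces /app
  - /app/node_modules  # anonymous volume: masks the bind mount at this path,
                       # so the node_modules installed in the image is used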