I have a piece of software that I am considering making open source. Until now, I ran most of the code and its dependencies natively on the box I was using (e.g. Gunicorn, RabbitMQ, Redis, etc.), all installed with a simple install script.
I want to move to using Docker to handle this so we can ensure it runs the same on all machines.
In previous projects, I would have a Dockerfile for the app and a Docker Compose file to orchestrate dependency services like RabbitMQ or Redis. However, I am trying to work out the best way to go about this from now on.
Is it better to have a single Dockerfile, based on something like Ubuntu, that installs everything in one container, i.e. RabbitMQ and the app dependencies directly alongside each other? That way, to install it, all you need to do is docker run <image>.
Or is it better to stick with the Docker Compose approach to orchestrate things?
CodePudding user response:
Docker Compose is the better way.
Docker containers are meant to run only one service each (Gunicorn, RabbitMQ, or Redis). As the Docker documentation states: "It is generally recommended that you separate areas of concern by using one service per container."
This is also useful for people who will use your service (the Gunicorn app) and may already have an instance of Redis or RabbitMQ running. It lets them reuse their existing resources and run only what they need from your project.
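As a minimal sketch of that layout, a docker-compose.yml could look like the following (the service names, the port mappings, and the REDIS_URL / RABBITMQ_URL environment variables are assumptions here; substitute whatever settings your app actually reads):

```yaml
version: "3.8"

services:
  app:
    build: .                # Dockerfile for the Gunicorn app, next to this file
    ports:
      - "8000:8000"         # assumed Gunicorn bind port
    environment:
      # Hypothetical variable names; point these at the containers below,
      # or at an existing external instance instead.
      REDIS_URL: "redis://redis:6379/0"
      RABBITMQ_URL: "amqp://guest:guest@rabbitmq:5672/"
    depends_on:
      - redis
      - rabbitmq

  redis:
    image: redis:7-alpine

  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"       # management UI, optional
```

Someone who already runs Redis or RabbitMQ can simply delete those services from the file and point the environment variables at their own instances, which is exactly the reuse described above.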