I have multiple containers running separately, connected to each other through a network defined in docker-compose.yml, and my application runs fine. I want to create only one image for those multiple containers to deploy to my private repository (an image with tags), and I'd like to know the best practice for doing that.
docker-compose.yml
version: '3.1'

networks:
  lemp:

services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
      target: webserver
    container_name: webserver
    volumes:
      - ./src/app:/var/www/html/app
    ports:
      - "80:80"
    networks:
      - lemp
  php:
    build:
      context: .
      dockerfile: Dockerfile
      target: app
    container_name: app
    volumes:
      - ./src/app:/var/www/html/app
    ports:
      - "9000:9000"
    networks:
      - lemp
Dockerfile
FROM nginx:1.21.6-alpine AS webserver
COPY ./src/ ./var/www/html
COPY ./nginx/conf.d/app.conf /etc/nginx/conf.d/app.conf
EXPOSE 80 443
FROM php:7.4-fpm-alpine AS app
EXPOSE 9000
CodePudding user response:
You should plan to distribute your docker-compose.yml file, or perhaps a simplified version of it, as the standard way to run your combined application. If it requires two images, you'll need to push the two images separately to your repository; don't try to combine them. Do make sure the images are self-contained so you don't need the source code separately from the images to run them.
The docker-compose.yml file should roughly look like:
version: '3.8'
services:
  nginx:
    image: registry.example.com/nginx:${TAG:-latest}
    ports:
      - '80:80'
  php:
    image: registry.example.com/php:${TAG:-latest}
Calling out a couple of things here: I've removed the unnecessary networks: declarations (Compose provides a default network that works fine) and the unnecessary container_name: declarations. I've put in an image: line for each image in place of the build: block, and use an environment variable to inject the image tag. For the php container I've removed the ports: declaration since you probably don't want that externally accessible. Finally, for both containers I've removed the volumes: that override the image contents.
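If you prefer not to export TAG in every shell, Compose also reads an .env file sitting next to the docker-compose.yml and substitutes its values; a minimal sketch (the date-stamp value is only an example):
# .env (optional, not distributed) - substituted into ${TAG:-latest}
TAG=20220418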
Next to this, put a docker-compose.override.yml file. This is not something you'd distribute. It can say:
version: '3.8'
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
  php:
    build:
      context: .
      dockerfile: Dockerfile.php
    ports:
      - '9000:9000'
If you have both files, Compose merges their settings. So for a developer this adds in the ports: to directly access the PHP-FPM service if required, and build: blocks to explain how to build both images. Since the combined Compose configuration has both build: and image:, docker-compose build will build images with the specified names, tagged for your registry.
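If you want to see exactly how the two files combine, docker-compose config prints the merged, resolved configuration:
docker-compose config             # show the merged configuration from both files
docker-compose config --services  # or just list the services it defines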
You should have a separate Dockerfile for each image you're building. The Nginx image resembles what you already have; for the PHP-FPM container you need to make sure you COPY the code into the image.
# Dockerfile.nginx
FROM nginx:1.21.6-alpine
COPY ./src/ /var/www/html/
COPY ./nginx/conf.d/app.conf /etc/nginx/conf.d/app.conf
# Dockerfile.php
FROM php:7.4-fpm-alpine
COPY ./src/app/ /var/www/html/app/
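For this split to work, the Nginx configuration has to forward PHP requests to the PHP-FPM service by its Compose service name. You already have an app.conf; as a rough sketch of the relevant part, assuming the document root above and the service name php from the Compose file:
# nginx/conf.d/app.conf (sketch - adjust root and paths to match your layout)
server {
    listen 80;
    root /var/www/html;
    index index.php index.html;

    location ~ \.php$ {
        # "php" resolves to the PHP-FPM container on the Compose network
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}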
Now you can build and run the application locally. Double-check that it works correctly, without volumes: overwriting the image code.
docker-compose build
docker-compose up -d
curl http://localhost/
If this works, then you're set to distribute this. Pick a tag (a date stamp or the current source control ID are good choices), build the images, and push them to a Docker registry.
export TAG=20220418
docker-compose build
docker-compose push
Now you can copy only the docker-compose.yml file, but none of the other files we've touched, to the remote system, or put it in a GitHub repository, or something else. On that system, set $TAG to match and run docker-compose up as usual. Docker will automatically pull the images from the repository. Since the images are self-contained, the only thing you need is the docker-compose.yml file.
scp docker-compose.yml there:
ssh root@there
export TAG=20220418
docker-compose up -d
CodePudding user response:
It's unclear what you really need. You can publish the individual images to your registry and provide a downloadable Compose file for anyone to run those containers together; it will pull each image separately.
Otherwise, you would need to copy all the relevant steps from one Dockerfile into the other. Note that if each Dockerfile runs its own entrypoint/command (process), combining them means running multiple processes in one container, which is considered bad practice.
UPDATE
Looking at your example, you could install php-fpm into the Nginx container, copy the PHP files in, and serve the static content from there. However, I would definitely recommend keeping separate containers; Nginx should remain replaceable as a reverse proxy.
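For completeness, a rough sketch of what that combined image could look like (the Alpine package names are an assumption, and you would still need a process supervisor or custom entrypoint to run both nginx and php-fpm, which is exactly why separate containers are simpler):
# Dockerfile.combined (sketch - not recommended)
FROM nginx:1.21.6-alpine
# php7/php7-fpm package names assumed for this Alpine release
RUN apk add --no-cache php7 php7-fpm
COPY ./src/ /var/www/html/
COPY ./nginx/conf.d/app.conf /etc/nginx/conf.d/app.conf
# still needs a supervisor or entrypoint script to start both nginx and php-fpm
EXPOSE 80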
Also, you don't have a correct multi-stage Dockerfile (using FROM twice like this doesn't merge anything), and your Compose file is just building the same context twice and running it on two different ports.