Docker "artifact image" vs "services image" vs "single FROM image" vs


I'm trying to understand the pros and cons of these four methods of packaging an application using Docker after development:

  1. Use a very lightweight image (such as Alpine) as the base of the image containing the main artifact, then update the original Docker Compose file to use it alongside the other services when creating and deploying the final containers.

  2. Something else I could do is run docker commit first, then use the resulting image as the base image of my artifact image.

  3. Another method would be to use only a single FROM, basing my image on one of the required services, and then use RUN commands to install the other required services as Linux packages (e.g. apt-get install another-service) inside the image.

  4. Should I use multiple FROMs for those images? Wouldn't that be complicated and only needed in more complex projects? It also seems unclear in what order those FROMs should be written if none of them is any more important than the others as far as my application is concerned.

During development, I used a Docker Compose file to run multiple containers, and developed a web application against them (accessing files on the host machine through a bind mount). Now I want to write a Dockerfile to create an image that contains my application's artifact plus the services present in the initial Compose file.

CodePudding user response:

I'd suggest these rules of thumb:

  1. A container only runs one program. If you need multiple programs (or services), run multiple containers.
  2. An image contains the minimum necessary to run its application, and no more (and no less -- do not depend on bind mounts for the application to be functional).

I think these best match your first option. Your image is built FROM a language runtime, COPYs its code in, and does not include any other services. You can then use Compose or another orchestrator to run multiple containers in parallel.

Using Node as an example, a super-generic Dockerfile for almost any Node application could look like:

# Build the image FROM an appropriate language runtime
FROM node:16

# Install any OS-level packages, if necessary.
# RUN apt-get update \
#  && DEBIAN_FRONTEND=noninteractive \
#     apt-get install --no-install-recommends --assume-yes \
#       alphabetical \
#       order \
#       packages

# Set (and create) the application directory.
WORKDIR /app

# Install the application's library dependencies.
COPY package.json package-lock.json ./
RUN npm ci

# Install the rest of the application.
COPY . .
# RUN npm run build

# Set metadata for when the application is run.
EXPOSE 3000
CMD npm run start

A matching Compose setup that includes a PostgreSQL database could look like:

version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
    environment:
      PGHOST: db
  db:
    image: postgres:14
    volumes:
      - dbdata:/var/lib/postgresql/data
    # environment: { ... }
volumes:
  dbdata:

Do not try option (3), running multiple services in one container. This is complex to set up, it's harder to manage if one of the components fails, and it makes it difficult to scale the application under load (you can usually run multiple application containers against a single database).
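
As a sketch of that last point, scaling only the application tier with Compose can be a single command; this assumes the app service no longer publishes a fixed host port (remove the ports: entry or put a reverse proxy in front), since several copies of the container cannot all claim host port 3000:

# Start the database and three copies of the application service.
docker-compose up -d --scale app=3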

Option (2) suggests doing setup interactively and then running docker commit to produce an image from the container. You should almost never run docker commit, except maybe in an emergency when you haven't configured persistent storage on a running container; it's not part of your normal workflow at all. (Similarly, minimize use of docker exec and other interactive commands, since their work will be lost as soon as the container exits.) A related command, docker save, is only useful for moving built images from one place to another in environments where you can't run a Docker registry.
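
If you ever do need that registry-less workflow, the save/load round trip is just a couple of commands. A minimal sketch, using myapp:1.0 as a hypothetical tag standing in for whatever you build:

# Build and tag the image, then write it out as a tar file.
docker build -t myapp:1.0 .
docker save -o myapp-1.0.tar myapp:1.0

# Copy the tar file to the target host, then load it there.
docker load -i myapp-1.0.tar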

Finally, option (4) describes multi-stage builds. The most obvious use of these is to keep build tools out of the final image; for example, in the Node example above, we could RUN npm run build in a first stage, then have a final stage, also FROM node, that runs NODE_ENV=production npm ci to skip the devDependencies from package.json and COPY --from=build-stage the built application. This is also useful with compiled languages, where a first stage contains the (very large) toolchain and the final stage only contains the compiled executable. This is largely orthogonal to the other parts of the question; you could update the Dockerfile I show above to use a multi-stage build without changing the Compose setup at all.
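
A multi-stage version of the Node Dockerfile above could look like the following sketch; it assumes the build script writes its output to dist/ and that the production start script only needs that directory plus the production dependencies:

# First stage: install devDependencies and build the application.
FROM node:16 AS build-stage
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: production dependencies plus the built output only.
FROM node:16
WORKDIR /app
COPY package.json package-lock.json ./
RUN NODE_ENV=production npm ci
COPY --from=build-stage /app/dist ./dist
EXPOSE 3000
CMD npm run start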

Do not bind-mount your application code into the container. This hides the work that the Dockerfile does, and the host filesystem may have a different layout from the image (possibly due to misconfiguration). It means you're "running in Docker", with the complexities that entails, but not actually running the image you'll deploy. I'd recommend using a local development environment (try running docker-compose up -d db to get a database) and then using this Docker setup for final integration testing.
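
Day-to-day development can then look like the following sketch; it assumes you add a ports: ['5432:5432'] mapping to the db service (the Compose file above doesn't publish it) and export whatever PGUSER/PGPASSWORD settings your application expects:

# Start only the database container in the background.
docker-compose up -d db

# Run the application directly on the host against it.
PGHOST=localhost npm run start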
