Copying code using Dockerfile or mounting volume using docker-compose


I am following the official tutorial on the Docker website.

The Dockerfile is:

FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/

The docker-compose.yml is:

  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code

I do not understand why the Dockerfile copies the code with COPY . /code/ but docker-compose then mounts it again with - .:/code. Isn't it enough to either copy or mount?

CodePudding user response:

Both the volumes: and command: in the docker-compose.yml file are unnecessary and should be removed. The code and the default CMD to run should be included in the Dockerfile.

When you're setting up the Docker environment, imagine that you're handed root access to a brand-new virtual machine with nothing installed on it but Docker. The ideal case is being able to docker run your-image, as a single command, pulling it from some registry, with as few additional options as possible. When you run the image you shouldn't need to separately supply its source code or the command to run; these should usually be built into the image.
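Concretely, that means adding a CMD to the Dockerfile from the question, so the image carries its own default command. A minimal sketch (the port and runserver command mirror the Compose file in the question):

FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
# Install dependencies first so this layer is cached across code-only changes
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
# Bake the default command into the image, so plain `docker run` works
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]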

In most cases you should be able to build a Compose setup with fairly few options. Each service needs an image: or build: (both, if you're planning to push the image), often environment:, ports:, and depends_on: (being aware of the limitations of the latter option), and your database container(s) will need volumes: for their persistent state. That's usually it.
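For instance, a skeletal pair of services might look like the sketch below; note that depends_on: only controls start order, so the application must still tolerate a database that isn't ready yet:

services:
  app:
    build: .
    depends_on:
      - db   # starts db first, but does not wait for it to be ready
  db:
    image: postgres:13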

The one case where you do need to override command: in Compose is if you need to run a separate command on the same image and code base. In a Python context this often comes up when running a Celery worker next to a Django application.

Here's a complete example: a database-backed web application with an async worker. The Redis cache layer has no persistence, and no files are stored locally in any container except for the database storage. The one thing missing is the setup for the database credentials, which requires additional environment: variables. Note the lack of volumes: for code, the single command: override where required, and the environment: variables providing host names.

version: '3.8'
services:
  app:
    build: .                 # code and default command come from the image
    ports: ['8000:8000']
    environment:
      REDIS_HOST: redis      # Compose service names double as host names
      PGHOST: db
  worker:
    build: .                 # same image and code base as the app
    command: celery -A queue worker -l info   # the one command: override
    environment:
      REDIS_HOST: redis
      PGHOST: db
  redis:
    image: redis:latest
  db:
    image: postgres:13
    volumes:
      - pgdata:/var/lib/postgresql/data   # the only persistent state
volumes:
  pgdata:

Where you do see volumes: overwriting the image's code like this, it's usually an attempt to avoid rebuilding the image when the code changes. With the Dockerfile you show, though, the rebuild is almost free as long as requirements.txt hasn't changed: Docker's layer cache can reuse the expensive pip install step and only re-run the final COPY.

It's also almost always possible to do your day-to-day development outside of Docker (for Python, in a virtual environment) and use the container setup for integration testing and deployment. This will generally be easier than convincing your IDE that the language interpreter it needs is in a container.
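For example, a typical out-of-Docker development loop might look like this (a sketch, assuming a standard Django project layout):

# Create and activate a local virtual environment
python -m venv venv
. venv/bin/activate
# Install the same dependencies the image would get
pip install -r requirements.txt
# Run the development server directly on the host
python manage.py runserver
# Rebuild the image only for integration testing and deployment
docker-compose build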

Sometimes the Dockerfile does additional setup (changing line endings or permissions, rearranging files) and the volumes: mount will hide this. It means you're never actually running the code built into the image in development, so if the image setup is buggy in some way you won't see it. In short, it reintroduces the "works on my machine" problem that Docker generally tries to avoid.
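As a hypothetical illustration, a Dockerfile might end with setup steps like these; with a bind mount over /code, none of them have any effect on what actually runs:

COPY . /code/
# Normalize Windows line endings so the script runs under Linux
RUN sed -i 's/\r$//' manage.py
# Ensure the script is executable inside the image
RUN chmod +x manage.py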

CodePudding user response:

It is used to save the code with the image.

When you use COPY, the code is saved as part of the image itself, so it travels with the image wherever it runs.

Mounting, by contrast, is only for development: the container sees your live working directory instead of the copy baked into the image.
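If you do want a bind mount for local development without giving up a self-contained image, one common pattern is a Compose override file, which docker-compose merges into docker-compose.yml automatically when present. A sketch, assuming the service is named web as in the question:

# docker-compose.override.yml -- development only; run
# `docker-compose -f docker-compose.yml up` to ignore it in production
services:
  web:
    volumes:
      - .:/code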
