I started using Docker for a personal project and realized that it increases my development time by an unacceptable amount. I would rather spin up an LXC instance than rebuild images for every code change.
I heard there was a way to mount the code into the container, but I wasn't sure exactly how to go about it. I also have a docker-compose YAML file, but I think you mount a volume or something in the Dockerfile? The goal is for code changes not to require rebuilding the container image.
FROM ubuntu:18.04
EXPOSE 5000
# update apt
RUN apt-get update -y
RUN apt-get install -y --no-install-recommends build-essential gcc wget
# pip installs
FROM python:3.10
# TA-Lib
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
    tar -xvzf ta-lib-0.4.0-src.tar.gz && \
    cd ta-lib/ && \
    ./configure && \
    make && \
    make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
ADD app.py /
RUN pip install --upgrade pip setuptools
RUN pip install pymysql
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
RUN pip freeze >> /tmp/requirement.txt
COPY . /tmp
CMD ["python", "/tmp/app.py"]
RUN chmod +x ./tmp/start.sh
RUN ./tmp/start.sh
version: '3.8'
services:
  db:
    image: mysql:8.0.28
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    environment:
      - MYSQL_DATABASE=#########
      - MYSQL_ROOT_PASSWORD=####
  # client:
  #   build: client
  #   ports: [3000]
  #   restart: always
  server:
    build: server
    ports: [5000]
    restart: always
CodePudding user response:
Here's what I would suggest to make dev builds faster:
Bind mount code into the container
A bind mount is a directory shared between the container and the host. Here's the syntax for it:
version: '3.8'
services:
  # ... other services ...
  server:
    build: server
    ports: [5000]
    restart: always
    volumes:
      # Map the server directory into the container at /code
      - ./server:/code
The first part of the mount, ./server, is relative to the directory that the docker-compose.yml file is in. If the server directory and the docker-compose.yml file are in different directories, you'll need to change this part.
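If you want to try the mount outside of Compose, the same bind mount can be expressed with plain docker run; the image name here is illustrative, not from your project:

```shell
# Run the server image with ./server mounted at /code inside the container
docker run --rm -p 5000:5000 -v "$(pwd)/server:/code" my-server-image
```

Edits to files under ./server on the host then show up at /code in the container immediately.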
After that, you'd remove the part of the Dockerfile which copies code into the container. Something like this:
# pip installs
FROM python:3.10
# TA-Lib
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
    tar -xvzf ta-lib-0.4.0-src.tar.gz && \
    cd ta-lib/ && \
    ./configure && \
    make && \
    make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
RUN pip install --upgrade pip setuptools
RUN pip install pymysql
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
CMD ["python", "/code/app.py"]
The advantage of this approach is that when you hit 'save' in your editor, the change will be immediately propagated into the container, without requiring a rebuild.
Note about production builds: I don't recommend bind mounts when running your production server. In that case, I would recommend copying your code into the container instead of using a bind mount. This makes it easier to upgrade a running server. I typically write two Dockerfiles and two docker-compose.yml files: one set for production, and one set for development.
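One common way to keep the two setups in a single project is Compose's file-merging feature: keep the production config in docker-compose.yml, and layer the dev-only bind mount on top from a second file. The file name docker-compose.dev.yml below is just a convention, not something Compose requires:

```yaml
# docker-compose.dev.yml -- development-only overrides
services:
  server:
    volumes:
      - ./server:/code
```

Then start the dev stack by passing both files; later files override and extend earlier ones:

docker compose -f docker-compose.yml -f docker-compose.dev.yml up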
Install dependencies before copying code into container
One part of your Dockerfile is causing most of the slowness. It's this part:
ADD app.py /
# ... snip two lines ...
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
This defeats Docker's layer caching. Docker caches each layer and reuses it when nothing in that layer has changed. However, once a layer changes, every layer after it must be rebuilt. This means that changing app.py will cause the pip install --requirement /tmp/requirements.txt line to run again.
To make use of caching, you should follow the rule that the least-frequently changing files go in first, and the most-frequently changing files go last. Since you change the code in your project more often than you change which dependencies you're using, you should copy app.py in after you've installed the dependencies.
The Dockerfile would change like this:
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
# After installing dependencies
ADD app.py /
In my projects, I find that rebuilding a container without changing dependencies takes about a second, even if I'm not using the bind-mount trick.
For more information, see the documentation on layer caching.
Remove unused stage
You have two stages in your Dockerfile:
FROM ubuntu:18.04
# ... snip ...
FROM python:3.10
The FROM command means that you are throwing out everything in the image and starting from a new base image. This means that everything in between these two lines is not really doing anything. To fix this, remove everything before the second FROM statement.
Why would you use multi-stage builds? Sometimes it's useful to install a compiler, compile something, then copy just the result into a fresh image.
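A sketch of what that could look like here: compile TA-Lib in a throwaway builder stage, then copy only the installed artifacts into the final python:3.10 image. The /usr/local paths below are the TA-Lib configure defaults; verify them against your own build before relying on this:

```dockerfile
# Stage 1: full toolchain, used only to compile TA-Lib
FROM ubuntu:18.04 AS builder
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends build-essential gcc wget
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
    tar -xvzf ta-lib-0.4.0-src.tar.gz && \
    cd ta-lib/ && ./configure && make && make install

# Stage 2: fresh image; only the compiled library and headers come along
FROM python:3.10
COPY --from=builder /usr/local/lib/libta_lib* /usr/local/lib/
COPY --from=builder /usr/local/include/ta-lib /usr/local/include/ta-lib
RUN ldconfig
```

The builder stage, with its compilers and source tarball, is discarded entirely; only the two COPY --from lines contribute to the final image.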
Merge install and remove step
If you want to remove a file, you should do it in the same layer where you created the file. The reason for this is that deleting a file in a previous layer does not fully remove the file: the file still takes up space in the image. A tool like dive can show you files which are having this problem.
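You can see the per-layer sizes for yourself with docker history, which shows that the layer containing the rm frees no space; the image name here is illustrative:

```shell
docker build -t myapp .
# One line per layer: the layer that downloaded and built ta-lib keeps
# its full size even though a later layer deleted the files
docker history myapp
# dive opens an interactive per-layer file browser for the same image
dive myapp
```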
Here's how I would suggest changing this section:
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
    tar -xvzf ta-lib-0.4.0-src.tar.gz && \
    cd ta-lib/ && \
    ./configure && \
    make && \
    make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
Merge the rm into the previous step:
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
    tar -xvzf ta-lib-0.4.0-src.tar.gz && \
    cd ta-lib/ && \
    ./configure && \
    make && \
    make install && \
    cd .. && \
    rm -R ta-lib ta-lib-0.4.0-src.tar.gz