I have a docker container created by the following Dockerfile:
ARG TAG=latest
FROM continuumio/miniconda3:${TAG}
ARG GROUP_ID=1000
ARG USER_ID=1000
ARG ORG=my-org
ARG USERNAME=user
ARG REPO=none
ARG COMMIT=none
ARG BRANCH=none
ARG MAKEAPI=True
RUN addgroup --gid $GROUP_ID $USERNAME
RUN adduser --uid $USER_ID --disabled-password --gecos "" $USERNAME --ingroup $USERNAME
COPY . /api_maker
RUN /opt/conda/bin/pip install pyyaml psutil packaging
RUN apt-get update && apt-get install -y openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
ENV GIT_SSH_COMMAND="ssh -i /run/secrets/thekey"
RUN --mount=type=secret,id=thekey git clone [email protected]:$ORG/$REPO.git /repo
RUN /opt/conda/bin/python3 /api_maker/repo_setup.py $BRANCH $COMMIT
RUN /repo/root_script.sh
RUN chown -R $USERNAME:$USERNAME /api_maker
RUN chown -R $USERNAME:$USERNAME /repo
RUN mkdir -p /data
RUN chown -R $USERNAME:$USERNAME /data
RUN mkdir -p /working
RUN chown -R $USERNAME:$USERNAME /working
RUN mkdir -p /opt/conda/pkgs
RUN mkdir -p /opt/conda/envs
RUN chmod -R 777 /opt/conda
RUN touch /opt/conda/pkgs/urls.txt
USER $USERNAME
RUN /api_maker/user_env_setup.sh $MAKEAPI
CMD /repo/run_api.sh $@;
with the following run_api.sh script:
#!/bin/bash
cd /repo
PROCESSES=${1:-9}
LOCAL_DOCKER_PORT=${2:-7001}
exec /opt/conda/envs/environment/bin/gunicorn --bind 0.0.0.0:$LOCAL_DOCKER_PORT --workers=$PROCESSES restful_api:app
My app contains some signal handling. If I manually send SIGTERM to gunicorn (either a worker or the parent process) from inside the container, my signal handling works properly. However, it does not work when I run docker stop on the container. How can I make my shell script properly forward the SIGTERM it is supposedly receiving?
CodePudding user response:
You need to make sure the main container process is your actual application, and not a shell wrapper.
As you have the CMD currently, a shell invokes it, so the argument list $@ will always be empty: the invoking shell receives no positional parameters. The shell also parses /repo/run_api.sh, sees that it's followed by a semicolon, and assumes it might need to do something else afterwards, so it stays resident as the parent process. Even though your script correctly ends with exec gunicorn ... to hand off control directly to gunicorn, the script itself still runs underneath that shell wrapper, and when you docker stop the container, the SIGTERM goes to the shell wrapper, which does not forward it to its children.
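You can see the effect of exec outside Docker too. In this sketch (my own illustration, with sleep standing in for gunicorn), the exec'd process takes over the script's PID, which is why a signal delivered to that PID reaches the application directly:

```shell
#!/bin/sh
# demo.sh -- illustration only; `sleep` stands in for gunicorn.
echo "script pid: $$"
exec sleep 30   # replaces this shell in place: `sleep` inherits the
                # script's PID, so signals sent to that PID hit it directly
```

Backgrounding this script and running ps -o comm= -p on its PID reports sleep, not sh. Inside your container, the same kind of inspection (docker exec <container> ps -o pid,comm) shows whether PID 1 is gunicorn or a shell wrapper.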
The easiest way to avoid this shell is to use the exec form of CMD:

CMD ["/repo/run_api.sh"]
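Note that the exec form also removes the shell that would have expanded $@. If you still want to choose the worker count and port at docker run time, one sketch (reusing your script's existing defaults) is to split ENTRYPOINT and CMD:

```dockerfile
ENTRYPOINT ["/repo/run_api.sh"]
CMD ["9", "7001"]
```

Then docker run image 4 8000 overrides only the arguments, and the script still receives them as $1 and $2, exactly as it does now.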
With the exec form, your script runs directly, without a /bin/sh -c wrapper invoking it. When the script eventually execs another process, that process becomes the main container process (PID 1) and receives the SIGTERM sent by docker stop.
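As an aside: if a script ever cannot end in exec (for example, it must run cleanup after the server exits), the usual fallback is to forward the signal by hand with trap. A generic sketch (forward-term.sh is a hypothetical helper name, not part of your repo):

```shell
#!/bin/sh
# forward-term.sh -- run "$@" as a child and relay SIGTERM/SIGINT to it.
"$@" &
child=$!
trap 'kill -TERM "$child"' TERM INT
wait "$child"     # interrupted early if a trapped signal arrives
wait "$child"     # second wait reaps the child after the trap fires
```

You would invoke it as forward-term.sh gunicorn ...; your current exec approach is simpler and preferable whenever it's available.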