can't find python packages installed into customized docker image


I am creating a Docker container that runs Python 3.6.15. The pip install step in my Dockerfile runs during the build process, but when I try to execute functions in the container after it is built and running, the 'installed' packages do not exist.

For more context, here is my Dockerfile. For clarity, I am building a Docker image that gets uploaded to AWS ECR and used in a Lambda function, but I don't think that's entirely relevant to this question (good for context though):

# Define function directory
ARG FUNCTION_DIR="/function"

FROM python:3.6 as build-image

# Install aws-lambda-cpp build dependencies
RUN apt-get clean && apt-get update && \
  apt-get install -y \
  g++ \
  make \
  cmake \
  unzip \
  libcurl4-openssl-dev \
  ffmpeg

# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}

# Copy function code
COPY . ${FUNCTION_DIR}

# Install the runtime interface client
RUN /usr/local/bin/python -m pip install \
        --target ${FUNCTION_DIR} \
        awslambdaric

# Install the function's Python dependencies
COPY requirements.txt /requirements.txt
RUN /usr/local/bin/python -m pip install -r requirements.txt

# Multi-stage build: grab a fresh copy of the base image
FROM python:3.6

# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}

# Copy in the build image dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}

COPY entry-point.sh /entry_script.sh
ADD aws-lambda-rie /usr/local/bin/aws-lambda-rie
ENTRYPOINT [ "/entry_script.sh" ]

CMD [ "app.handler" ]

When I build the image in Terminal, I can see pip collecting and installing the packages from the requirements.txt file in my project's root. But when I then run the container and invoke the function, I get an ImportModuleError. To troubleshoot, I ran some docker exec commands against the running container, such as:

docker exec <container-id> bash -c "ls"  # This returns the folder structure which looks great

docker exec <container-id> bash -c "pip freeze". # This only returns 'pip', 'wheel' and some other basic Python modules

The only way I could solve it is to run this command after building and starting the container:

docker exec <container-id> bash -c "/usr/local/bin/python -m pip install -r requirements.txt"

This manually installs the modules; they then show up in pip freeze and I can execute the code. It is not ideal, though, as I would like pip install to run correctly during the build process so there are fewer steps in the future as I make changes to the code.

Any pointers as to where I am going wrong would be great, thank you!

CodePudding user response:

According to the Docker docs on multi-stage builds:

With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.

So the second FROM python:3.6 in the Dockerfile starts a brand-new stage from the base image, discarding the module installations made in the build stage.
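A minimal sketch of that behavior, using a hypothetical package (not from the question's Dockerfile):

FROM python:3.6 as build-image
RUN pip install requests          # lands in this stage's /usr/local/lib/python3.6/site-packages

FROM python:3.6                   # fresh stage: nothing from build-image exists here
RUN python -c "import requests"   # fails with ModuleNotFoundError unless the files are copied over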

The subsequent COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR} preserves what was installed into /function with --target (the awslambdaric package), but not the requirements.txt packages, which went into the build stage's system site-packages and were never copied across.
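One fix that keeps the structure of the original Dockerfile is to install the requirements with --target ${FUNCTION_DIR} as well, the same way awslambdaric is installed, so the existing COPY --from=build-image carries them into the final stage (a sketch, not a tested build):

# Install the function's dependencies into the function directory,
# so they survive the COPY into the final stage
COPY requirements.txt /requirements.txt
RUN /usr/local/bin/python -m pip install \
        --target ${FUNCTION_DIR} \
        -r /requirements.txt

Since the final stage sets WORKDIR ${FUNCTION_DIR} and the entry script starts Python from there, the targeted packages sit alongside the function code and should be importable. Copying the build stage's site-packages directory into the final image would work as well.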
