I am building a Docker image and need to run pip install against a private PyPI index that requires credentials. What is the best way to secure those credentials? Putting them in configuration files (pip.conf, requirements.txt, .netrc) is still a vulnerability even if I delete the files afterwards, because they can be recovered from the image layers. Environment variables are also visible in the final image. What's the most secure approach?
CodePudding user response:
I understand that you want to provide those credentials at build time and get rid of them afterwards.
The most secure way to handle this with pip is a multi-stage build.
First, declare an initial build image that contains the configuration files with the credentials, plus any dependencies needed to download or compile your packages. Don't worry about those files being recoverable: they exist only in the build stage, which never ships.
Then define your final image without the build dependencies, copying in only the source code you want to run and the installed dependencies from the build stage. The resulting image won't contain the configuration files, and they can't be recovered from it, because they were never part of its layers.
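# Build stage: the credential files and build tools exist only here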
FROM python:3.10-slim as build
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential gcc
WORKDIR /usr/app
RUN python -m venv /usr/app/venv
ENV PATH="/usr/app/venv/bin:$PATH"
# [HERE YOU COPY YOUR CONFIGURATION FILES WITH CREDENTIALS]
COPY requirements.txt .
RUN pip install -r requirements.txt
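# Final stage: starts from a clean base, so the credential files are never in its layers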
FROM python:3.10-slim
WORKDIR /usr/app
COPY --from=build /usr/app/venv ./venv
# [HERE YOU COPY YOUR SOURCE CODE INTO YOUR CURRENT WORKDIR]
ENV PATH="/usr/app/venv/bin:$PATH"
ENTRYPOINT ["python", "whatever.py"]
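For illustration, a minimal sketch of the credential step, assuming the credentials live in a pip.conf next to the Dockerfile; the index URL, username and password below are hypothetical placeholders, not part of your setup:
# pip.conf (placeholder values)
[global]
index-url = https://username:password@pypi.example.com/simple
In the build stage, copy it to a location pip reads by default, for example:
COPY pip.conf /etc/pip.conf
Then build as usual (e.g. docker build -t myapp .). The file only exists in the intermediate build stage; the final image, and therefore anything you push to a registry, never contained it.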