I created a simple application that sends me an email with the weather for a location I choose. It works as expected locally, but now I need to figure out how to pass my API key and email login credentials safely. At the moment they live in a .env file, and I don't want them exposed on GitHub. Since I'm using Azure DevOps as my CI/CD pipeline, do I pass them in there? My pipeline builds a Docker image, and I'm not sure how to pass a variable into a Dockerfile build. Here's my Dockerfile:
# AWS Lambda Python base image (assumed; it defines LAMBDA_TASK_ROOT)
FROM public.ecr.aws/lambda/python:3.9
# Copy requirements first so the pip layer stays cached when only app code changes
COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip install --no-cache-dir -r requirements.txt
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
ARG WEATHER_API_KEY
ARG EMAIL_USER
ARG EMAIL_PASSWORD
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_DEFAULT_REGION
ENV WEATHER_API_KEY=$WEATHER_API_KEY
ENV EMAIL_USER=$EMAIL_USER
ENV EMAIL_PASSWORD=$EMAIL_PASSWORD
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
ENV AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]
CodePudding user response:
You should neither put secrets in source control nor bake them into the image; both are insecure. Values passed in with ARG are recorded in the image metadata and can be read with docker history, and anything set via ENV ships with every copy of the image.
A common practice is to read environment variables at runtime instead: you provide them when running your container.
# Fetch the secret from wherever you store it (some-cli is a placeholder)
export API_KEY="$(some-cli get-key mykey)"
# -e API_KEY with no value forwards the variable from the host environment
docker run -e API_KEY myimage
Or set it inline, although then the secret ends up in your shell history:
docker run -e API_KEY="foobar" myimage
Then your code needs to read the variables from os.environ rather than a .env file.
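For example, at the top of app.py it could look like this (a minimal sketch; the variable names simply mirror the ARGs in your Dockerfile):

import os

# Required secrets: fail fast with a KeyError if one is missing
WEATHER_API_KEY = os.environ["WEATHER_API_KEY"]
EMAIL_USER = os.environ["EMAIL_USER"]
EMAIL_PASSWORD = os.environ["EMAIL_PASSWORD"]

# Optional settings can fall back to a default
AWS_DEFAULT_REGION = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")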
Depending on how you run your container this may look different, but the general idea remains the same.
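In Azure DevOps specifically, you can mark pipeline variables as secret (or pull them from Azure Key Vault); secret variables are not exposed to tasks automatically, so map them explicitly into the environment of the step that deploys or runs the container rather than into the docker build. And since your image targets AWS Lambda, there is no docker run in production anyway: you would set these as environment variables in the Lambda function's configuration instead.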