I have a question.
What's the best approach to building a Docker image that installs a pip artifact from Artifact Registry?
I have a Cloud Build pipeline that runs a Docker build, and the Dockerfile is basically just pip install -r requirements.txt; one of the dependencies is a library hosted in Artifact Registry.
When the step runs with the gcr.io/cloud-builders/docker image, I get an error that my Artifact Registry repo is not accessible, which is quite logical: access exists only from the image performing the step, not from the image being built inside that step.
Any ideas?
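For context, my setup looks roughly like this (the image and repo names below are just placeholders):

```yaml
# cloudbuild.yaml -- the whole build is a single docker build step
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
```

```dockerfile
# Dockerfile -- requirements.txt lists a package that only exists in my Artifact Registry repo
FROM python:3.10-slim
COPY requirements.txt .
RUN pip install -r requirements.txt   # <- this is where the access error happens
```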
Edit:
For now I will use Secret Manager to pass a JSON key into my Dockerfile, but I hope there is a better solution.
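Roughly what I'm planning, in case it helps anyone (the secret name, repo location and build-arg name are all made up; base64-encoding the key lets pip use the _json_key_base64 user in the index URL):

```yaml
# cloudbuild.yaml -- pull the service account key from Secret Manager and hand it to docker build
availableSecrets:
  secretManager:
    - versionName: 'projects/$PROJECT_ID/secrets/pypi-sa-key/versions/latest'
      env: 'SA_KEY'
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    secretEnv: ['SA_KEY']
    args:
      - '-c'
      - |
        docker build \
          --build-arg PIP_AUTH="$$(echo -n "$$SA_KEY" | base64 -w0)" \
          -t gcr.io/$PROJECT_ID/my-app .
```

```dockerfile
# Dockerfile -- use the key as the password for the _json_key_base64 user
ARG PIP_AUTH
RUN pip install -r requirements.txt \
    --extra-index-url "https://_json_key_base64:${PIP_AUTH}@europe-west1-python.pkg.dev/my-project/my-pypi/simple/"
```

I'm aware that build args end up in the image metadata, which is part of why I'd prefer a better solution.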
CodePudding user response:
I've not done this!
Assuming your Python artifacts are non-public, you'll need to enable authentication so that the Dockerfile is able to authenticate to the Artifact Registry repo before it attempts to pip install your private package(s).
NOTE If the artifacts are public, you can omit the authentication step, but you'll still need to configure the container's build environment (.pypirc / pip.conf) so that pip can locate your repo.
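For example, the pip side of that configuration would look something like this (region, project and repository names are placeholders; .pypirc carries the equivalent settings for uploading with twine):

```ini
# pip.conf -- point pip at the Artifact Registry repo in addition to PyPI
[global]
extra-index-url = https://europe-west1-python.pkg.dev/my-project/my-pypi/simple/
```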
Quickly reviewing the docs, Google provides a helper (gcloud artifacts print-settings python) to generate the .pypirc / pip.conf configuration.
You'll likely want to run the gcloud command in a step prior to the docker build to avoid having to add gcloud to the Dockerfile, and then use either the environment or a /workspace file to convey the .pypirc content into the docker build step (either as a --build-arg or a --volume).
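A rough, untested sketch of that two-step arrangement (repository name, location and file path are placeholders):

```yaml
# cloudbuild.yaml
steps:
  # 1. Generate the repo settings into the shared /workspace volume
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        gcloud artifacts print-settings python \
          --project=$PROJECT_ID \
          --repository=my-pypi \
          --location=europe-west1 > /workspace/pip-settings.txt

  # 2. Feed the generated settings into the image build
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        docker build \
          --build-arg PIP_SETTINGS="$$(cat /workspace/pip-settings.txt)" \
          -t gcr.io/$PROJECT_ID/my-app .
```

Note that without --json-key the printed settings assume the Artifact Registry keyring helper for authentication, so the Dockerfile would still need credentials (or one of the other approaches here) at pip install time.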
CodePudding user response:
When you use Cloud Build, you can forward metadata server access through the Docker build process. It's documented, but absolutely not obvious (personally, the first time around I emailed the Cloud Build PM to ask, and he sent me the documentation link).
With that in place, your docker build can access the metadata server and is authenticated as the Cloud Build runtime service account, which should make your process much easier.
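Concretely, this is the --network=cloudbuild option described in the Cloud Build docs: a RUN instruction in the Dockerfile can then reach the metadata server, and the Artifact Registry keyring backend picks up the service account credentials from there (repo location and image names are placeholders, and the Cloud Build service account needs the Artifact Registry Reader role):

```yaml
# cloudbuild.yaml -- expose the build's metadata server to the docker build
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '--network=cloudbuild'
      - '-t'
      - 'gcr.io/$PROJECT_ID/my-app'
      - '.'
```

```dockerfile
# Dockerfile -- the keyring backend resolves credentials via Application Default Credentials
FROM python:3.10-slim
COPY requirements.txt .
RUN pip install keyring keyrings.google-artifactregistry-auth && \
    pip install -r requirements.txt \
      --extra-index-url https://europe-west1-python.pkg.dev/my-project/my-pypi/simple/
```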