I have a Google Cloud Build trigger connected to my GitHub repository that builds Docker containers. Builds take a really long time whenever I update my code, so I want to enable caching. To do that, I changed the trigger configuration from Dockerfile (the previous setting, which is what takes so long) to Cloud Build configuration file.
My cloudbuild.yaml looks like this:
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/project/github.com/user/repo_name:$COMMIT_SHA
  - --cache=true
  - --cache-ttl=6h
  - --dockerfile=Dockerfile
timeout: 7200s
But when I run it like this, it always starts from scratch, and even though the build succeeds, the image doesn't show up under the Images section of Container Registry, where my builds are usually registered and where I want them to be.
How can I get Kaniko to cache my builds so each commit to GitHub doesn't take as long?
I'm using Kubernetes and Docker for the build.
CodePudding user response:
If you are using a Docker image build, you can use --cache-from.
The easiest way to increase the speed of your Docker image build is to specify a cached image to be used for subsequent builds. You can specify the cached image by adding the --cache-from argument to your build config file, which instructs Docker to build using that image as a cache source.
YAML example
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest || exit 0']
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t', 'gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest',
    '--cache-from', 'gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest',
    '.'
  ]
images: ['gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest']
Google's suggested best practice: https://cloud.google.com/build/docs/optimize-builds/speeding-up-builds
Update
Add the --destination argument to specify where you want to push your image:
"--destination=gcr.io/$PROJECT_ID/hello:$COMMIT_SHA"