I am in the learning phase of Kubernetes and want to set up a CI/CD pipeline for my project. I am using Google Cloud and have the following elements ready:
- A 3-node cluster is deployed on Google Cloud.
- GitHub has been integrated with Google Cloud Build to trigger builds.
- I am using Helm to maintain my K8s templates. A `cloudbuild.yaml` has been developed to build the Docker image and push it to Google Container Registry.

I am stuck at this point: once `cloudbuild.yaml` has built the Docker image and pushed it to the registry, how do I use Helm to upgrade the chart?
Here is my sample `cloudbuild.yaml`:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ["build", "-t", "gcr.io/kubernetes-amit-test/github.com/0xvoila/apache/phoenix:$SHORT_SHA", "."]
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/kubernetes-amit-test/github.com/0xvoila/apache/phoenix:$SHORT_SHA"]
# This step is not working:
- name: "alpine/helm:latest"
  args: ["helm", "upgrade", "mychart", "image", "gcr.io/kubernetes-amit-test/github.com/0xvoila/apache/phoenix:$SHORT_SHA"]
```
My questions are:
- How can I use Helm to upgrade the chart with the latest image?
- As I am new to Kubernetes: is this even the best practice for K8s deployments? Do people even use Helm?
CodePudding user response:
> How can I use helm to upgrade the latest charts.
There is already a Helm cloud builder available: `gcr.io/$PROJECT_ID/cloud-builders-helm`

```yaml
- name: 'gcr.io/$PROJECT_ID/cloud-builders-helm'
  args: ['upgrade', '--install', 'filebeat', '--namespace', 'filebeat', 'stable/filebeat']
```
For managing chart versions, check: https://cloud.google.com/artifact-registry/docs/helm/manage-charts

See also the Helm cloud builder on GitHub.
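To actually roll out the new image, the Helm step can pass `$SHORT_SHA` to the chart via `--set`. This is only a sketch: it assumes your chart lives at `./mychart` in the repo and that its `values.yaml` exposes `image.repository` and `image.tag` values (adjust to your chart's actual layout):

```yaml
# Sketch only: the chart path ./mychart and the image.* value names are assumptions
- name: 'gcr.io/$PROJECT_ID/cloud-builders-helm'
  args:
    - 'upgrade'
    - '--install'
    - 'mychart'      # release name
    - './mychart'    # chart path in the repo
    - '--set'
    - 'image.repository=gcr.io/kubernetes-amit-test/github.com/0xvoila/apache/phoenix'
    - '--set'
    - 'image.tag=$SHORT_SHA'
```

Note that the builder already invokes `helm` for you, so `args` should not start with `"helm"` as in the original attempt.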
> As I am new to Kubernetes, is it even the best practice for K8s deployment? Do people even use helm?
Helm is one of the most widely used tools for managing K8s deployments, so it is a solid choice. I would suggest checking out Helm's atomic mode:

```shell
helm upgrade --install --atomic
```

which will also automatically roll back the deployment if it fails in K8s.

> `--atomic`: if set, the upgrade process rolls back changes made in case of a failed upgrade. The `--wait` flag will be set automatically if `--atomic` is used.
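In a Cloud Build step this might look like the following (the release name, chart path, and `5m` timeout are placeholders, not values from the question):

```yaml
- name: 'gcr.io/$PROJECT_ID/cloud-builders-helm'
  args: ['upgrade', '--install', '--atomic', '--timeout', '5m', 'myrelease', './mychart']
```

If the release does not become healthy within the timeout, Helm rolls it back to the previous revision instead of leaving a half-applied deployment.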
Extra:

Instead of hard-coding the GCR image name, you can also use substitution variables; this template will then work across branches and across repos:
```yaml
- id: 'build test core image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA', '.']
- id: 'push test core image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA']
```
Update:

Adding GKE cluster details to the Cloud Build step:

```yaml
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'deployment.yaml']
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
    - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
```
I am using `kubectl apply` here, but you can add these environment variables to your Helm step as well.
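For example, a hypothetical Helm step with the same `env` block attached (the release and chart names are placeholders, not from the question):

```yaml
- name: 'gcr.io/$PROJECT_ID/cloud-builders-helm'
  args: ['upgrade', '--install', 'mychart', './mychart']
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
    - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
```

These variables tell the builder which cluster to fetch credentials for before it runs `helm`, so the step can talk to your GKE cluster without any extra auth configuration.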
Full file:

```yaml
substitutions:
  _CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value
  _CLOUDSDK_CONTAINER_CLUSTER: standard-cluster-1 # default value

steps:
- id: 'set test core image in yamls'
  name: 'ubuntu'
  args: ['bash', '-c', 'sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'deployment.yaml']
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
    - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
```