Container deployment with self-managed kubernetes in AWS

Time: 05-11

I am relatively new to AWS and kubernetes. I have created a self-managed kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:

kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml

How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline? Also, where should I store my deployment YAML files? (Currently I store them locally on the master node.)

I am missing some best practices for this process.

CodePudding user response:

Your YAML manifests shouldn't live on your master node (ever); they should be stored in a version control system (such as GitHub, GitLab, Bitbucket, etc.).

To automate the deployment of your Docker image whenever a new artifact version lands in ECR, you can use a great tool named FluxCD. It is very simple to install (https://fluxcd.io/docs/get-started/), and you can easily configure it to automatically deploy your images to your cluster each time a new image appears in your ECR registry.
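As a minimal sketch of the Flux setup (the names, repo URL, and paths below are hypothetical placeholders, not from the question): you point a `GitRepository` source at the repo holding your manifests, and a `Kustomization` tells Flux to reconcile them into the cluster.

```yaml
# Hypothetical example: Flux watches a Git repo of manifests
# and applies them to the cluster on an interval.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-service          # placeholder name
  namespace: flux-system
spec:
  interval: 1m              # how often to poll the repo
  url: https://github.com/example/my-manifests  # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-service
  namespace: flux-system
spec:
  interval: 5m              # reconcile the cluster state every 5 minutes
  path: ./deploy            # placeholder path to your deployment YAMLs
  prune: true               # delete resources removed from the repo
  sourceRef:
    kind: GitRepository
    name: my-service
```

With this in place, `kubectl delete` / `kubectl apply` by hand is no longer needed: editing the manifests in Git is the deployment step.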

This way your CodePipeline builds the code, runs the tests, builds the image, tags it, and pushes it to ECR, while FluxCD deploys it to Kubernetes. FluxCD can also be configured natively to reconcile your cluster every X minutes (based on your configuration), so even a small change to your manifests gets deployed automatically!
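The "deploy on new ECR image" part is handled by Flux's image automation controllers. A rough sketch, assuming a semver tagging scheme (the registry URL, image name, and version range are placeholders):

```yaml
# Hypothetical example: Flux scans an ECR repo for new tags
# and an ImagePolicy picks the latest one matching a semver range.
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-service
  namespace: flux-system
spec:
  image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service  # placeholder
  interval: 1m       # how often to scan ECR for new tags
  provider: aws      # use native AWS auth for ECR
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    semver:
      range: '>=1.0.0'  # placeholder; pick the newest tag in this range
```

An `ImageUpdateAutomation` resource can then commit the selected tag back into your manifest repo, which closes the loop: CodePipeline pushes the image, Flux updates Git and deploys.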

bguess

CodePudding user response:

You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline. Argo CD was designed specifically for Kubernetes and thus offers a much better way to deploy to K8s.
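With Argo CD the setup is similar in spirit: an `Application` resource points at the Git repo holding your manifests and keeps the cluster in sync with it. A minimal sketch (repo URL, paths, and names are hypothetical):

```yaml
# Hypothetical example: an Argo CD Application that auto-syncs
# the manifests in a Git repo to the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-manifests  # placeholder repo
    targetRevision: main
    path: deploy            # placeholder path to the deployment YAMLs
  destination:
    server: https://kubernetes.default.svc  # deploy into the same cluster
    namespace: default
  syncPolicy:
    automated:
      prune: true           # remove resources deleted from Git
      selfHeal: true        # revert manual drift in the cluster
```

`automated` sync means a Git push (e.g. bumping the image tag in the deployment YAML) is all it takes to roll out a new version.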
