I need to build a demo Kubernetes cluster in AWS using kubeadm.
Unfortunately, for several reasons, Kops and EKS are out of the question in my current environment.
How do I deal with things such as auto-scaling and auto joining worker nodes back to the master if they get terminated for any reason? This is my main concern.
I've done this with Kops in the past and it's relatively straightforward, but I'm not sure how to manage it with kubeadm.
CodePudding user response:
If you're using Ansible, you can set up your launch configuration's user data to pull a Git repo and run a playbook that fetches a join token from the control plane and runs kubeadm join on the new worker node.
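As a rough sketch, the launch configuration's user data could look something like this; the repo URL, playbook name, and package manager are placeholders, not details from the answer:

```shell
#!/bin/bash
# Hypothetical user-data script for the worker launch configuration.
# The repo URL and playbook name below are placeholders.
set -euo pipefail

# Install prerequisites (assumes an Amazon Linux AMI; use apt on Ubuntu).
yum install -y git ansible

# Pull the bootstrap repo and run the join playbook. The playbook is
# expected to obtain a join token from the control plane and execute
# kubeadm join on this node.
git clone https://github.com/example/cluster-bootstrap.git /opt/bootstrap
ansible-playbook /opt/bootstrap/join-worker.yml
```

Because this runs in user data, every instance the ASG launches (including replacements for terminated nodes) joins the cluster on its own.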
CodePudding user response:
Cluster Autoscaler is what you're looking for.
Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:
- there are pods that failed to run in the cluster due to insufficient resources;
- there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.
This tool creates and removes instances in your Auto Scaling Group (ASG).
Cluster Autoscaler requires the ability to examine and modify EC2 Auto Scaling Groups.
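Concretely, that means granting the node (or autoscaler) IAM role a policy roughly like the following; the role and policy names are placeholders, and the action list follows the Cluster Autoscaler AWS documentation:

```shell
# Attach an inline policy so Cluster Autoscaler can inspect and resize
# Auto Scaling Groups. Role and policy names are placeholders.
cat > cluster-autoscaler-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*"
    }
  ]
}
EOF

aws iam put-role-policy \
  --role-name my-k8s-node-role \
  --policy-name cluster-autoscaler \
  --policy-document file://cluster-autoscaler-policy.json
```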
You only need to add a launch template with a bootstrap script so that newly created instances join your cluster.
If you use kubeadm to provision your cluster, it is up to you to automatically execute kubeadm join at boot time via some script.
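For instance, on the control plane `kubeadm token create --print-join-command` prints a ready-to-run join command; a boot script can fetch that (e.g. over SSH or SSM) and execute it. A minimal, self-contained sketch of what the assembled command looks like (all values are placeholders):

```shell
# Assemble the worker join command from values a boot script would fetch
# from the control plane. Endpoint, token, and CA hash are placeholders.
build_join_cmd() {
  local endpoint="$1" token="$2" ca_hash="$3"
  printf 'kubeadm join %s --token %s --discovery-token-ca-cert-hash sha256:%s\n' \
    "$endpoint" "$token" "$ca_hash"
}

# In a real boot script these values would come from something like:
#   ssh control-plane "sudo kubeadm token create --print-join-command"
build_join_cmd "10.0.0.10:6443" "abcdef.0123456789abcdef" "0123deadbeef"
```

Note that join tokens expire (24 hours by default), which is why generating a fresh one at boot is more robust than baking a fixed token into the launch template.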
If you have any questions about this tool, you can find answers in its FAQ.
You can also check the documentation on how to set it up on AWS.
On AWS, Cluster Autoscaler utilizes Amazon EC2 Auto Scaling Groups to manage node groups. Cluster Autoscaler typically runs as a Deployment in your cluster.
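As a sketch of that setup, assuming your ASG is tagged for auto-discovery and you substitute your own cluster name, the example manifest from the autoscaler repo can be applied directly:

```shell
# Deploy Cluster Autoscaler from the example auto-discovery manifest in the
# kubernetes/autoscaler repo (edit <YOUR CLUSTER NAME> in the manifest first).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

# The Deployment in that manifest runs the autoscaler with flags like:
#   --cloud-provider=aws
#   --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>
```

With auto-discovery, the autoscaler finds any ASG carrying those tags, so you don't have to list node groups explicitly.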