Pods distribution across nodes


I have a question regarding the distribution of pods across nodes. Given a 3-node cluster, I want to deploy a pod with 2 replicas while making sure these replicas end up on different nodes, in order to achieve high availability. What are the options other than using nodeAffinity?

CodePudding user response:

First of all, node affinity allows you to constrain which nodes your Pod is eligible to be scheduled on, based on labels on the node. It therefore does not guarantee that each replica lands on a different node, nor that the replicas are spread evenly across all of the nodes. A better fit here is Pod Topology Spread Constraints.

Pod Topology Spread Constraints rely on node labels to identify the topology domain(s) that each node is in, and use a label selector to count the matching Pods in each domain. You can define one or more topologySpreadConstraints to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster.

kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    node: node1
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        node: node1
  containers:
  - name: myapp
    image: image_name

maxSkew: 1 describes the degree to which Pods may be unevenly distributed. It must be greater than zero. Its semantics differ depending on the value of whenUnsatisfiable.

topologyKey: zone means the even distribution is only applied to nodes that carry a zone=<value> label.

whenUnsatisfiable: DoNotSchedule tells the scheduler to keep the incoming Pod pending if it can't satisfy the constraint.

labelSelector is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain.

You can find out more on Pod Topology Spread Constraints in this documentation: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
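For the original question (2 replicas spread across a 3-node cluster), the topology key can be the node hostname, so that each node counts as its own topology domain. Here is a minimal sketch, assuming a Deployment with a hypothetical app: myapp label and the same placeholder image_name:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        # every node carries the kubernetes.io/hostname label, so each node is its own domain
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: myapp
      containers:
      - name: myapp
        image: image_name

With maxSkew: 1 and DoNotSchedule, the two replicas cannot both be scheduled onto the same node as long as at least two nodes are schedulable.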

CodePudding user response:

If you need the guarantee that, no matter what, the replicas won't end up on the same node, you will have to use pod anti-affinity (podAntiAffinity). You can find a nice example here.
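As a minimal sketch of what that could look like, again assuming a hypothetical app: myapp label and placeholder image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          # hard requirement: never co-locate two Pods with the app=myapp label on one node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: myapp
            topologyKey: kubernetes.io/hostname
      containers:
      - name: myapp
        image: image_name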

On the other hand, if you need a more flexible solution, you can use topology spread constraints and even combine them with anti-affinity, as sketched below.
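A rough sketch of combining both in the same Pod template (same assumed app: myapp label; the zone key and "soft" spread are illustrative choices, not the only option):

# Pod template spec fragment
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone   # spread evenly across zones
    whenUnsatisfiable: ScheduleAnyway          # soft: prefer, but do not block scheduling
    labelSelector:
      matchLabels:
        app: myapp
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: myapp
        topologyKey: kubernetes.io/hostname    # hard: never two replicas on the same node
  containers:
  - name: myapp
    image: image_name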
