Why didn't a pod trigger scale-up when working with an EKS node group?

Time: 09-21

I deployed a Kubernetes cluster with an EKS node group and deployed the Cluster Autoscaler based on this doc: https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html

The instance type is t3.large (2 vCPU, 8 GiB memory), and the node group size is:

desired_size = 1
max_size     = 3
min_size     = 1

When I deploy an Elasticsearch pod on this cluster:

containers:
        - name: es
          image: elasticsearch:7.10.1
          resources:
            requests:
              cpu: 2
              memory: 8Gi

I get this error:

Events:
  Type     Reason             Age                 From                Message
  ----     ------             ----                ----                -------
  Warning  FailedScheduling   57s (x11 over 11m)  default-scheduler   0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.
  Normal   NotTriggerScaleUp  49s (x54 over 10m)  cluster-autoscaler  pod didn't trigger scale-up: 1 Insufficient cpu, 1 Insufficient memory

I wonder why the autoscaler is not triggered.

One thing I can think of is that the pod's requested resources equal the node's maximum capacity. Is this the reason it can't scale up? Can the autoscaler combine the resources of several small nodes into one big one, e.g. spin up 3 small nodes whose combined resources are consumed by a single pod?

CodePudding user response:

The instance type's advertised size is not the node's actual allocatable capacity: Kubernetes reserves some CPU and memory for the kubelet and system daemons, so a t3.large exposes less than 2 CPU and 8 GiB to pods. Check with:

kubectl describe node <name> | grep Allocatable -A 7
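Because allocatable is below the raw instance size, a request of exactly 2 CPU / 8Gi can never fit on a t3.large, so the autoscaler knows adding another t3.large won't help. A sketch of requests that could fit (the values here are illustrative; read the real ceiling off your node's Allocatable):

```yaml
containers:
  - name: es
    image: elasticsearch:7.10.1
    resources:
      requests:
        # Hypothetical values: keep requests below the node's
        # Allocatable, which is less than 2 CPU / 8Gi on a t3.large.
        cpu: 1500m
        memory: 6Gi
```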

Update: You can add an additional node group whose ASG uses a larger instance type, so the autoscaler can select the right size. Ensure your ASGs are tagged so that the autoscaler can automatically discover them:

k8s.io/cluster-autoscaler/enabled
k8s.io/cluster-autoscaler/<cluster-name>
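As a sketch, assuming a hypothetical ASG named my-asg and a cluster named my-cluster (substitute your own), the tags can be applied with the AWS CLI:

```shell
# Hypothetical ASG and cluster names; substitute your own.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true" \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/my-cluster,Value=owned,PropagateAtLaunch=true"
```

The Cluster Autoscaler then picks up tagged groups when started with auto-discovery, e.g. `--node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster`.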