Node Allocation on Kubernetes nodes


I am managing a Kubernetes cluster with 10 on-prem nodes whose configurations are not identical: 5 nodes have 64 cores and 125 GB RAM, and 5 nodes have 64 cores and 256 GB RAM. I keep getting alerts that node CPU/memory usage is high, and I see pods being restarted because certain nodes reach 92-95% CPU and memory utilization. I want to apply CPU and memory allocation on the nodes so that utilization doesn't climb that high.

I tried manually editing the node configuration but that did not work.

Any leads for this will be helpful!

CodePudding user response:

In Kubernetes, you can limit resource usage for a pod's containers and reserve CPU/memory for each container by setting resource requests and limits, which helps avoid this problem:

---
apiVersion: v1
kind: Pod
metadata:
  name: <pod name>
spec:
  containers:
  - name: c1
    image: ...
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: c2
    image: ...
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

CodePudding user response:

Found the Kubernetes documentation for setting node-level allocatable resources.

Fixed it using the documents below:
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable
https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/
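
In case it helps others: those docs describe reserving resources for system daemons and the kubelet, and setting eviction thresholds, so the node advertises less allocatable capacity to the scheduler (Allocatable = Capacity - kube-reserved - system-reserved - hard eviction threshold). A rough sketch of the relevant KubeletConfiguration fields follows; the values here are assumptions and should be sized for your nodes:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# reserve resources for Kubernetes system daemons (kubelet, container runtime)
kubeReserved:
  cpu: "1"
  memory: "2Gi"
# reserve resources for OS system daemons (sshd, systemd, ...)
systemReserved:
  cpu: "500m"
  memory: "1Gi"
# hard eviction thresholds: the kubelet evicts pods before the node is fully exhausted
evictionHard:
  memory.available: "500Mi"
  nodefs.available: "10%"

After updating the kubelet configuration file, the kubelet needs to be restarted on each node for the new allocatable values to take effect.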
