Understanding Kubernetes eviction algorithm

Time:12-08

I have a situation where the node has 4 GiB of memory and the actual memory usage looks like this:

Pod    Memory Requested   Memory Limit   Memory Used
1      2.0 GiB            3.0 GiB        1.0 GiB
2      2.0 GiB            3.0 GiB        1.0 GiB
Free   0.0 GiB            0.0 GiB        2.0 GiB

Since there is free memory, nothing gets evicted.

But now let's say both pods 1 and 2 start doing real work, and the situation changes to

Pod    Memory Requested   Memory Limit   Memory Used
1      2.0 GiB            3.0 GiB        3.0 GiB
2      2.0 GiB            3.0 GiB        2.0 GiB

and the Kubernetes eviction algorithm gets triggered.

In such a situation, which pod will be evicted? Will it be pod 1 or pod 2?

I have already checked the pod selection rules, but I still can't work out how eviction will apply in this case.

CodePudding user response:

In your example, pod 1 will get evicted. A pod that is not using more memory than it requested is not evicted first.

This is mentioned in the Kubernetes documentation you link to:

The kubelet uses the following parameters to determine the pod eviction order:

  1. Whether the pod's resource usage exceeds requests
  2. Pod Priority
  3. The pod's resource usage relative to requests

In your example, pod 2's resource usage does not exceed its requests (memory requested = 2 GiB, actual use = 2 GiB), so it is removed from consideration. That leaves pod 1 as the only candidate, and it gets evicted.

Say pod 2 is also above its request. Then, for both pods, subtract the request from actual utilization, and the pod that is furthest over its request gets evicted first.
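That ordering can be sketched in a few lines of Python. This is a simplified model of the ranking described above, not the kubelet's actual implementation; it ignores QoS classes, ephemeral storage, and other details, and the priority values are illustrative:

```python
def eviction_order(pods):
    """Rank pods in the order they would be considered for eviction
    (simplified model of the kubelet's ranking criteria)."""
    def rank(pod):
        name, requested, used, priority = pod
        excess = used - requested
        # Considered first: pods exceeding their requests,
        # then lower priority, then larger usage over requests.
        return (used <= requested, priority, -excess)
    return sorted(pods, key=rank)

# The asker's scenario: (name, requested GiB, used GiB, priority)
pods = [("pod1", 2.0, 3.0, 0), ("pod2", 2.0, 2.0, 0)]
print(eviction_order(pods)[0][0])  # pod1 is first in line
```

Here pod 2 sorts behind pod 1 purely because it is within its request; if both were over, the priority and excess-usage tiebreaks would decide.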

Let's look at a slightly more complex example on a hypothetical 8 GiB node:

Pod   Requested   Actual    Excess use
1     4.0 GiB     4.0 GiB   0.0 GiB
2     1.0 GiB     2.0 GiB   1.0 GiB
3     1.0 GiB     1.3 GiB   0.3 GiB
4     0.0 GiB     0.8 GiB   0.8 GiB

Pod 1 is using the most memory, but it is within its request, so it is safe. Subtracting requests from actual use, pod 2 has the most excess memory, and it is the one that will get evicted. Pod 4 hasn't declared resource requests at all, and while it's safe in this scenario, it's at risk in general: absent pod 2, it would be the pod using the most memory above its requests, even though it's using the second-least absolute memory.
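The "Excess use" column and the eviction choice can be reproduced directly from the table (a sketch; GiB values copied from the example above):

```python
# (requested, used) in GiB for each pod on the hypothetical 8 GiB node
pods = {"pod1": (4.0, 4.0), "pod2": (1.0, 2.0),
        "pod3": (1.0, 1.3), "pod4": (0.0, 0.8)}

# Only pods using more than they requested are candidates under
# the first criterion; rank them by how far over they are.
excess = {name: used - req for name, (req, used) in pods.items() if used > req}
victim = max(excess, key=excess.get)
print(victim)  # pod2, with the largest excess (1.0 GiB over its request)
```

Note that pod 1 never appears in `excess` at all, which is exactly why its 4 GiB of absolute usage doesn't put it in danger.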
