kubernetes - Why is there a max pods per node?


Why is there a pod limit in Kubernetes?

It makes intuitive sense to me that there has to be some limitation, but I'm curious to know the specific bottleneck that warrants the limit.

CodePudding user response:

Some vendors have additional limitations.

For example, on Azure, there's a limit on the number of IP addresses you can assign to a node. So if your Kubernetes cluster is configured to assign an IP address from an Azure VNet to each pod (the Azure CNI plugin), the default limit is 30 pods per node (see https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node).
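If you control the cluster, that limit is tunable when the cluster or node pool is created. A minimal sketch using the Azure CLI, with placeholder resource group and cluster names:

```
# Hypothetical names; --max-pods can only be set at cluster/node-pool creation.
# With the Azure CNI plugin, this many VNet IPs are reserved per node up front,
# which is why the plugin enforces a cap at all.
az aks create \
  --resource-group my-rg \
  --name my-cluster \
  --network-plugin azure \
  --max-pods 50
```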

On IBM Cloud, if you use IBM Block Storage for persistent volumes, they are mounted as secondary volumes on your node, and you can only have 12 of those per node, so that's a limit of 12 pods with persistent volumes per node. It hurts when you hit that limit the first time you scale up :-( On other vendors or with other storage classes, the limit is higher: https://kubernetes.io/docs/concepts/storage/storage-limits/
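You can check what a node will actually accept by inspecting the capacity it reports to the scheduler. A quick sketch, assuming kubectl access (the node name is a placeholder):

```
# Show the node's allocatable resources, including pod count and,
# on cloud nodes, attachable-volumes-* limits.
kubectl describe node my-node | grep -A 8 'Allocatable'

# Or list just the pod capacity across all nodes:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}: {.status.allocatable.pods}{"\n"}{end}'
```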

The default limit of 110 pods per node isn't a hard technical limit, I think; it's the configuration Kubernetes is tested and supported at (the upstream scalability thresholds assume no more than 110 pods per node).
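That default comes from the kubelet's maxPods setting (also exposed as the deprecated --max-pods flag), so on self-managed nodes you can raise or lower it yourself. A minimal sketch of a kubelet configuration file, assuming you manage the kubelet directly; the file path varies by distribution:

```
# /var/lib/kubelet/config.yaml (path is an assumption; check your distro)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110  # upstream default; raise only with enough CPU, memory, and pod CIDR space
```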

CodePudding user response:

As mentioned in the official documentation:

Kubernetes resource limits help to minimize the impact of memory leaks and other ways that pods and containers can impact on other components. These resource limits apply to addon resources just as they apply to application workloads.

You can also refer to the official documentation for more information.
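To make that concrete, resource limits are declared per container in the pod spec. A minimal sketch with hypothetical names and values:

```
apiVersion: v1
kind: Pod
metadata:
  name: example  # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:       # what the scheduler reserves on the node
        cpu: 100m
        memory: 128Mi
      limits:         # hard caps enforced at runtime
        cpu: 500m
        memory: 256Mi
```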
