The documentation here states that deleting a pod is a voluntary disruption that a PodDisruptionBudget should protect against.
I have created a simple test:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: test
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: test
---
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
  - name: test
    image: myimage
Now if I run kubectl apply and then kubectl delete pod test, the pod is deleted without any objection.
If I instead run kubectl drain on the node, it gets stuck because it cannot evict the last pod (which is correct). But the same protection does not seem to apply to deleting the pod.
The same goes if I create a deployment with a minimum of 2 replicas and delete both pods at the same time: they are both deleted immediately (not one by one).
Do I misunderstand something here?
CodePudding user response:
The link in your question refers to static pods managed by the kubelet; I guess you want this link instead.
...if I run apply and then delete pod test, there is no trouble deleting this pod
A PDB protects pods managed by one of these controllers: Deployment, ReplicationController, ReplicaSet, or StatefulSet.
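For the PDB to matter in practice, the pods it selects should be owned by such a controller. A minimal sketch pairing your PDB's selector with a Deployment (the name test and image myimage are just carried over from your example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test    # must match the PDB's matchLabels
    spec:
      containers:
      - name: test
        image: myimage
```

With minAvailable: 1 and 2 replicas, a drain can evict at most one of these pods at a time, and the controller replaces it before the next eviction is allowed.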
...if I create a deployment with minimum 2 replicas and just delete both at the same time - they are deleted as well (not one by one)
A PDB does not constrain explicitly deleting a deployment or a pod. From the Kubernetes documentation:
Caution: Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example, deleting deployments or pods bypasses Pod Disruption Budgets.
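What a PDB does guard is the Eviction API, which is what kubectl drain uses under the hood. An API-initiated eviction is a POST of an Eviction object to the pod's eviction subresource (here assuming the pod test lives in the default namespace):

```yaml
apiVersion: policy/v1
kind: Eviction
metadata:
  name: test
  namespace: default
```

If granting the eviction would violate the PDB, the API server rejects it with 429 Too Many Requests and the pod stays. A plain kubectl delete pod test skips this check entirely, which is exactly the behavior you observed.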
Hope this helps clear the mist.