I am new to Kubernetes. I created a few pods and then deleted all of them with
kubectl delete pods --all
But the output of df -h
still shows disk space consumed by Kubernetes:
Filesystem Size Used Avail Use% Mounted on
/dev/root 194G 19G 175G 10% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 1.6G 2.2M 1.6G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/loop0 34M 34M 0 100% /snap/amazon-ssm-agent/3552
/dev/loop2 56M 56M 0 100% /snap/core18/2246
/dev/loop1 25M 25M 0 100% /snap/amazon-ssm-agent/4046
/dev/loop3 56M 56M 0 100% /snap/core18/2253
/dev/loop4 68M 68M 0 100% /snap/lxd/21835
/dev/loop5 44M 44M 0 100% /snap/snapd/14295
/dev/loop6 62M 62M 0 100% /snap/core20/1242
/dev/loop7 43M 43M 0 100% /snap/snapd/14066
/dev/loop8 68M 68M 0 100% /snap/lxd/21803
/dev/loop9 62M 62M 0 100% /snap/core20/1270
tmpfs 1.6G 20K 1.6G 1% /run/user/123
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/a2054657-e24d-434f-8ba5-b93813a405fc/volumes/kubernetes.io~secret/local-path-provisioner-service-account-token-4hkj6
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/fa06c678-814f-4f98-8d2d-806e85923830/volumes/kubernetes.io~secret/metrics-server-token-pjbwh
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/rootfs
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/956d3b341a87e4232792ebf1ad0925f07c180d6d86de149a6ec801f74c0b47f8/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/rootfs
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/babfe080e5ec18297a219e65f99d6156fbd8b8651950a63052606ffebd7a618a/rootfs
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/4e3b15c1-f051-42eb-a3d1-9b3de38dae12/volumes/kubernetes.io~secret/default-token-lnpwv
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/df53096e-f89b-4fc7-ab8a-672d841ac44f/volumes/kubernetes.io~secret/coredns-token-sxtjn
tmpfs 7.8G 8.0K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/ssl
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/traefik-token-46qmp
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/rootfs
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/39b88e479947c9240a7c5233555c7a19b29f3ccc7bd1da117251c8e8959aca3c/rootfs
shm 64M 0 64M 0%
What are these mounts shown by df -h, and how can I free up this space?
EDIT:
I noticed that the pods restart after I delete them.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mylab-airflow-redis-0 1/1 Running 0 33m
mylab-airflow-postgresql-0 1/1 Running 0 34m
mylab-postgresql-0 1/1 Running 0 34m
mylab-keyclo-0 1/1 Running 0 34m
mylab-keycloak-postgres-0 1/1 Running 0 34m
mylab-airflow-scheduler-788f7f4dd6-ppg6v 2/2 Running 0 34m
mylab-airflow-worker-0 2/2 Running 0 34m
mylab-airflow-flower-6d8585794d-s2jzd 1/1 Running 0 34m
mylab-airflow-webserver-859766684b-w9zcm 1/1 Running 0 34m
mylab-5f7d84fcbc-59mkf 1/1 Running 0 34m
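Pods named like mylab-airflow-redis-0 are typically owned by StatefulSets, and the others by Deployments or ReplicaSets; those controllers recreate any pod you delete, which is why the pods keep coming back. A sketch of removing the owning controllers instead (this assumes the workloads live in the current namespace; adjust with -n as needed):

```shell
# Deleting a pod owned by a controller only triggers a replacement.
# Delete the controllers themselves so no new pods are created.
kubectl delete deployments,statefulsets,daemonsets --all

# PersistentVolumeClaims created by the charts survive pod deletion
# and keep their disk space until removed explicitly.
kubectl delete pvc --all

# Verify nothing is being recreated.
kubectl get pods
```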
EDIT 2:
So I deleted the deployments:
kubectl delete deployment --all
Now there are no deployments:
$ kubectl get deployment
No resources found in default namespace.
Then I stopped the cluster:
systemctl stop k3s
The disk space is still not released. Latest disk usage output:
Filesystem Size Used Avail Use% Mounted on
/dev/root 194G 35G 160G 18% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 1.6G 2.5M 1.6G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/loop0 34M 34M 0 100% /snap/amazon-ssm-agent/3552
/dev/loop2 56M 56M 0 100% /snap/core18/2246
/dev/loop1 25M 25M 0 100% /snap/amazon-ssm-agent/4046
/dev/loop3 56M 56M 0 100% /snap/core18/2253
/dev/loop4 68M 68M 0 100% /snap/lxd/21835
/dev/loop5 44M 44M 0 100% /snap/snapd/14295
/dev/loop6 62M 62M 0 100% /snap/core20/1242
/dev/loop7 43M 43M 0 100% /snap/snapd/14066
/dev/loop8 68M 68M 0 100% /snap/lxd/21803
/dev/loop9 62M 62M 0 100% /snap/core20/1270
tmpfs 1.6G 20K 1.6G 1% /run/user/123
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/a2054657-e24d-434f-8ba5-b93813a405fc/volumes/kubernetes.io~secret/local-path-provisioner-service-account-token-4hkj6
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/fa06c678-814f-4f98-8d2d-806e85923830/volumes/kubernetes.io~secret/metrics-server-token-pjbwh
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/956d3b341a87e4232792ebf1ad0925f07c180d6d86de149a6ec801f74c0b47f8/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/rootfs
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/4e3b15c1-f051-42eb-a3d1-9b3de38dae12/volumes/kubernetes.io~secret/default-token-lnpwv
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/df53096e-f89b-4fc7-ab8a-672d841ac44f/volumes/kubernetes.io~secret/coredns-token-sxtjn
tmpfs 7.8G 8.0K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/ssl
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/traefik-token-46qmp
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/39b88e479947c9240a7c5233555c7a19b29f3ccc7bd1da117251c8e8959aca3c/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/6eddeab3511cf326a530dd042f5348978c6ba98bf8d595c2936cb6f56e30f754/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/6eddeab3511cf326a530dd042f5348978c6ba98bf8d595c2936cb6f56e30f754/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/78568d4850964c9c7b8ca5df11bf532a477492119813094631641132aadd23a0/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/14d87054e0c7a2a86ae64be70a79f94e2d193bc4739d97e261e85041c160f3bc/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/0971fe44fc6f0f5c9e0b8c1a0e3279c20b3bc574e03d12607644e1e7d427ff65/rootfs
tmpfs 1.6G 4.0K 1.6G 1% /run/user/1000
The output of ctr container list is empty:
# ctr container list
CONTAINER IMAGE RUNTIME
CodePudding user response:
Some data must be maintained while a cluster is running (e.g. the default service-account token mounts you see under /var/lib/kubelet/pods). When you shut down the cluster itself (e.g. systemctl stop k3s) rather than just deleting pods, these should be released.
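One caveat, as a sketch: k3s deliberately leaves containers and their mounts running when the systemd service stops, so systemctl stop k3s alone may not unmount anything. The standard k3s install ships a cleanup script for exactly this case; the paths below are the defaults from that install, and the crictl prune flag assumes a reasonably recent bundled crictl:

```shell
# k3s-killall.sh (installed alongside k3s) stops all containers and
# unmounts the tmpfs/overlay mounts under /run/k3s and /var/lib/kubelet.
sudo /usr/local/bin/k3s-killall.sh

# Optionally reclaim disk used by unused container images as well.
sudo k3s crictl rmi --prune

# To remove k3s and all of its data entirely:
# sudo /usr/local/bin/k3s-uninstall.sh
```

After running the killall script, the overlay and shm entries should disappear from df -h; the image store under /var/lib/rancher/k3s is only reclaimed by pruning images or uninstalling.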