Pods still there after running kubectl delete pods


I want to remove ZooKeeper and Kafka from my Kubernetes cluster:

$ kubectl get pods
NAME               READY   STATUS             RESTARTS   AGE
kafka1-mvzch       1/1     Running            1          25s
kafka2-m292k       0/1     CrashLoopBackOff   8          20m
zookeeper1-qhmnf   1/1     Running            0          20m
zookeeper2-t7r8w   1/1     Running            0          20m
$ kubectl delete pod kafka1-mvzch kafka2-m292k zookeeper1-qhmnf zookeeper2-t7r8w
pod "kafka1-mvzch" deleted
pod "kafka1-m292k" deleted
pod "zookeeper1-qhmnf" deleted
pod "zookeeper2-t7r8w" deleted

But when I run kubectl get pods again, the pods are still there.

And I have no service or deployment that could be re-creating them:

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   7h1m
$ kubectl get deployment
No resources found in default namespace.

CodePudding user response:

You are deleting the pods, and they do get deleted. But some other construct re-creates new pods to replace the (now deleted) ones.

In fact, the random-looking suffixes in the pod names suggest that another controller is managing the pods. Looking at the linked tutorial, you will notice that a ReplicationController is created. It ensures the desired number of pods is always running, so it re-creates any pod you delete.

If you want to remove the pods for good, delete the ReplicationController; its pods will be deleted along with it.
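A minimal cleanup sketch: the controller names below are only inferred from the pod name prefixes (kafka1, kafka2, zookeeper1, zookeeper2), so list the ReplicationControllers first and use the names you actually see.

$ kubectl get rc                                          # list the ReplicationControllers in the namespace
$ kubectl delete rc kafka1 kafka2 zookeeper1 zookeeper2   # names assumed from the pod prefixes; adjust to the output above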

CodePudding user response:

You can use kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}' to identify the owner object of a pod. The owner might be a ReplicationController, a ReplicaSet (created by a Deployment), a StatefulSet, etc.
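For one of the pods above it would look roughly like this; the output shown is illustrative, assuming the pods are owned by a ReplicationController as in the tutorial:

$ kubectl get pod kafka1-mvzch -o jsonpath='{.metadata.ownerReferences[*].kind}'
ReplicationController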

Looking at the medium.com guide that you mentioned, I see that it suggests creating ReplicationControllers. You can clean up your namespace by running kubectl delete replicationcontroller --all.
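A sketch of the full cleanup, assuming everything lives in the default namespace (add -n <namespace> otherwise):

$ kubectl delete replicationcontroller --all   # removes every ReplicationController in the namespace
$ kubectl get pods                             # verify the pods are gone and no longer come back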
