Pod deletion in k8s

https://stackoverflow.com/questions/53405701/can-not-delete-pods-in-kubernetes
+ | |||
+ | ``` | ||
+ | for the reason. | ||
+ | |||
+ | Kubernetes has some workloads (those contain PodTemplate in their manifest). These are: | ||
+ | |||
+ | Pods | ||
+ | Controllers (basically Pod controllers) | ||
+ | ReplicationController | ||
+ | ReplicaSet | ||
+ | Deployment | ||
+ | StatefulSet | ||
+ | DaemonSet | ||
+ | Job | ||
+ | CronJob | ||
+ | See, who controls whom: | ||
+ | |||
+ | ReplicationController -> Pod(s) | ||
+ | ReplicaSet -> Pod(s) | ||
+ | Deployment -> ReplicaSet(s) -> Pod(s) | ||
+ | StatefulSet -> Pod(s) | ||
+ | DaemonSet -> Pod(s) | ||
+ | Job -> Pod | ||
+ | CronJob -> Job(s) -> Pod | ||

a -> b means a creates and controls b, and the value of the field .metadata.ownerReferences in b's manifest is a reference to a. For example:
+ | |||
+ | apiVersion: v1 | ||
+ | kind: Pod | ||
+ | metadata: | ||
+ | ... | ||
+ | ownerReferences: | ||
+ | - apiVersion: apps/v1 | ||
+ | controller: true | ||
+ | blockOwnerDeletion: true | ||
+ | kind: ReplicaSet | ||
+ | name: my-repset | ||
+ | uid: d9607e19-f88f-11e6-a518-42010a800195 | ||
+ | ... | ||

This way, deletion of the parent object will also delete the child objects via garbage collection.
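
As a small illustration of this cascade (a sketch, not from the original answer, reusing the my-repset name from the example above):

```
# Deleting the owning ReplicaSet also garbage-collects the pods it created
$ kubectl delete replicaset my-repset

# Recent kubectl versions can keep the children instead of cascading
$ kubectl delete replicaset my-repset --cascade=orphan
```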
+ | |||
+ | So, a's controller ensures that a's current status matches with a's spec. Say, if one deletes b, then b will be deleted. But a is still alive and a's controller sees that there is a difference between a's current status and a's spec. So a's controller recreates a new b obj to match with the a's spec. | ||
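
This is exactly why deleting a controlled Pod directly seems to have no effect; a minimal sketch (the pod name and label below are placeholders):

```
# Delete a pod that is owned by a ReplicaSet (hypothetical name)
$ kubectl delete pod my-app-7c9d8f6b5-abcde

# Shortly afterwards the ReplicaSet notices the missing replica and
# creates a replacement, so the pod count looks unchanged
$ kubectl get pods -l app=my-app
```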
+ | |||
+ | The ops created a Deployment that created ReplicaSet that further created Pod(s). So here the soln was to delete the root obj which was the Deployment. | ||
+ | |||
+ | $ kubectl get deploy -n {namespace} | ||
+ | |||
+ | $ kubectl delete deploy {deployment | ||
+ | ``` |
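
To identify the root object before deleting anything, one way (a sketch, not part of the original answer, with placeholder names) is to walk the ownerReferences chain upward:

```
# Pod -> its owner (typically a ReplicaSet for a Deployment-managed pod)
$ kubectl get pod {pod} -n {namespace} \
    -o jsonpath='{.metadata.ownerReferences[0].kind} {.metadata.ownerReferences[0].name}'

# ReplicaSet -> its owner (the Deployment that should be deleted)
$ kubectl get rs {replicaset} -n {namespace} \
    -o jsonpath='{.metadata.ownerReferences[0].kind} {.metadata.ownerReferences[0].name}'
```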