K8S Version: 1.23
I have an hourly CronJob with 5 inner pods. After some time, some of these pods shut down and move to the `Completed` state (that's expected), but the rest keep working and stay in the `Running` state.
At the next hour, the CronJob is not triggered because of the `Running` pods (also expected). But I need to force-recreate the `Completed` pods even while `Running` pods still exist. Is that possible?
CodePudding user response:
It appears that the reason you're letting the `Running` pods exist is that you expect them to take a long time to finish. That means their scheduling should differ from the others, since they can run longer.
You can split your CronJob into two CronJobs. One will run every hour and will contain only the pods that reach `Completed`. The other will run less frequently (maybe every 2 hours?), giving the `Running` pods time to finish.
This way, you will be able to manage your cron tasks separately.
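A minimal sketch of that split, assuming two CronJobs named `short-tasks` and `long-tasks` (the names, images, and commands are placeholders for your actual workload). The hourly job uses `concurrencyPolicy: Replace` so leftovers never block the next run, while the longer job uses `Forbid` so an in-flight run is allowed to finish:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: short-tasks           # pods expected to reach Completed within the hour
spec:
  schedule: "0 * * * *"       # every hour
  concurrencyPolicy: Replace  # a new run replaces any still-active previous run
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: short-task
              image: busybox
              command: ["sh", "-c", "echo short task; sleep 10"]
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: long-tasks            # pods that may keep Running past the hour
spec:
  schedule: "0 */2 * * *"     # every 2 hours
  concurrencyPolicy: Forbid   # skip a scheduled run while the previous one is active
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: long-task
              image: busybox
              command: ["sh", "-c", "echo long task; sleep 3600"]
```

Note that `batch/v1` CronJobs are stable as of Kubernetes 1.21, so this applies to your 1.23 cluster.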
Note: for the k8s version, we usually mention the API version, which is of the form `v1.xx`. It appears you are reporting the version of a public cloud offering of k8s like AKS, EKS, or GKE. Just FYI.