In kubernetes, is there a way to make statefulset pods linger to finish requests on rolling update?


In Kubernetes, I have a StatefulSet with a number of replicas. I've set the updateStrategy to RollingUpdate and podManagementPolicy to Parallel. My StatefulSet instances do not have a persistent volume claim -- I use the StatefulSet as a way to allocate ordinals 0..(N-1) to pods in a deterministic manner.
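For concreteness, here is a minimal sketch of that setup (the my-app name and image are hypothetical placeholders, not from the question):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: my-app                  # hypothetical name
    spec:
      replicas: 4
      serviceName: my-app           # headless Service that governs the set
      podManagementPolicy: Parallel
      updateStrategy:
        type: RollingUpdate
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: server
              image: my-app:latest  # hypothetical image
      # no volumeClaimTemplates: the StatefulSet is used only for stable ordinals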

The behavior I want, when doing a rolling update, is for the previous StatefulSet pods to linger while long-running requests are still in flight on them, but for new traffic to go to the new pods in the StatefulSet.

Unfortunately, I don't see a way of doing this -- what am I missing?

Because I don't use volume claims, you might think I could use Deployments instead, but I really do need each pod to have a deterministic ordinal that:

  • is unique at the point of dispatching new service requests (incoming HTTP requests),
  • is persistent for the duration of the pod lifetime,
  • is contiguous from 0..(N-1) (a sketch of how a pod can read its own ordinal follows this list).
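One common way (an assumption on my part, not something from the question) for a pod to learn its own ordinal is to expose the pod name through the downward API and parse the numeric suffix, since StatefulSet pods are always named <set-name>-<ordinal>; newer Kubernetes versions also publish the ordinal in the apps.kubernetes.io/pod-index pod label:

    # fragment of the StatefulSet pod template
    containers:
      - name: server
        image: my-app:latest        # hypothetical image
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
    # inside the container, the ordinal is the pod name's numeric suffix,
    # e.g. in a shell entrypoint: ORDINAL="${POD_NAME##*-}"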

The second-best option I can think of is using something like ZooKeeper or etcd to manage this property separately, with some of the traditional long-poll or leader-election mechanisms. But given that Kubernetes already knows (or can know) all the necessary bits, and Kubernetes service mapping already knows how to steer incoming requests from old instances to new ones, that seems more redundant and complicated than necessary, so I'd like to avoid it.

CodePudding user response:

The behavior I want, when doing a rolling update, is for the previous StatefulSet pods to linger while long-running requests are still in flight on them, but for new traffic to go to the new pods in the StatefulSet.

This behavior is supported by Kubernetes pods, but you also need to implement support for it in your application. When Kubernetes terminates a pod during a rolling update (a configuration sketch follows this list):

  • New traffic will not be sent to your "old" pods: a terminating pod is removed from the Service's endpoints.
  • A SIGTERM signal will be sent to the pod's containers; your application should catch it and finish or drain its in-flight work.
  • After a configurable "termination grace period" (terminationGracePeriodSeconds, 30 seconds by default), the pod is killed forcibly.
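As a configuration sketch, the grace period is set on the pod spec; the 600-second value here is an assumption, sized to the longest request you expect to serve:

    # pod template fragment; terminationGracePeriodSeconds defaults to 30
    spec:
      terminationGracePeriodSeconds: 600   # assumed upper bound for long-running requests
      containers:
        - name: server
          image: my-app:latest             # hypothetical image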

Be aware that, for this to work, clients should connect to Services instead of directly to pods. For example, a StatefulSet needs a headless Service governing its replicas.
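A minimal headless Service for the StatefulSet might look like the sketch below (names and ports are hypothetical); clients that want load-balanced traffic across the ready pods can use a second, non-headless Service with the same selector:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app            # must match serviceName in the StatefulSet
    spec:
      clusterIP: None         # headless: per-pod DNS records, no load balancing
      selector:
        app: my-app
      ports:
        - port: 80            # hypothetical port
          targetPort: 8080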

See "Kubernetes best practices: terminating with grace" for more info.

CodePudding user response:

You may be able to do this using Container Lifecycle Hooks, specifically the preStop hook.

We use this to drain connections from our Varnish service before it terminates.

However, you would need to implement (or find) a script to do the draining.
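A hedged sketch of such a hook; the drain-connections command is hypothetical and stands in for whatever drain mechanism your server actually exposes:

    # container fragment: the preStop hook runs before SIGTERM is sent,
    # and both must finish within terminationGracePeriodSeconds
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "drain-connections --wait"]  # hypothetical drain command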
