How to keep a variable value across pod restarts?

Time: 05-20

I'm trying to build an application that stores the specs of certain Kubernetes resources in a variable.

It then sleeps for a predefined period, say 7 days, after which it runs again and compares the current specs of those resources against the specs stored earlier.

The problems are:

  1. If the application pod restarts for any reason, like a node rotation, it will lose its stored specs and won't be able to compare them.

  2. How to make sure that the application will return to the exact point it was when it died? For example, what if the application's pod dies right during the sleep function?

I've heard that StatefulSet is the answer, but does it guarantee problem number 2 doesn't happen?

CodePudding user response:

Pods that store durable data should make use of Volumes. If each Pod needs its own volume, then a StatefulSet is the right choice, since K8s will take care of reattaching each Pod to its volume after a restart.
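As a sketch, a StatefulSet with a `volumeClaimTemplate` gives each Pod a PersistentVolume that survives restarts. The names, image, and storage size here are illustrative, not from your setup:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spec-watcher            # illustrative name
spec:
  serviceName: spec-watcher
  replicas: 1
  selector:
    matchLabels:
      app: spec-watcher
  template:
    metadata:
      labels:
        app: spec-watcher
    spec:
      containers:
        - name: app
          image: spec-watcher:latest   # your application image
          volumeMounts:
            - name: state
              mountPath: /data         # the app persists its specs here
  volumeClaimTemplates:
    - metadata:
        name: state
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```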

Pods in K8s will not natively resume where they left off: a Pod restart terminates the original application process(es), and there is no support for pausing and resuming them. So you would need to implement that kind of behavior in your application code.

CodePudding user response:

More details about your case would help, but in general K8s does not manage a Pod's internal state; there is no built-in "hibernation" capability.

You might want to consider approaches listed in https://github.com/spf13/viper#remote-keyvalue-store-example---unencrypted

For example, by leveraging Consul, etcd, Firestore, etc.

In general, you need to look at how to maintain state at the application level, regardless of whether you use the "remote config" or the K8s volume approach.
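For the remote key-value route, here is a hedged Python sketch against Consul's HTTP KV API using only the standard library. The agent address and key name are assumptions; the same shape applies to etcd or Firestore with their respective clients. Consul returns stored values base64-encoded.

```python
import base64
import json
import urllib.request

CONSUL = "http://localhost:8500"  # assumed address of a local Consul agent

def encode_state(state):
    """Serialize a JSON-compatible state dict to bytes."""
    return json.dumps(state).encode()

def decode_state(raw):
    """Inverse of encode_state."""
    return json.loads(raw)

def put_state(key, state):
    """Store state under Consul's KV API; Consul replies 'true' on success."""
    req = urllib.request.Request(
        f"{CONSUL}/v1/kv/{key}", data=encode_state(state), method="PUT"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read() == b"true"

def get_state(key):
    """Fetch the state back; the 'Value' field is base64-encoded."""
    with urllib.request.urlopen(f"{CONSUL}/v1/kv/{key}") as resp:
        entry = json.load(resp)[0]
    return decode_state(base64.b64decode(entry["Value"]))
```

With this in place, the application loads its previously stored specs from the KV store on startup instead of relying on in-memory variables.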
