A few instances of the same stateful application - not a database - in Kubernetes - how is it managed?


I have a main application that has its own unique state; let's call it application A. Application A starts a few processes that do some parsing work, then it collects the results and sends them to a database server outside of the Kubernetes cluster.

I would like to run a few copies of application A in different pods. However, each instance is unique and cannot be replaced, because it has its own state. That means each client has to keep talking (over HTTP) to the same instance it started the communication with.

  1. How can this be done in Kubernetes?
  2. Do I need to define a StatefulSet?
  3. How do I make sure that each client (from outside the cluster) talks every time to the same instance it started the communication with for a given object id, for example to get the status of that object?
  4. If a pod dies, I don't want it to be recovered. Is that possible?

CodePudding user response:

For question 4: you only need to set the container restart policy. I used this flag to create a pod with that behavior: --restart=Never
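For reference, the same thing expressed as a manifest rather than a kubectl flag; a minimal sketch in which the pod name, image, and port are placeholders:

```yaml
# Hypothetical pod for "application A"; name, image and port are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-a-instance-1
spec:
  restartPolicy: Never        # do not restart the container if it dies (question 4)
  containers:
    - name: app-a
      image: registry.example.com/app-a:latest
      ports:
        - containerPort: 8080
```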

IMHO, this is not a Kubernetes-specific problem; you could have this scenario in other environments. The idea is to use sticky sessions so that all of your requests have affinity to one backend. You will probably need to look this setup up in your ingress controller's documentation, e.g. NGINX Ingress.
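As a sketch of what that looks like with the NGINX Ingress controller's cookie-based affinity annotations (the host, Service name, cookie name, and port below are assumptions, not taken from the question):

```yaml
# Hypothetical Ingress using NGINX Ingress controller cookie affinity.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "APP_A_AFFINITY"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: app-a.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a            # Service in front of the application A pods
                port:
                  number: 8080
```

With this, the controller sets a cookie on the first response and keeps routing that client to the same backend pod as long as the pod is alive.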

CodePudding user response:

1: yes, sort of

2: not necessarily, but it might simplify some things (see the sketch after this list)

3: if you use an ingress, you can use different methods to maintain backend affinity, e.g. cookie based, source-IP based, etc. (nginx example: https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/)

4: you might want to set restartPolicy to Never
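If you do go the StatefulSet route (point 2), a minimal sketch of what it buys you: stable pod names (app-a-0, app-a-1, ...) behind a headless Service and a per-pod volume the state can be reloaded from. The names, image, mount path and storage size are assumptions:

```yaml
# Hypothetical headless Service + StatefulSet; names, image and sizes are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: app-a
spec:
  clusterIP: None              # headless: each pod gets a stable DNS name (app-a-0.app-a, ...)
  selector:
    app: app-a
  ports:
    - port: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-a
spec:
  serviceName: app-a
  replicas: 3
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
        - name: app-a
          image: registry.example.com/app-a:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: state
              mountPath: /var/lib/app-a    # where the instance would persist its state
  volumeClaimTemplates:
    - metadata:
        name: state
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```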

With all that said, this really sounds like a bad idea. You should either allow shared state (e.g. Redis), or use a StatefulSet that can restart with the same state loaded from local storage. Keep in mind that even with the most optimal setup, things like this can break (e.g. a switch to a different pod when the backing pod goes down, node rescheduling due to cluster scaling, etc.).
