Is there a way to ensure that a service/app that I have deployed to Heroku stays running during an update?


Currently I am working on deploying a service on Heroku. The service polls an API on a regular setInterval in Node.js to get new information (I cannot use a websocket, since the API I am using is not mine).

Periodically, like any program or app, it needs to be updated. My question is: what is the best way to update my app on Heroku so that pushing a new change to production does not stop the app from running? The reason is that I don't want to miss new data from the API.

My first thought is to have some sort of redundancy, with at least two servers running the app, so that while one is being updated the other keeps running, and vice versa. However, I am not sure whether Heroku does this automatically, and if it doesn't, how I can set up something like this so that no data is missed during an update. Missing even one new piece of data from the API is not an option for us.

Are there other ways to accomplish this?

CodePudding user response:

There is no automatic process for zero-downtime updates in node.js or on Heroku. If you can't accept, or don't want, the downtime that comes from updating the source files and then stopping and restarting the server, you will have to do something like what you were already thinking: two separate processes, where one keeps running while the other is being updated, and then vice versa.

You haven't said much about what this app does other than poll some external API, and you haven't said what it does with the data it gets from that API. Both of those are relevant to the details of how you would implement this.

I'll make a few assumptions about those unknowns for purposes of illustration. Let's suppose that this service app just updates a database with the newly found data from the API and does not start any server processes of its own.

In that case, you design your service so that you can run two instances of it, and so that it accepts a command line argument telling it whether to start polling the API immediately or to just start up and wait. You give the service its own control port (typically an http server that accepts requests only locally, not from the internet) that lets you send it commands. And you make the control port configurable, either from a command line argument or an environment variable, so that you can start multiple instances of your app, each with a different control port.
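As a concrete illustration, here is a minimal sketch of such a service in Node.js. The `--poll` flag, the `CONTROL_PORT` environment variable, the `/start`, `/stop`, and `/shutdown` commands, and the body of `pollOnce()` are all invented for this example; they are not part of any Heroku or Node.js API.

```js
// poller.js -- minimal sketch of the polling service described above.
const http = require('http');

const CONTROL_PORT = parseInt(process.env.CONTROL_PORT || '9001', 10);
const pollImmediately = process.argv.includes('--poll');

let pollTimer = null;

function pollOnce() {
  // Placeholder: call your external API here and write the results
  // to your database.
  console.log('polled external API at', new Date().toISOString());
}

function startPolling() {
  if (!pollTimer) pollTimer = setInterval(pollOnce, 10000); // every 10 seconds
}

function stopPolling() {
  if (pollTimer) { clearInterval(pollTimer); pollTimer = null; }
}

// Control server: bound to 127.0.0.1 only, so it is not reachable
// from the internet.
http.createServer((req, res) => {
  if (req.url === '/start') { startPolling(); res.end('polling\n'); }
  else if (req.url === '/stop') { stopPolling(); res.end('stopped\n'); }
  else if (req.url === '/shutdown') {
    stopPolling();
    res.end('bye\n', () => process.exit(0)); // reply, then exit
  }
  else res.end('unknown command\n');
}).listen(CONTROL_PORT, '127.0.0.1');

if (pollImmediately) startPolling();
```

You might then start the first instance with something like `CONTROL_PORT=9001 node poller.js --poll` and the second with `CONTROL_PORT=9002 node poller.js`.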

Then here's a sequence of events:

  1. Start up one instance of your server and pass it the command line argument to tell it to immediately start polling.
  2. It runs this way until you have an update ready.
  3. Then, when you have an update ready, you update the local code which won't affect the running instance since its code is already loaded.
  4. After the update, you start up a second instance of your server (running the new code) and pass it a command line argument telling it not to start polling yet. At this point, you have two instances of the service running, one (running the old code) is polling the API, the other (running the new code) is on standby.
  5. Then, you send each app a command on its control port. The first app (running the old code) is told to stop polling and shut down. The second app (running the new code) is told to start polling.
  6. You now have the new code up and running and polling without any interruption.

You can automate this switch-over with your favorite shell script.
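For example, here is a sketch of that switch-over written in Node.js rather than shell, so it stays in one language with the example above; the control ports (9001 and 9002) and the `/shutdown` and `/start` commands are the illustrative ones from the previous sketch.

```js
// switchover.js -- a sketch of the hand-off in step 5 above.
const http = require('http');

// Send one command to a local control port and wait for the reply.
function controlCommand(port, path) {
  return new Promise((resolve, reject) => {
    http.get({ host: '127.0.0.1', port, path }, (res) => {
      res.resume();            // drain the response body
      res.on('end', resolve);
    }).on('error', reject);
  });
}

async function switchOver(oldPort, newPort) {
  // Step 5: the old instance stops polling and shuts down...
  await controlCommand(oldPort, '/shutdown').catch(() => {}); // it may exit before replying
  // ...then the new instance starts polling.
  await controlCommand(newPort, '/start');
  console.log('switch-over complete');
}

switchOver(9001, 9002);
```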

If your service also runs an incoming http server to serve requests from the internet, then the command that tells the 2nd instance to start polling will also have to start up its http server for internet requests after the previous instance has shut theirs down.

With a load balancer or dynamically configurable proxy, you could have both web servers running on different local ports and dynamically change the load balancer/proxy configuration to route all incoming requests to the 2nd instance before shutting down the 1st, so that no incoming web requests are missed.
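Here is a rough sketch of that idea using only Node core (no particular load balancer or proxy library assumed): a small reverse proxy whose upstream target can be switched at runtime through a local control endpoint. The ports and the `/switch` endpoint are invented for this example.

```js
// proxy.js -- illustrative reverse proxy with a switchable upstream target.
const http = require('http');

let targetPort = 3001; // web port of the 1st instance (example value)

// Public-facing server: forwards every request to the current target.
http.createServer((clientReq, clientRes) => {
  const upstream = http.request({
    host: '127.0.0.1',
    port: targetPort,
    method: clientReq.method,
    path: clientReq.url,
    headers: clientReq.headers,
  }, (upstreamRes) => {
    clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(clientRes);
  });
  upstream.on('error', () => {
    if (!clientRes.headersSent) clientRes.writeHead(502);
    clientRes.end();
  });
  clientReq.pipe(upstream);
}).listen(process.env.PORT || 8080);

// Local control endpoint: switch the upstream, e.g. GET /switch?port=3002
http.createServer((req, res) => {
  const url = new URL(req.url, 'http://localhost');
  if (url.pathname === '/switch' && url.searchParams.get('port')) {
    targetPort = parseInt(url.searchParams.get('port'), 10);
  }
  res.end(`target is now ${targetPort}\n`);
}).listen(9100, '127.0.0.1');
```

Once the 2nd instance's web server is up, you would hit the control endpoint to repoint the proxy at it, and only then shut the 1st instance down.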
