I have two Spring Boot applications, let's say App1 and App2. There is a table in an Oracle DB with a status column. When App1 runs a scheduler, it updates that status column to ON or OFF.
App2 (with multiple instances) is a Kafka consumer app; whenever it consumes an event, it needs the status value from Oracle to do some operation. I don't want to call the Oracle DB every time the app consumes an event, so on initial application load I thought of fetching the status value from the DB and keeping it in the application context (in-memory) so that I don't need to make DB calls.
But now the problem is: while keeping the status value in memory, how would App2 know when the status value gets updated in the Oracle table (i.e. when App1 updates it)? It needs to change that value in the application context as well.
How can I achieve that? Are there any Oracle table change listeners in Spring Boot, or would polling the database be a good approach?
Any suggestion would be helpful.
Note: this table gets updated only 4-5 times a day.
CodePudding user response:
It sounds easier than what you describe.
"With initial application load I thought of fetching status value from DB and keeping that value in application context (in-memory) so that I don't need to make db calls."
Expose another endpoint in App2 that, when invoked, refreshes the status value it loaded when the application started up. This endpoint should be invoked from the scheduler in App1, since that is the component that makes the changes in the DB.
So, in simple terms, App1 says: "Hey App2 instances, I have just made some changes in the DB. Please reload the status you use from the database so that you are up to date."
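A minimal sketch of the App2 side, assuming hypothetical names (a StatusCache bean, a /internal/status/refresh endpoint, and an APP_STATUS table); App1's scheduler would call the endpoint right after it updates the table:

```java
import java.util.concurrent.atomic.AtomicReference;

import jakarta.annotation.PostConstruct; // javax.annotation on Spring Boot 2
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@Component
class StatusCache {

    private final AtomicReference<String> status = new AtomicReference<>();
    private final JdbcTemplate jdbc;

    StatusCache(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @PostConstruct // initial load when the application starts
    void refresh() {
        // hypothetical table/query, adjust to your schema
        status.set(jdbc.queryForObject("SELECT status FROM app_status", String.class));
    }

    String get() {
        return status.get();
    }
}

@RestController
class StatusRefreshController {

    private final StatusCache cache;

    StatusRefreshController(StatusCache cache) {
        this.cache = cache;
    }

    // App1's scheduler calls this right after it updates the status column
    @PostMapping("/internal/status/refresh")
    void refresh() {
        cache.refresh();
    }
}
```

Keep in mind that with several App2 instances a single call through a load balancer only reaches one of them, so App1 would need to call each instance (or broadcast, see the edit below).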
Edit: If you can afford a more complex infrastructure, you can take a look at spring-cloud-bus to broadcast such a change to multiple services.
CodePudding user response:
You can have App1 send an event to a specific topic to let App2 know something has changed, so it can update its internal state accordingly.
Note that each App2 instance would have to be in its own consumer group so that all of them receive the same event - you can assign a random UUID as the consumer group id.
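For illustration, a minimal sketch of such a listener, assuming a hypothetical topic name "status-changes" that App1 publishes the new value to after updating the table; the SpEL expression gives every instance its own random consumer group:

```java
import java.util.concurrent.atomic.AtomicReference;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
class StatusChangeListener {

    // in-memory copy of the status, updated whenever App1 announces a change
    private final AtomicReference<String> status = new AtomicReference<>("OFF");

    // A random UUID as the group id puts every App2 instance in its own consumer
    // group, so each instance receives every status event.
    @KafkaListener(topics = "status-changes",
            groupId = "#{T(java.util.UUID).randomUUID().toString()}")
    public void onStatusChange(String newStatus) {
        status.set(newStatus);
    }

    public String currentStatus() {
        return status.get();
    }
}
```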
Also, this should be very fast, but there are no guarantees you won't process a few records against an outdated status. Anyway, in distributed systems you should generally embrace eventual consistency rather than try to make the system behave like a monolith.
If you can't make App1 send an event, there are tools such as Debezium (https://debezium.io/) that listen to the DB and send change events to Kafka, but that might be overkill here.
Another way would be to have a distributed cache such as Redis: App1 updates it, and App2 checks it on every event, which should be more performant than going to the DB.
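A minimal sketch of that variant, assuming spring-boot-starter-data-redis is on the classpath and a hypothetical key name "app:status" (the writer would live in App1, the reader in App2):

```java
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

// App1 side: write the new status to Redis right after updating the Oracle table.
@Component
class StatusWriter {

    private final StringRedisTemplate redis;

    StatusWriter(StringRedisTemplate redis) {
        this.redis = redis;
    }

    void publish(String status) {
        redis.opsForValue().set("app:status", status);
    }
}

// App2 side: read the status from Redis for each consumed event instead of hitting Oracle.
@Component
class StatusReader {

    private final StringRedisTemplate redis;

    StatusReader(StringRedisTemplate redis) {
        this.redis = redis;
    }

    String currentStatus() {
        return redis.opsForValue().get("app:status");
    }
}
```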