I’ve built a C++ program that performs the following workflow sequentially:
Read serialized data (0.3 ms)
Receive search metadata (0.00.. ms)
Search data (0.01 ms)
Return search data (0.00.. ms)
Right now, I run the program via Node.js shell exec and serve it through an Express API. The API isn’t used by many users, so reading the data only happens once in a while. But a single user will run around 20 queries on the same data, which means the same data is currently read 20 times.
Obviously, reading the data takes most of the time, and the data being read never changes. To speed up the workflow, I want to read the data once and then wait for requests, just like a MySQL server waiting for statements.
What would be your approach to do that?
- Should I build a socket server? (Feels like overkill.)
- Should I run the program in the background, store its PID, and keep using a Node.js shell exec solution? (Feels hacky.)
I think I’m missing the right terms, and it would be awesome if you could push me in the right direction!
CodePudding user response:
You can call C++ directly from Node.js through a native addon (the terms to search for are N-API and node-addon-api). That saves you the overhead of both shelling out and building a socket server, and the loaded data stays resident in the Node.js process between requests.
E.g., expose:
int Feed(const char* searchMetadata)
which returns a key for the loaded data, in case you want multiple datasets kept on the C++ side. Then call
const char* search(int dataKey, const char* searchingKeyword)
to search that data, as many times as you need.
It's also doable to cache the data in a file on the C++ side. However, since you already consider 0.3 ms an overhead, it's better to avoid file open/save entirely and keep everything in memory.