Our team designed two microservices in two separate GitHub repositories.
- The first is responsible for transforming file inputs into data inserted into a (graph/Neo4j) database.
- The second is a REST API responsible for read-only queries against that database, returning JSON.
Since these services are very tightly coupled (if the algorithm that populates/updates the data in the database is wrong, the data will be corrupted), my approach was to create unit tests in both repos with curated input/expected-output pairs, i.e. prepared input files mapped to expected nodes, and prepared input nodes mapped to expected JSONs.

Does that approach make sense? What about integration tests, which I suspect overlap in definition here? Are they needed? What would they cover that unit tests wouldn't in such a scenario?
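To make the idea concrete, here is a minimal sketch of the first kind of test (prepared input file, expected nodes). `transform_file` and the `{"label", "value"}` node shape are hypothetical stand-ins, not the project's actual parser or schema:

```python
import os
import tempfile

def transform_file(path):
    """Stand-in for the real ingestion logic: emits one Record node
    per non-empty line (illustrative rule only)."""
    with open(path) as f:
        return [{"label": "Record", "value": line.strip()}
                for line in f if line.strip()]

def test_transform_file_produces_expected_nodes():
    # curated fixture: known input file ...
    fixture = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
    fixture.write("alice\nbob\n")
    fixture.close()
    try:
        # ... compared against the hand-written expected node list
        expected = [{"label": "Record", "value": "alice"},
                    {"label": "Record", "value": "bob"}]
        assert transform_file(fixture.name) == expected
    finally:
        os.unlink(fixture.name)
```

The symmetric test in the API repo would feed prepared nodes in and compare the JSON body out.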
CodePudding user response:
I wouldn't say these are tightly coupled. They're both coupled to the database, but that provides a layer of separation from each other. Anything that crosses a component boundary is a kind of integration test anyway.
In the case of the API, the "real" unit-testing approach would be to mock the database calls, so the database isn't part of the test at all. How straightforward/robust that is depends on how involved the queries are. If mocking isn't practical, restore a test database to a known state before you begin.
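A minimal sketch of that mocking approach, using `unittest.mock` from the standard library. `get_person` and the query string are hypothetical; in a real project `db` would be a Neo4j session or driver:

```python
from unittest.mock import MagicMock

def get_person(db, name):
    """Hypothetical endpoint handler: look up a person, return
    (json_body, http_status)."""
    record = db.run("MATCH (p:Person {name: $name}) RETURN p", name=name)
    if record is None:
        return {"error": "not found"}, 404
    return {"name": record["name"], "age": record["age"]}, 200

def test_get_person_without_a_database():
    # the mock replaces the database entirely: we script its return value
    db = MagicMock()
    db.run.return_value = {"name": "alice", "age": 30}

    body, status = get_person(db, "alice")

    assert status == 200
    assert body == {"name": "alice", "age": 30}
    db.run.assert_called_once()  # the handler queried exactly once
```

This tests the API's query-to-JSON logic in isolation; whether the Cypher itself is correct is exactly what an integration test against a real database would cover.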
In the case of file loading the same basic idea applies, although mocking db calls for a data-load process is likely to be brittle. If you use a real database, you'd need to start it from a known state, then validate its state afterwards. A question to ask yourself: is it really one operation, or are "parse file" and "write content to db" two separate units?
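If you do split it, the parsing half becomes a pure function that is trivial to unit-test with no database at all, while the writing half stays a thin wrapper covered by an integration test. A sketch under the same assumed node shape (names are illustrative):

```python
def parse_file(text):
    """Pure unit: raw text -> list of node dicts. No I/O, no database."""
    return [{"label": "Record", "value": v}
            for v in (line.strip() for line in text.splitlines()) if v]

def write_nodes(session, nodes):
    """Thin db wrapper: left to an integration test against a real
    (or containerized) Neo4j instance."""
    for n in nodes:
        session.run("CREATE (:Record {value: $value})", value=n["value"])

def test_parse_file_is_db_free():
    assert parse_file("a\n\nb\n") == [
        {"label": "Record", "value": "a"},
        {"label": "Record", "value": "b"},
    ]
```

With that split, most of the loader's correctness lives in fast, deterministic unit tests of `parse_file`, and only a small surface needs the slower known-state/validate-state database test.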