Multiple asserts and multiple actions on integration test


I have an integration test that should verify the creation of a new account in a CRM software. The account creation triggers several things:

  • Creates the basic profile of the company
  • Creates every user (the number of users can be defined during registration)
  • Initializes the basic configuration of the account
  • Sends a welcome email with the starting information
  • etc

The test checks every aspect with several asserts, but I don't know if this is correct or if I should write a separate test for each one. If I go with separate tests, the setup would be the same for all of them, so it feels like a waste of time.

CodePudding user response:

What you describe sounds more like an end-to-end test. It's OK to have some end-to-end tests, but they are usually very expensive to write and maintain, and they tend to be brittle.

For me, the tests in a service should give you confidence that the software you are delivering will work in production. So maybe it's OK to have a very small number of end-to-end tests that check that everything is glued together properly, but most of the actual functionality should be covered by normal tests. An example of what I would try to avoid is having an end-to-end test that checks what happens when a downstream service is down.
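
For that kind of scenario, a stubbed adapter is usually enough. A minimal sketch of the idea (JUnit 5, hand-rolled test double; the service, the port and all the names are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class DownstreamFailureTest {

    // Outbound port of the service under test (hypothetical name).
    interface CrmGateway {
        void push(String accountId);
    }

    // Result type the service uses to report whether the sync worked.
    record SyncResult(boolean ok, String error) {}

    // Minimal service under test: it catches gateway failures and reports them.
    static class AccountSyncService {
        private final CrmGateway gateway;

        AccountSyncService(CrmGateway gateway) {
            this.gateway = gateway;
        }

        SyncResult sync(String accountId) {
            try {
                gateway.push(accountId);
                return new SyncResult(true, null);
            } catch (RuntimeException e) {
                return new SyncResult(false, e.getMessage());
            }
        }
    }

    @Test
    void reportsFailureWhenTheDownstreamServiceIsDown() {
        // Test double: "the downstream service is down" is just a stub that throws.
        CrmGateway downGateway = accountId -> {
            throw new RuntimeException("connection refused");
        };
        AccountSyncService service = new AccountSyncService(downGateway);

        SyncResult result = service.sync("acc-42");

        assertFalse(result.ok());
        assertEquals("connection refused", result.error());
    }
}
```

The failure mode is exercised in-process, so you don't need to bring a whole environment up and take a real service down just to cover it.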

Another very important aspect is that tests are written for other developers, not for the compiler, so keeping tests simple is important for maintainability. I want to stress this because a test with 10 lines of assertions will be unreadable for most developers; even a test of 10 lines of code is difficult to grok.

Here's how I try to build services:

If you are familiar with ATDD and hexagonal architecture, most of the features should be tested by stubbing the adapters, which lets the tests run super fast and lets you fiddle with the adapters using test doubles. These tests shouldn't interact with anything outside the JVM, and they give you a good level of confidence that the features will work. If the feature has too many side effects, I try to pick the assertions carefully. For example, if a feature is to create an account, I won't check that the account is actually in the DB (because the chances of that breaking are minuscule), but I will check that all the messages that need to be triggered are sent. Sometimes I do create multiple tests if a single test starts to become unclear: for example, one test that checks the returned value and another that verifies the side effects (e.g. messages being produced).
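
A minimal sketch of what I mean, with hypothetical names: the outbound adapters are replaced by in-memory test doubles, nothing leaves the JVM, and the assertions focus on the messages that have to be triggered (JUnit 5):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import org.junit.jupiter.api.Test;

class CreateAccountFeatureTest {

    // Outbound ports (hypothetical names).
    interface AccountRepository { void save(String id, String companyName); }
    interface MessagePublisher { void publish(String event); }

    // In-memory test doubles instead of the real DB / broker adapters.
    static class InMemoryAccounts implements AccountRepository {
        final Map<String, String> saved = new HashMap<>();
        public void save(String id, String companyName) { saved.put(id, companyName); }
    }

    static class RecordingPublisher implements MessagePublisher {
        final List<String> events = new ArrayList<>();
        public void publish(String event) { events.add(event); }
    }

    // Minimal service under test, depending only on its ports.
    static class AccountService {
        private final AccountRepository accounts;
        private final MessagePublisher publisher;

        AccountService(AccountRepository accounts, MessagePublisher publisher) {
            this.accounts = accounts;
            this.publisher = publisher;
        }

        String createAccount(String companyName) {
            String id = UUID.randomUUID().toString();
            accounts.save(id, companyName);
            publisher.publish("account-created:" + companyName);
            publisher.publish("welcome-email-requested:" + companyName);
            return id;
        }
    }

    @Test
    void creatingAnAccountTriggersTheExpectedMessages() {
        RecordingPublisher publisher = new RecordingPublisher();
        AccountService service = new AccountService(new InMemoryAccounts(), publisher);

        service.createAccount("Acme Inc");

        // Assert on the messages that must be triggered, not on DB contents.
        assertEquals(
                List.of("account-created:Acme Inc", "welcome-email-requested:Acme Inc"),
                publisher.events);
    }
}
```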

Having, at a minimum, good coverage of the critical code with unit tests and integration tests (here I mean test classes that interact with external services) builds up the confidence that the classes work as expected, so the end-to-end tests don't need to cover the myriad of combinations.

And last, a very small number of end-to-end tests to ensure everything is glued together nicely.

Bottom line: create multiple tests with the same setup if it makes the tests easier to understand.
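
Reusing the hypothetical AccountService and test doubles from the sketch above (assuming they are promoted to top-level classes), that could look like a few small tests sharing one setup and one action:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import java.util.List;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class CreateAccountTest {

    private RecordingPublisher publisher;
    private String accountId;

    @BeforeEach
    void createAccount() {
        // Shared setup and shared action; each test below checks one concern.
        publisher = new RecordingPublisher();
        AccountService service = new AccountService(new InMemoryAccounts(), publisher);
        accountId = service.createAccount("Acme Inc");
    }

    @Test
    void returnsTheIdOfTheNewAccount() {
        assertNotNull(accountId);
    }

    @Test
    void triggersTheAccountCreatedAndWelcomeMessages() {
        assertEquals(
                List.of("account-created:Acme Inc", "welcome-email-requested:Acme Inc"),
                publisher.events);
    }
}
```

Each test now has a single reason to fail, and the shared @BeforeEach keeps the setup cost in one place.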

Edit:

About integration tests: it's just terminology. I call an integration test one that exercises a class or small group of classes interacting with an external service (database, queue, files, etc.); a component test is something that verifies a single service or module; and an end-to-end test is something that tests all the services or modules working together.

What you mentioned about stored procs changes the approach. Do you have unit tests for them? If not, you could write some sort of integration tests that verify the stored procs work as expected.
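
A rough sketch of what such a test could look like over plain JDBC against a test database; the connection details and the create_account proc signature are made up for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Types;
import org.junit.jupiter.api.Test;

class CreateAccountProcIT {

    @Test
    void storedProcCreatesTheRequestedNumberOfUsers() throws Exception {
        // Test database; URL and credentials are placeholders for whatever your CI provides.
        try (Connection db = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/crm_test", "crm", "crm")) {

            // Call the (hypothetical) proc: create_account(company_name, user_count, OUT account_id).
            try (CallableStatement call = db.prepareCall("{call create_account(?, ?, ?)}")) {
                call.setString(1, "Acme Inc");
                call.setInt(2, 3);
                call.registerOutParameter(3, Types.BIGINT);
                call.execute();
                long accountId = call.getLong(3);

                // Verify the side effect the proc is responsible for.
                try (PreparedStatement count = db.prepareStatement(
                        "select count(*) from users where account_id = ?")) {
                    count.setLong(1, accountId);
                    try (ResultSet rs = count.executeQuery()) {
                        rs.next();
                        assertEquals(3, rs.getInt(1));
                    }
                }
            }
        }
    }
}
```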

About the readability of the test: for me, the real test is to ask someone from another team, or a product owner, whether the test name, the setup, what is asserted, and the relationship between those things are clear. If they struggle, the test should be simplified.
