I am in the process of revamping my project's end-to-end testing, and we have come to a crossroads on what makes a good E2E test. We want to use live data from the actual servers, but because of the multiple testing environments we run our app in, we cannot guarantee we will get the same data every time. That means the tests will sometimes pass and sometimes fail, which isn't acceptable. The most common solution is using stubbed/mocked data and never calling the actual endpoints, but we feel this means the test is no longer a true E2E test. Has anyone else run into this conflict between unreliable live data and not wanting to stub/fake the endpoints? If so, what solution did you find?
BTW: we are building a web app with an Angular front-end, a .NET back-end, and Cypress to run our E2E testing.
CodePudding user response:
In my experience, E2E tests are conducted in a staging-type environment that is a close replica of the "live" environment settings.
You should not be mocking the API calls in an E2E test. You want to call your actual services/dependencies (using test request data where needed), and your application should handle both happy and unhappy paths. You want to test the integrity of each touch-point in your application.
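To make that concrete, here is a minimal Cypress sketch in TypeScript of what I mean: it spies on the real request without stubbing the response, then asserts on behaviour for both the happy and the unhappy/empty case. The `/api/orders` route and the `data-testid` selectors are assumptions for illustration, not your actual app.

```typescript
describe('orders page (staging, real API)', () => {
  it('renders whatever the real backend returns', () => {
    // Spy on the real request -- no stubbed response, the call still hits the server.
    cy.intercept('GET', '/api/orders').as('getOrders');

    cy.visit('/orders');

    cy.wait('@getOrders').then(({ response }) => {
      const body = response?.body;
      if (response?.statusCode === 200 && Array.isArray(body) && body.length > 0) {
        // Happy path: assert on structure, not on exact live values.
        cy.get('[data-testid=order-row]').its('length').should('be.gte', 1);
      } else {
        // Unhappy/empty path: the UI should still degrade gracefully.
        cy.get('[data-testid=empty-or-error-state]').should('be.visible');
      }
    });
  });
});
```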
What we found has helped us write fewer E2E tests is to focus on writing unit tests and integration tests. Here is an article that I found quite helpful: https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html
CodePudding user response:
The entire point of an E2E test is to test all aspects of the system, including any API integrations you may have, whether you can or cannot control them.
This is the differentiating factor between an end-to-end test & an integration test.
You mention unreliable data as the reason you don't want to call the actual endpoints. This unreliable data is resulting in failed tests.
This is exactly the use case for an E2E test - you need to test your systems against unreliable data and see how they react. The failing test needs to be fixed: either you are not handling unreliable data properly, or you are only asserting on the correct API responses.
The key distinction here is that you're not looking for the E2E test to verify that everything is on the happy path, or that the API is always working.
You are trying to verify that the application can handle all paths & that you can handle any API response.
This means writing E2E tests both for verifying that behaviour X happens when the response is as expected, and for verifying that behaviour Y happens when the response is unexpected.
After all, if the API can return unreliable data, then your code - and eventually, a user - will have to deal with it somehow. It is much, much, much better to ensure that your system has guard rails in place with E2E tests as opposed to letting your user test your API for you.
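One way to square this with variable live data is to assert on the contract (shape) of the response rather than on exact values, so the test stays green no matter which data set an environment serves. A hedged TypeScript sketch, again using a hypothetical `/api/orders` endpoint and an assumed structured error body:

```typescript
describe('orders API contract (real backend)', () => {
  it('returns a well-formed payload regardless of the data set', () => {
    // failOnStatusCode: false lets us assert on unexpected responses too.
    cy.request({ url: '/api/orders', failOnStatusCode: false }).then((resp) => {
      if (resp.status >= 200 && resp.status < 300) {
        // Expected path: an array of orders whose shape the UI relies on.
        expect(resp.body).to.be.an('array');
        (resp.body as Array<Record<string, unknown>>).forEach((order) => {
          expect(order).to.have.property('id');
          expect(order).to.have.property('status');
        });
      } else {
        // Unexpected path: assumed structured error body -- adjust to your API.
        expect(resp.body).to.have.property('error');
      }
    });
  });
});
```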
Another advantage is that your E2E tests will catch API changes that perhaps aren't communicated beforehand, as well as sudden accidental breaking changes. For example, they can run on every pipeline build (which should be triggered very often if you are doing continuous integration).
At the very least, they should run after any deployment to a production environment.
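Wiring that up can be as simple as pointing one Cypress config at whichever environment the pipeline just deployed. A sketch (the `CYPRESS_BASE_URL` variable is an assumption your CI would set; the localhost fallback matches a default Angular dev server):

```typescript
// cypress.config.ts -- a sketch, not your actual config.
import { defineConfig } from 'cypress';

export default defineConfig({
  e2e: {
    // Assumed variable set by the pipeline to the URL of the freshly deployed
    // environment (staging or production).
    baseUrl: process.env.CYPRESS_BASE_URL ?? 'http://localhost:4200',
    // A retry or two in CI absorbs transient network blips without hiding real failures.
    retries: { runMode: 2, openMode: 0 },
  },
});
```

The pipeline step itself can then just run `npx cypress run` after each deployment.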
E2E tests will allow you to, firstly, make the API consumer more resilient based on your test results &, secondly, relay issues that need to be fixed in the API back to the team responsible for it.
In conclusion, don't fake, mock or stub API endpoints for E2E tests.
Test every feature of the system, including, but not limited to, API error handling.
But don't fall into the trap of ignoring unit tests or integration tests, or of writing more E2E tests than unit/integration tests - unit and integration tests are still the most important tests in any case, especially for the API in this scenario.