My question is regarding Helm tests and their basic philosophy.
The simple question is, should the helm test pod be a "full copy" of the production pod?
In all the examples I have seen, the test pod uses an image built from a stripped-down Dockerfile with just the essentials, like curl, and the test.yaml template only includes the host/port/secrets of a service. The test itself usually does nothing more than ping or connect to that service.
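For example, a typical test template along those lines might look like this (a rough sketch; the release name, service name, and port value are placeholders, not from any real chart):

```yaml
# templates/tests/test-connection.yaml -- hypothetical names throughout
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    # Marks this Pod as a Helm test hook; it only runs on `helm test`.
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: test-connection
      image: busybox:1.36
      # Just checks that the service port accepts a TCP connection;
      # no application code is involved at all.
      command: ["nc", "-z", "-w", "5"]
      args:
        - "{{ .Release.Name }}-svc"
        - "{{ .Values.service.port }}"
```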
But is that all the helm test philosophy covers? Or was the idea behind helm tests that the test pod should be a full replica of the production pod and run basically any test you wish?
Answer:
Hopefully you've run some more targeted tests on your code before you build and deploy an image. The sequence I generally use looks something like this (a CI sketch follows the list):
- Outside a container, run the service's unit tests (RSpec/JUnit/PyTest/Jest/catch2/shunit/...).
- Once those tests pass, build the Docker image and push it to a repository.
- Deploy the candidate image to a test cluster.
- Run integration tests that make plain HTTP requests, or post messages to a Kafka or RabbitMQ queue, and observe the results from the outside.
- Promote the image to production.
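Mapped onto CI tooling, that sequence might look roughly like this hypothetical GitHub Actions sketch; the registry, chart path, kube context, and test commands are all placeholders:

```yaml
# .github/workflows/ci.yaml -- a sketch of the sequence above, not a
# definitive pipeline; every name here is an assumption.
name: build-test-deploy
on: [push]
jobs:
  pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests outside any container
        run: pytest tests/unit          # or rspec, mvn test, npm test, ...
      - name: Build the image and push it to a repository
        run: |
          docker build -t registry.example.com/myapp:${{ github.sha }} .
          docker push registry.example.com/myapp:${{ github.sha }}
      - name: Deploy the candidate image to a test cluster
        run: |
          helm upgrade --install myapp ./chart \
            --kube-context test-cluster \
            --set image.tag=${{ github.sha }}
      - name: Run integration tests against the deployed service
        run: pytest tests/integration   # plain HTTP/queue clients, from outside
      # Promotion to production would follow here, often gated on approval.
```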
In this sequence, notice that the tests that depend on the code proper (the unit tests) run before we build a container at all, and the tests against the deployed system (the integration tests) exercise the external interface of the container in the deployed environment. In neither case do you need the application code, the tests, and "in a container" all at the same time.
I'd expect `helm test` tests to be a little more of a smoke test than a full integration test: check that the service is alive and can answer requests in the expected format. It's true that the test is an arbitrary Job and can run an arbitrary container with arbitrary code, which could include the full application as its image. But running tests on something "next to" the deployed pod isn't the same as running tests on the deployed pod itself.
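For concreteness, a minimal smoke test in that spirit might look like the sketch below, assuming the chart defines a `mychart.fullname` helper, a `service.port` value, and a `/healthz` endpoint (all illustrative assumptions about the chart, not Helm requirements):

```yaml
# templates/tests/smoke-test.yaml -- runs via `helm test <release>`
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "mychart.fullname" . }}-smoke-test"
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: smoke-test
      image: curlimages/curl:8.8.0
      # --fail makes curl exit non-zero on an HTTP error status,
      # which marks the Helm test as failed.
      command: ["curl", "--fail", "--silent", "--show-error"]
      args:
        - "http://{{ include "mychart.fullname" . }}:{{ .Values.service.port }}/healthz"
```

Because the test image is just curl, the test exercises the deployed pod from the outside instead of shipping the application code and test suite into the cluster.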