I have a docker-compose file with the following services:
- backend
- database
- frontend
- database migration
- cypress (E2E tests)
The problem I'm struggling with is that when I run `docker-compose up`, the first service to start up is `cypress`. Cypress immediately tries to reach `backend` and fails with the error:
> http://backend:8000
> We are verifying this server because it has been configured as your `baseUrl`.
> Cypress automatically waits until your server is accessible before running tests.
> We will try connecting to it 3 more times...
> We will try connecting to it 2 more times...
> We will try connecting to it 1 more time...
> Cypress failed to verify that your server is running.
> Please start this server and then run Cypress again.
Then the Cypress service exits, and all the other services start up.
I've been trying to force Cypress to wait for the other services to be ready using `wait-on`, `docker-compose-wait`, and `wait-for-it.sh`, but none of these scripts work as I'd expect.
Expected behaviour:
- Cypress starts up, triggers the wait-for script
- All other services start up in the background
- wait-for script detects services are ready, runs cypress tests
Actual behaviour:
- Cypress starts up, triggers the wait-for script
- wait-for script blocks, trying to reach e.g. `backend`, until it hits the timeout and the Cypress service fails
- All other services start up
I can't see what is wrong with my setup, but I bet this isn't how these wait-for scripts are supposed to work. Does anyone have any idea how I can change the behaviour to meet these expectations?
Below is my setup for `wait-on`:
```yaml
version: "3.7"

x-backend-base: &backend_base
  env_file: .env
  build:
    context: ./backend/
    dockerfile: ./compose/backend/Dockerfile
  depends_on:
    - db
  [...]

services:
  db:
    image: postgres:9.6-alpine
  backend:
    <<: *backend_base
    command: python manage.py runserver 0.0.0.0:8000
  cypress:
    build:
      context: .
      dockerfile: ./docker/cypress/Dockerfile
    entrypoint: tail -f /dev/null
    environment:
      CYPRESS_BASE_URL: http://backend:8000
    [...]
```
I have CI configured to call the `yarn test` command, which I have configured in `package.json` to be:

```
"test": "wait-on --delay 3000 --timeout 180000 http://backend:8000/login/ && cypress run",
```
Setup for using the `docker-compose-wait` script:

`package.json`:

```
"test": "/wait && cypress run",
```

`docker-compose.yml`:
```yaml
cypress:
  build:
    context: .
    dockerfile: ./docker/cypress/Dockerfile
  environment:
    CYPRESS_BASE_URL: http://backend:8000
    WAIT_HOSTS: backend:8000
```
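(Side note: in case the 45-second limit in the log below is part of the problem, I assume the timeout can be raised through the tool's timeout variable; the docker-compose-wait README calls it `WAIT_TIMEOUT`, or `WAIT_HOSTS_TIMEOUT` in older versions. Untested sketch:)

```yaml
cypress:
  environment:
    CYPRESS_BASE_URL: http://backend:8000
    WAIT_HOSTS: backend:8000
    # assumption: raise docker-compose-wait's timeout (in seconds);
    # double-check the exact variable name for your version
    WAIT_TIMEOUT: 180
```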
Log from executing `yarn test` in the `cypress` service:

```
yarn run v1.22.17
$ /wait && cypress run
[INFO wait] --------------------------------------------------------
[INFO wait] docker-compose-wait 2.9.0
[INFO wait] ---------------------------
[DEBUG wait] Starting with configuration:
[DEBUG wait] - Hosts to be waiting for: [backend:8000]
[...]
[DEBUG wait] --------------------------------------------------------
[INFO wait] Host [backend:8000] not yet available...
[...]
[INFO wait] Host [backend:8000] not yet available...
[ERROR wait] Timeout! After 45 seconds some hosts are still not reachable
error Command failed with exit code 1.
```
I get the same behaviour with `wait-for-it.sh`.

I can confirm the Cypress `baseUrl` is correct: sometimes when I retry a failed job on CI, the application services happen to load faster and the Cypress tests pass.
CodePudding user response:
You can use `depends_on` with a `condition`.
```yaml
services:
  backend:
    healthcheck:
      # adjust the URL to an endpoint your backend actually serves
      test: curl -fsS http://localhost:8000/ping
      interval: 5s
      retries: 12
  cypress:
    depends_on:
      backend:
        condition: service_healthy
```
It may also be sufficient to use the condition `service_started`, which, afaik, is equivalent to `depends_on` with the plain array syntax (no condition).
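For reference, a sketch of the two forms I mean; as far as I know they behave the same:

```yaml
services:
  cypress:
    # long form with an explicit condition
    depends_on:
      backend:
        condition: service_started
    # equivalent short form (array syntax, no condition):
    # depends_on:
    #   - backend
```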
Some people are also doing it the brutal way by restarting until it succeeds.
```yaml
services:
  cypress:
    deploy:
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 10
```
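If you are not deploying to swarm, the plain top-level `restart` key may be the variant your compose setup actually honours; a minimal, untested sketch (note it has no `delay`/`max_attempts` knobs):

```yaml
services:
  cypress:
    # keep restarting the container whenever it exits non-zero
    restart: on-failure
```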
Personally, I think it's best if your services are able to boot even when a given dependency isn't up. They could retry the connection with exponential backoff and, until it succeeds, respond with something like a 503 on the endpoints that require that dependency.
Additionally, check connectivity to that dependency in a readiness endpoint. An orchestration framework like Kubernetes could then detect that the service isn't ready.
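For illustration, a minimal sketch of such a probe in Kubernetes; the `/ready` path, port, and names are placeholders for whatever endpoint performs the dependency check:

```yaml
# Hypothetical pod spec fragment: /ready is assumed to verify connectivity
# to the dependency before reporting success.
containers:
  - name: backend
    image: my-backend:latest
    readinessProbe:
      httpGet:
        path: /ready
        port: 8000
      periodSeconds: 5
      failureThreshold: 3
```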
CodePudding user response:
Using `wait-for-it.sh`, you can adjust the timeout with the `-t` parameter: https://github.com/vishnubob/wait-for-it/blob/master/README.md
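For example, wired into the compose command so the wait happens before Cypress starts (assuming the script is bundled in the image at `./wait-for-it.sh`):

```yaml
cypress:
  # -t 120 raises the timeout to 120 seconds; -t 0 would wait indefinitely
  command: sh -c "./wait-for-it.sh backend:8000 -t 120 -- cypress run"
```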