I am trying to create a script that restarts a microservice (in my case node-red).
Here is my docker compose file:
docker-compose.yml
version: '2.1'
services:
  wifi-connect:
    build: ./wifi-connect
    restart: always
    network_mode: host
    privileged: true
  google-iot:
    build: ./google-iot
    volumes:
      - 'app-data:/data'
    restart: always
    network_mode: host
    depends_on:
      - "wifi-connect"
    ports:
      - "8883:8883"
  node-red:
    build: ./node-red/node-red
    volumes:
      - 'app-data:/data'
    restart: always
    privileged: true
    network_mode: host
    depends_on:
      - "google-iot"
volumes:
  app-data:
I am using wait-for-it.sh in order to check whether the previous container is up.
Here is an extract from the Dockerfile of the node-red microservice.
RUN chmod +x ./wait-for-it/wait-for-it.sh
# server.js will run when container starts up on the device
CMD ["bash", "/usr/src/app/start.sh", "bash", "/usr/src/app/wait-for-it/wait-for-it.sh google-iot:8883 -- echo Google IoT Service is up and running"]
I have also looked at inotify.
Basically, all I want is to restart the node-red container after a file has been created within the app-data volume, which is also mounted into the node-red container under the /data folder; the file path would be e.g. /data/myfile.txt.
Please note that this file gets generated automatically by the google-iot microservice, but the node-red container needs that file, and quite often the node-red container starts while /data/myfile.txt is not yet present.
CodePudding user response:
You can fix the race condition by using the long syntax of depends_on, where you can specify a health check. This guarantees that your file is present when your node-red service runs.
  node-red:
    build: ./node-red/node-red
    volumes:
      - 'app-data:/data'
    restart: always
    privileged: true
    network_mode: host
    depends_on:
      google-iot:
        condition: service_healthy
Then you can define a health check (see the Compose documentation on healthcheck) that tests whether the file is present in the volume. You can add the following to the service description of the google-iot service:
    healthcheck:
      test: ["CMD", "cat", "/data/myfile.txt"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s
Feel free to tune the duration values as needed.
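To see what that health check evaluates, here is a minimal sketch of its semantics: Docker marks the container healthy once the test command exits 0, and cat exits non-zero while the file is missing. The temporary directory below stands in for /data purely for illustration.

```shell
#!/bin/sh
# Simulate the health check: "cat FILE" fails while the file is absent
# and succeeds once it exists. Uses a temp dir instead of /data.
tmp=$(mktemp -d)
cat "$tmp/myfile.txt" >/dev/null 2>&1 && echo healthy || echo unhealthy
touch "$tmp/myfile.txt"
cat "$tmp/myfile.txt" >/dev/null 2>&1 && echo healthy || echo unhealthy
# prints: unhealthy, then healthy
```

Using test -f /data/myfile.txt instead of cat would behave the same way without reading the file's contents.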
Does this fix your problem?
CodePudding user response:
It sounds like you're trying to delay one container's startup until another has produced the file you're looking for, or exit if it's not available.
You can write that logic into a shell script fairly straightforwardly. For example:
#!/bin/sh
# entrypoint.sh

# Wait for the server to be available
./wait-for-it/wait-for-it.sh google-iot:8883
if [ $? -ne 0 ]; then
  echo 'google-iot container did not become available' >&2
  exit 1
fi

# Wait for the file to be present
seconds=30
while [ $seconds -gt 0 ]; do
  if [ -f /data/myfile.txt ]; then
    break
  fi
  sleep 1
  seconds=$(($seconds-1))
done
if [ $seconds -eq 0 ]; then
  echo '/data/myfile.txt was not created' >&2
  exit 1
fi

# Run the command passed to us as arguments
exec "$@"
In your Dockerfile, make this script the ENTRYPOINT. You must use JSON-array syntax on the ENTRYPOINT line; your CMD can use any valid syntax. Note that we run the wait-for-it script in the entrypoint wrapper, so you don't need to include it in the CMD. (And since the script is executable and begins with a "shebang" line, #!/bin/sh, we do not need to explicitly name an interpreter to run it.)
# Dockerfile
RUN chmod +x entrypoint.sh wait-for-it/wait-for-it.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
CMD ["/usr/src/app/start.sh"]
The entrypoint wrapper has two checks: first, that the google-iot container eventually accepts TCP connections on port 8883, and second, that the file is created. If either check fails, the script runs exit 1 before it reaches the CMD. This causes the container as a whole to exit with that status code (a restart: on-failure policy will still restart it).
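The file-polling part of the wrapper can also be factored into a small reusable function. This is just a sketch of the same loop with the path and timeout as parameters:

```shell
#!/bin/sh
# wait_for_file FILE TIMEOUT_SECONDS: returns 0 as soon as FILE exists,
# or 1 if it does not appear within the timeout.
wait_for_file() {
  file=$1
  seconds=$2
  while [ "$seconds" -gt 0 ]; do
    if [ -f "$file" ]; then
      return 0
    fi
    sleep 1
    seconds=$((seconds - 1))
  done
  return 1
}

# Example: another process creates the file while we wait.
tmp=$(mktemp -d)
( sleep 1; touch "$tmp/myfile.txt" ) &
wait_for_file "$tmp/myfile.txt" 5 && echo found || echo timed out
# prints: found
```

In the entrypoint this would be called as wait_for_file /data/myfile.txt 30, matching the 30-second budget above.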
I also might consider whether some other approach to getting the file might work, like using curl to make an HTTP request to the other container. There are several practical issues with sharing Docker volumes (particularly around file ownership, but also stale copies of the file left over from a previous run), and sharing files works especially badly in a clustered environment like Kubernetes.