I'm using Docker to deploy a uvicorn server that serves a TensorFlow model. The end of the Dockerfile looks like this:
# Start ASGI server
CMD ['./runserver.sh']
The runserver.sh script looks like this:
#!/usr/bin/env bash
# encoding:utf-8
# To run in production with multiple workers
uvicorn gateway:app --host=0.0.0.0 --workers 20 # Default port 8000
This is the command I am using, hoping that the container will start and stay up as a daemon-like background service:
docker run --detach --publish 8000:8000 tensor_image
But the container starts, prints its long identifier to the terminal, and then just exits. How do I keep it running in the background? And if I make uvicorn log its output to a local file inside the container, how can I view that log?
I'm using Linux Mint Ulyana as my operating system, if that matters.
CodePudding user response:
For a detached container to keep running, the command in its CMD statement has to run in blocking (foreground) mode. In your case, which I cannot reproduce, you should run uvicorn not as a daemon, but as a blocking application.
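One way to do that, as a minimal sketch (assuming uvicorn is on the image's PATH): drop the wrapper script and make uvicorn itself the container's main process, using the exec (JSON array) form of CMD. Note that the exec form requires double quotes; CMD ['./runserver.sh'] is not valid JSON, so Docker falls back to the shell form and tries to run the literal string, which fails.
# Run uvicorn directly as the container's main, foreground process
CMD ["uvicorn", "gateway:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "20"]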
CodePudding user response:
You could modify your runserver.sh script and run uvicorn with exec:
#!/usr/bin/env bash
# encoding:utf-8
# To run in production with multiple workers
exec uvicorn gateway:app --host=0.0.0.0 --workers 20 # Default port 8000
exec causes the process running the script to be replaced by the given command, instead of the command being started as a new child process.
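Outside of Docker, a toy script makes the effect of exec visible: the command ends up with the same PID as the script that launched it, rather than appearing as a child process.
#!/usr/bin/env bash
echo "script PID: $$"   # PID of the shell running this script
exec sleep 30           # sleep replaces the shell and keeps that same PID
While the script runs, ps -p with that PID from another terminal shows sleep, not bash.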
docker ties the lifetime of the container to the first process that runs in it. In cases like this one, where the startup process (your shell script) is not the main process of the container, and uvicorn would otherwise run as a separate child, you need to make sure that the main process takes the place of that first process, so that docker does not terminate the container early, as in your use case.
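As for the logs: when uvicorn runs in the foreground it writes to stdout/stderr, which Docker captures for detached containers, so docker logs is usually enough. If you do redirect the output to a file inside the container (the path below is just an example), you can read it with docker exec:
# Follow the output of the detached container
docker logs -f <container-id-or-name>
# If runserver.sh redirects the server output to a file, e.g.
#   exec uvicorn gateway:app --host=0.0.0.0 --workers 20 >> /var/log/uvicorn.log 2>&1
# you can tail that file from the host:
docker exec <container-id-or-name> tail -f /var/log/uvicorn.log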