Celery with docker compose


I have a docker compose file for my application, and celery is one of the services.

  • The command celery worker is working, but
  • the command celery multi is not.
  celery:
    container_name: celery_application
    build: 
      context: .
      dockerfile: deploy/Dockerfile
    # restart: always
    networks:
      - internal_network
    env_file:
      - deploy/.common.env
    # command: ["celery", "--app=tasks", "worker", "--queues=upload_queue", "--pool=prefork", "--hostname=celery_worker_upload_queue", "--concurrency=1", "--loglevel=INFO", "--statedb=/external/celery/worker.state"]  # This is working
     command: ["celery", "-A", "tasks", "multi", "start", "default", "model", "upload", "--pidfile=/external/celery/%n.pid", "--logfile=/external/celery/%n%I.log", "--loglevel=INFO", "--concurrency=1", "-Q:default", "default_queue", "-Q:model", "model_queue", "-Q:upload", "upload_queue"]  # This is not working
    # tty: true
    # stdin_open: true
    depends_on:
      - redis
      - db
      - pgadmin
      - web
    volumes:      
      - my_volume:/external

I'm getting this output:

celery | celery multi v5.2.7 (dawn-chorus)
celery | > Starting nodes...
celery |     > default@be788ec5974d: 
celery | OK
celery |     > model@be788ec5974d:
celery | OK
celery |     > upload@be788ec5974d:
celery exited with code 0

All services come up except celery, which exits with code 0. What am I missing when using celery multi? Please suggest.

CodePudding user response:

The celery multi command does not wait for the workers to finish; it starts multiple celery workers in the background and then exits. Unfortunately, in a Docker container the exit of that foreground process stops the container, so the background child workers are terminated along with it.

It's not good practice to use celery multi with Docker like this, because a problem in a single worker may never be reflected on the container console: a worker can crash, die, or hang in a loop inside the container without giving management or monitoring any signal. With the single worker command, the worker's exit code is returned to the Docker container, and Docker can restart the service when the worker terminates.
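For comparison, here is a minimal sketch of that approach: one compose service per queue, each running a single foreground worker. The service name is illustrative; the build, network, and volume settings are copied from your snippet, and you would repeat the pattern for model_queue and default_queue:

  celery_upload:
    build:
      context: .
      dockerfile: deploy/Dockerfile
    env_file:
      - deploy/.common.env
    networks:
      - internal_network
    depends_on:
      - redis
      - db
    volumes:
      - my_volume:/external
    # single foreground worker: its exit code reaches docker, so restart policies work
    command: ["celery", "--app=tasks", "worker", "--queues=upload_queue", "--hostname=celery_worker_upload_queue", "--concurrency=1", "--loglevel=INFO"]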

If you still really need to use celery multi like this, you can use bash to append another command that blocks forever, preventing the container from exiting:

command: ["bash", "-c", "celery -A tasks multi start default model upload --pidfile=/external/celery/%n.pid --logfile=/external/celery/%n%I.log --loglevel=INFO --concurrency=1 -Q:default default_queue -Q:model model_queue -Q:upload upload_queue; tail -f /dev/null"]

The tail -f /dev/null keeps your container alive forever, regardless of whether the celery workers are actually running. Of course, your container must have bash installed.
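A small variant of the same trick (assuming the worker log files end up under /external/celery, as your --logfile option specifies, and already exist once multi reports OK) is to tail the log files instead of /dev/null, so the workers' output still shows up in docker logs:

command: ["bash", "-c", "celery -A tasks multi start default model upload --pidfile=/external/celery/%n.pid --logfile=/external/celery/%n%I.log --loglevel=INFO --concurrency=1 -Q:default default_queue -Q:model model_queue -Q:upload upload_queue && tail -f /external/celery/*.log"]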

My assumption is that you would like to run all celery workers in a single container for ease of use. If so, you can try https://github.com/just-containers/s6-overlay instead of celery multi. S6 Overlay can monitor your celery workers, restart them when they exit, and provide process-supervisor utilities similar to celery multi, and it is designed for exactly this purpose.
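As a rough sketch of that setup (the s6-overlay version, paths, and directory names below are illustrative, not taken from your project), you install the overlay in the image, add one service directory per worker, and make /init the entrypoint:

  # deploy/Dockerfile (excerpt)
  ADD https://github.com/just-containers/s6-overlay/releases/download/v2.2.0.3/s6-overlay-amd64.tar.gz /tmp/
  RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /
  COPY deploy/services.d/ /etc/services.d/
  ENTRYPOINT ["/init"]

  # deploy/services.d/celery-upload/run (one directory like this per worker, marked executable)
  #!/bin/sh
  exec celery --app=tasks worker --queues=upload_queue --hostname=celery_worker_upload_queue --concurrency=1 --loglevel=INFO

With that in place, s6 restarts any worker that dies while the container itself keeps running.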
