How to run only specific command as root and other commands with default user in docker-compose


This is my Dockerfile:

FROM python:3.8.12-slim-bullseye as prod-env
RUN apt-get update && apt-get install -y unzip vim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
USER nobody:nogroup

This is what my docker-compose.yml looks like:

api_server:
    build:
      context: .
      target: prod-env
    image: company/server
    volumes:
      - ./shared/model_server/models:/models
      - ./static/images:/images
    ports:
      - 8200:8200
    command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"

I want to add read, write, and execute permissions on the shared directories.

And I also need to run a couple of other commands as root.

So I have to execute this command as root every time after the image is built:

docker exec -it -u root api_server_1 bash -c "python copy_stuffs.py; chmod -R a+rwx /models; chmod -R a+rwx /images"

Now, I want docker-compose to execute these lines.

But as you can see, the user in docker-compose has to be nobody, as specified by the Dockerfile. So how can I execute root commands from the docker-compose file?

An option I've been considering: install the sudo command in the Dockerfile and use sudo.

Is there a better way?

CodePudding user response:

In docker-compose.yml, create another service that uses the same image and volumes.

For this new service, override the user with user: root:root and the command with the command you need to run as root, then add a dependency so the new service starts before the regular working container.

api_server:
    build:
      context: .
      target: prod-env
    image: company/server
    volumes:
      - ./shared/model_server/models:/models
      - ./static/images:/images
    ports:
      - 8200:8200
    command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
    # This makes sure the startup order is correct: api_server_decorator starts first
    depends_on:
      - api_server_decorator
api_server_decorator:
    build:
      context: .
      target: prod-env
    image: company/server
    volumes:
      - ./shared/model_server/models:/models
      - ./static/images:/images
    # No ports needed - it is only a decorator
    # Overriding USER with root:root
    user: "root:root"
    # Overriding command; run it through a shell so the `;` separators work
    command: bash -c "python copy_stuffs.py; chmod -R a+rwx /models; chmod -R a+rwx /images"
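
One caveat: plain depends_on only orders container startup; it does not wait for the decorator's command to finish. If your Compose version supports the long depends_on syntax, a sketch of the stricter variant under api_server would be:

    depends_on:
      api_server_decorator:
        condition: service_completed_successfully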

There are other possibilities, such as changing the Dockerfile: remove the USER restriction and use an entrypoint script that does the privileged work as root, then drops to the unprivileged user with su - nobody, or better, exec gosu, which retains PID 1 and proper signal handling.
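
As a minimal sketch of that entrypoint approach (assuming gosu is installed in the image, e.g. via apt-get install -y gosu on bullseye, and that the USER line is removed from the Dockerfile so the container starts as root):

#!/bin/sh
# entrypoint.sh - start as root, do the privileged setup, then drop privileges.
set -e

# Privileged one-time work (script name taken from the question above).
python copy_stuffs.py
chmod -R a+rwx /models /images

# Hand off to the real command (gunicorn from the Compose command:) as
# nobody:nogroup; `exec gosu` keeps it running as PID 1 for signal handling.
exec gosu nobody:nogroup "$@"

Wire it up with ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile; the Compose command: is then passed to the script as its arguments.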

CodePudding user response:

In my eyes, giving a container root rights is quite hacky and dangerous. If you want to, for example, remove the files written by the container, you need root rights on the host as well. If you want to allow a container to access files on the host filesystem, just run the container as an appropriate user:

api_server:
   user: my_docker_user:my_docker_group

then, on the host, give that group rights to the directories:

sudo chown -R my_docker_user:my_docker_group ./shared/model_server/models ./static/images
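
One caveat: the names given to user: must exist inside the container's /etc/passwd and /etc/group; numeric IDs always work regardless. A sketch with numeric IDs (1000:1000 is just an assumption, check id -u and id -g on your host):

api_server:
   user: "1000:1000"

with the matching sudo chown -R 1000:1000 ./shared/model_server/models on the host.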

CodePudding user response:

You should build all of the content you need into the image itself, especially since you have this use case of occasionally needing to run a process to update it (you are not trying to use an isolation tool like Docker to simulate a local development environment). In your Dockerfile, COPY these directories into the image:

COPY shared/model_server/models /models
COPY static/images /images

Do not make these directories writable, and do not make the individual files in them executable. The directories will generally be mode 0755 and the files mode 0644, owned by root, and that's fine.

In the Compose setup, do not mount host content over these directories either. You should just have:

services:
  api_server:
    build: .  # use the same image in all environments
    image: company/server
    ports:
      - 8200:8200
    # no volumes:, do not override the image's command:

Now when you want to update the files, you can rebuild the image (without interrupting the running application, without docker exec, and without an alternate user):

docker-compose build api_server

and then do a relatively quick restart, running a new container on the updated image:

docker-compose up -d
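
Depending on your docker-compose version, the two steps can also be combined; --build is a standard flag on docker-compose up:

docker-compose up -d --build api_server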