Not able to run a Linux command in the background from a Dockerfile?


Here's my Dockerfile:

FROM ubuntu:20.04

ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && apt upgrade -y
RUN apt install -y -q software-properties-common
RUN apt install -y -q build-essential python3-pip python3-dev
RUN apt-get install -y gcc make apt-transport-https ca-certificates build-essential
RUN apt-get install -y curl autoconf automake libtool pkg-config git libreoffice wget
RUN apt-get install -y g++
RUN apt-get install -y autoconf automake libtool
RUN apt-get install -y pkg-config
RUN apt-get install -y libpng-dev
RUN apt-get install -y libjpeg8-dev
RUN apt-get install -y libtiff5-dev
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y libleptonica-dev
RUN apt-get install -y libicu-dev libpango1.0-dev libcairo2-dev

# python dependencies
RUN pip3 install -U pip setuptools wheel
RUN pip3 install gunicorn uvloop httptools dvc[s3]
RUN pip3 install nltk
RUN python3 -c "import nltk;nltk.download('stopwords')" 

# copy required files
RUN bash -c 'mkdir -p /app/{app,models,requirements}'
COPY ./config.yaml /app
COPY ./models /app/models
COPY ./requirements /app/requirements
COPY ./app /app/app


# tensorflow serving for models
RUN echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list && \
    curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
RUN apt-get update && apt-get install -y tensorflow-model-server
RUN tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/app/models/model.conf --model_base_path=/app/models &

ENTRYPOINT /usr/local/bin/gunicorn \
    -b 0.0.0.0:80 \
    -w 1 \
    -k uvicorn.workers.UvicornWorker app.main:app \
    --timeout 120 \
    --chdir /app \
    --log-level 'info' \
    --error-logfile '-'\
    --access-logfile '-'

No matter what I do, the line below never executes when I run the Docker image:

RUN tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/app/models/model.conf --model_base_path=/app/models &

Why is that? How can I run the above command in the background and then continue to the ENTRYPOINT in the Dockerfile? Any help is appreciated.

CodePudding user response:

You should be able to create a separate Dockerfile that only runs the TensorFlow server:

FROM ubuntu:20.04

# Install the server
RUN echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list \
 && curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add - \
 && apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      tensorflow-model-server

# Copy our local models into the image
COPY ./models /models

# Make the server be the main container command
CMD tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/models/model.conf --model_base_path=/models

Then you can remove the corresponding TensorFlow Serving lines from your main application's Dockerfile: the tensorflow-serving apt source and key, the apt-get install tensorflow-model-server step, and the RUN tensorflow_model_server ... & line.

Having done this, you can set up a Docker Compose setup that launches both containers:

version: '3.8'
services:
  application:
    build: .
    ports: ['8000:80']
    environment:
      - TENSORFLOW_URL=http://tf:8500
  tf:
    build:
      context: .
      dockerfile: Dockerfile.tensorflow
    # ports: ['8500:8500', '8501:8501']

Your application will need to read that address from os.environ['TENSORFLOW_URL']. Now you have two containers, and each one's CMD runs a single foreground process.
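
For example, a minimal sketch of what the application side might look like (the requests dependency, the model name my_model, and the use of the REST API are illustrative assumptions; if you call the REST API rather than gRPC, point TENSORFLOW_URL at port 8501 instead of 8500):

import os

import requests  # illustrative; a gRPC client would work just as well


# Base URL of the TensorFlow Serving container, injected by Docker Compose.
# Falls back to localhost when running the app outside Compose.
TENSORFLOW_URL = os.environ.get("TENSORFLOW_URL", "http://localhost:8501")


def predict(instances):
    # REST predict call; 'my_model' is a placeholder for whatever name
    # is configured in model.conf.
    response = requests.post(
        f"{TENSORFLOW_URL}/v1/models/my_model:predict",
        json={"instances": instances},
    )
    response.raise_for_status()
    return response.json()["predictions"]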

At a lower level, a Docker image doesn't include any running processes; think of it like a tar file plus a command line to run. Anything you start in the background in a RUN command will get terminated as soon as that RUN command completes.
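
A minimal illustration of that point (not taken from the question's image; ubuntu:20.04 and the sleep are just placeholders):

FROM ubuntu:20.04

# The background sleep lives only as long as this build step's shell;
# once the step finishes, only the filesystem changes (none here) are committed.
RUN sleep 600 &

# At runtime ps shows no sleep process: the image stored files, not processes.
CMD ["ps", "aux"]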

CodePudding user response:

Why is that?

Because RUN instructions execute only while the image is being built; at runtime your container is configured to run /usr/local/bin/gunicorn, as defined by the ENTRYPOINT instruction.

How can I run the above command in the background and then continue to the ENTRYPOINT in the Dockerfile?

The standard way to do this is to write a wrapper script that starts all the programs you need. For this example, something like run.sh:

#!/bin/bash

# Start tensorflow server
tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/app/models/model.conf --model_base_path=/app/models &

# Start gunicorn
/usr/local/bin/gunicorn \
    -b 0.0.0.0:80 \
    -w 1 \
    -k uvicorn.workers.UvicornWorker app.main:app \
    --timeout 120 \
    --chdir /app \
    --log-level 'info' \
    --error-logfile '-'\
    --access-logfile '-'

Then in the Dockerfile:

ADD run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
ENTRYPOINT /usr/local/bin/run.sh
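
For completeness, a quick usage sketch (the image tag my-app is just a placeholder): build the combined image and publish the gunicorn port, plus the TensorFlow Serving REST port if you need to reach it from the host:

docker build -t my-app .
docker run -p 80:80 -p 8501:8501 my-app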