changing python version in docker


I am trying to run this repo in Docker: https://github.com/facebookresearch/detectron2/tree/main/docker

but when I try to build it with docker compose, I receive this error:

ERROR: Package 'detectron2' requires a different Python: 3.6.9 not in '>=3.7'

The default Python version on my machine is 3.10, but I don't know why Docker is trying to run it with Python 3.6.9.

Is there a way for me to switch to a higher Python version while using the following Dockerfile?

FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04
# use an older system (18.04) to avoid opencv incompatibility (issue#3524)

ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
    python3-opencv ca-certificates python3-dev git wget sudo ninja-build
RUN ln -sv /usr/bin/python3 /usr/bin/python

# create a non-root user
ARG USER_ID=1000
RUN useradd -m --no-log-init --system  --uid ${USER_ID} appuser -g sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER appuser
WORKDIR /home/appuser

ENV PATH="/home/appuser/.local/bin:${PATH}"
RUN wget https://bootstrap.pypa.io/pip/3.6/get-pip.py && \
    python3 get-pip.py --user && \
    rm get-pip.py

# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
RUN pip install --user tensorboard cmake   # cmake from apt-get is too old
RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html

RUN pip install --user 'git+https://github.com/facebookresearch/fvcore'
# install detectron2
RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo
# set FORCE_CUDA because during `docker build` cuda is not accessible
ENV FORCE_CUDA="1"
# This will by default build detectron2 for all common cuda architectures and take a lot more time,
# because inside `docker build`, there is no way to tell which architecture will be used.
ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"

RUN pip install --user -e detectron2_repo

# Set a fixed model cache directory.
ENV FVCORE_CACHE="/tmp"
WORKDIR /home/appuser/detectron2_repo

# run detectron2 under user "appuser":
# wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg
# python3 demo/demo.py  \
    #--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
    #--input input.jpg --output outputs/ \
    #--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl

CodePudding user response:

This is an open issue with facebookresearch/detectron2. The developers raised the minimum Python requirement from 3.6 to 3.7 in commit 5934a14 last week, but didn't update the Dockerfile.

Your base image, nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04, is built on Ubuntu 18.04, whose default python3 is 3.6.9; that is where the 3.6.9 comes from. Changing your FROM line to the following might resolve it, since Ubuntu 20.04 ships Python 3.8, but I haven't verified this; the image mentioned in the issue (pytorch/pytorch:1.10.0-cuda11.3-cudnn8-devel) might work as well.

FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu20.04
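
For context, here is a minimal sketch of what the top of the Dockerfile would look like with that change applied. This is my untested assumption of the needed edits, not a verified build; note that the 3.6-specific get-pip.py bootstrap further down should also be swapped for the generic one, since Ubuntu 20.04 ships Python 3.8.

FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu20.04

ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
    python3-opencv ca-certificates python3-dev git wget sudo ninja-build
RUN ln -sv /usr/bin/python3 /usr/bin/python

# ... non-root user setup unchanged ...

# Ubuntu 20.04 ships Python 3.8, so use the generic bootstrap script
# instead of the 3.6-specific one:
RUN wget https://bootstrap.pypa.io/get-pip.py && \
    python3 get-pip.py --user && \
    rm get-pip.py

The rest of the original Dockerfile (torch, fvcore, detectron2) should then install against Python 3.8, which satisfies the '>=3.7' requirement.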

CodePudding user response:

You can use pyenv: https://github.com/pyenv/pyenv

Just google "docker pyenv container"; that will turn up entries like: https://gist.github.com/jprjr/7667947

If you follow the gist you can see how it has been kept up to date; it is very easy to update to the latest Python that pyenv supports, anything from 2.2 to 3.11.

The only drawback is that the container becomes quite large, because it holds all the glibc development tools and libraries needed to compile CPython. That toolchain often helps, though, when you need modules that ship without wheels and have to be compiled, since everything required is already there.
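
For illustration only, here is a rough, untested sketch of how pyenv could build a newer interpreter inside a container based on the same CUDA image; the exact build-dependency list and the 3.10.13 version are my assumptions.

FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04

ENV DEBIAN_FRONTEND noninteractive
# build dependencies pyenv needs to compile CPython from source
RUN apt-get update && apt-get install -y \
    git curl build-essential libssl-dev zlib1g-dev libbz2-dev \
    libreadline-dev libsqlite3-dev libncursesw5-dev xz-utils tk-dev \
    libffi-dev liblzma-dev

# install pyenv system-wide and compile the interpreter
ENV PYENV_ROOT=/opt/pyenv
ENV PATH="${PYENV_ROOT}/shims:${PYENV_ROOT}/bin:${PATH}"
RUN git clone https://github.com/pyenv/pyenv.git ${PYENV_ROOT} && \
    pyenv install 3.10.13 && \
    pyenv global 3.10.13

# python and pip now resolve to 3.10 through the pyenv shims
RUN python -m pip install --upgrade pip

From there the remaining detectron2 build steps could follow as in the original Dockerfile, with pip resolving to the pyenv-built interpreter.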
