runc create failed: unable to start container process: exec: no such file or directory


I am trying to deploy a containerized web app on OpenShift using a Helm chart. When I deploy the app, I get the following error in the pod events -

Events:
  Type     Reason          Age                   From               Message

  Normal   Pulled          14m                   kubelet            Successfully pulled image "<private-gitlab-registry-docker-image>:latest" in 3.787555091s
  Warning  Failed          14m                   kubelet            Error: container create failed: time="2022-11-11T13:51:47Z" level=error msg="runc create failed: unable to start container process: exec: \"python3 src/myapp.py\": stat python3 src/myapp.py: no such file or directory"

Here is the Dockerfile -

FROM <private-gitlab-registry-centos-image>

ADD files/etc/yum.repos.d/* /etc/yum.repos.d/

RUN yum update -y

WORKDIR /app

RUN yum install -y python-keystoneclient python3-flask python3-keystoneauth1 python3-redis python3-werkzeug python3-pip python3-keystoneclient
RUN pip install flask-caching

COPY . /app

ENTRYPOINT [ "python3" ]

CMD [ "src/myapp.py" ]

When I run this Docker image manually, it works just fine. But when I deploy it on Kubernetes, the kubelet throws the above error.

Here is my deployment.yaml -

---
apiVersion: apps/v1
kind: Deployment                 # Type of Kubernetes resource
metadata:
  name: myapp             # Unique name of the Kubernetes resource
spec:
  replicas: 1                    # Number of pods to run at any given time
  selector:
    matchLabels:
      app: myapp          # This deployment applies to any Pods matching the specified label
  template:                      # This deployment will create a set of pods using the configurations in this template
    metadata:
      labels:                    # The labels that will be applied to all of the pods in this deployment
        app: myapp 
    spec:
      containers:
      - name: myapp
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        imagePullPolicy: {{ .Values.image.imagePullPolicy }}
          {{- include "myapp.command" . | nindent 8 }}
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
          - containerPort: 8080  # Should match the port number that the application listens on
        env:                     # Environment variables passed to the container
          - name: REDIS_HOST
            value: redis-master
          - name: REDIS_PORT
            value: "6379"  

And values.yaml -

image:
  repository: gitlab-registry.cern.ch/batch-team/hepspec-query/hepspec-query
  tag: latest
  imagePullSecret: {}
  imagePullPolicy: Always
  command: [ "python3" , "src/hepspecapp.py" ]
  args: {}

CodePudding user response:

The simplest thing to do here is to remove the part of the Helm chart that provides command: and thereby overrides the image's ENTRYPOINT. The image already knows what command it's supposed to run (even if it's oddly split across two Docker directives), so you don't need to specify it when you run the image. Similarly, it would be unusual to reconfigure this at deploy time to run some other command without substantially rearchitecting the container setup.

containers:
  - name: myapp
    image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
    imagePullPolicy: {{ .Values.image.imagePullPolicy }}
    # but you don't need to specify command: or make it configurable
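
For reference, the ENTRYPOINT and CMD from the Dockerfile above are concatenated into the container's argument vector, so with no command: override the container effectively runs (each element being a separate argument):

["python3", "src/myapp.py"]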

If it's important to you to make this configurable, you're probably running into a syntax problem with the default Go serialization of lists out of templates. If you run helm template over your chart, it'll probably print something like

containers:
  - name: myapp
    image: gitlab-registry.cern.ch/batch-team/hepspec-query/hepspec-query:latest
    imagePullPolicy: Always
    command: [python3 src/hepspecapp.py]

That is, .Values.image.command is a list: Helm parses and stores it internally as a list, and what gets printed is Go's default serialization of it, which isn't what's written in the values.yaml file. By coincidence that output is still valid YAML, but it's now a list containing a single string with an embedded space, so the container runtime tries to exec one file whose name contains that space, which is exactly the stat ... no such file or directory failure shown in the pod events.
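
In other words, the rendered output is equivalent to the first form below, while what the container runtime needs is the second:

# a single argv entry with an embedded space; runc tries to stat a file by this whole name
command: ["python3 src/hepspecapp.py"]

# separate argv entries; runs the interpreter with the script as its argument
command: ["python3", "src/hepspecapp.py"]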

Helm contains a lightly-documented toYaml function that can convert an arbitrary structure back to valid YAML. Its output starts at the first column, so you need to make sure to indent the result appropriately for where you splice it into the template.

containers:
  - name: myapp
{{- if .Values.image.command }}
    command:
{{ .Values.image.command | toYaml | indent 6 }}
{{- end }}
{{- if .Values.image.args }}
    args:
{{ .Values.image.args | toYaml | indent 6 }}
{{- end }}
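
With that in place, helm template should render the command as a proper YAML list, along these lines:

containers:
  - name: myapp
    command:
      - python3
      - src/hepspecapp.py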