Not able to create Multi containers in a single pod

I am trying to create multiple containers in a single pod and am facing the following issue:

YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: multi-containers
spec:

  restartPolicy: Never

  volumes:
  - name: multi-data
    emptyDir: {}

  containers:

  - name: nginx-multicontainerone
    image: nginx
    volumeMounts:
    - name: multi-data
      mountPath: /one

  - name: nginx-multicontainertwo
    image: nginx
    volumeMounts:
    - name: multi-data
      mountPath: /two

  - name: debian-container
    image: debian
    volumeMounts:
    - name: multi-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]

Below is the output of describe command:

kubectl describe pod multi-containers

Name:         multi-containers
Namespace:    default
Priority:     0
Node:         docker-desktop/192.168.65.4
Start Time:   Mon, 01 Nov 2021 20:07:08 +0530
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.1.0.238
IPs:
  IP:  10.1.0.238
Containers:
  nginx-multicontainerone:
    Container ID:   docker://91561db271c29670880de55dda6a5f1724de42583d5712807f37dbc1597aa2ea
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 01 Nov 2021 20:07:28 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /one from multi-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n6hst (ro)
  nginx-multicontainertwo:
    Container ID:   docker://049e08584f49e7970fac5b1fbb60dbf67c9928944123336c62ec423a7c656239
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 01 Nov 2021 20:07:32 +0530
      Finished:     Mon, 01 Nov 2021 20:07:34 +0530
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /two from multi-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n6hst (ro)
  debian-container:
    Container ID:  docker://8eb1b494e3da1672cef86bbe34af11f3e6f2e148ab56fbb969aca1f81205d5fa
    Image:         debian
    Image ID:      docker-pullable://debian@sha256:4d6ab716de467aad58e91b1b720f0badd7478847ec7a18f66027d0f8a329a43c
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      echo Hello from the debian container > /pod-data/index.html
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 01 Nov 2021 20:07:35 +0530
      Finished:     Mon, 01 Nov 2021 20:07:35 +0530
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /pod-data from multi-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n6hst (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  multi-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-n6hst:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  66s   default-scheduler  Successfully assigned default/multi-containers to docker-desktop
  Normal  Pulling    65s   kubelet            Pulling image "nginx"
  Normal  Pulled     47s   kubelet            Successfully pulled image "nginx" in 18.2903658s
  Normal  Created    47s   kubelet            Created container nginx-multicontainerone
  Normal  Started    47s   kubelet            Started container nginx-multicontainerone
  Normal  Pulling    47s   kubelet            Pulling image "nginx"
  Normal  Pulled     44s   kubelet            Successfully pulled image "nginx" in 3.285316s
  Normal  Created    43s   kubelet            Created container nginx-multicontainertwo
  Normal  Started    43s   kubelet            Started container nginx-multicontainertwo
  Normal  Pulling    43s   kubelet            Pulling image "debian"
  Normal  Pulled     40s   kubelet            Successfully pulled image "debian" in 3.3076706s
  Normal  Created    40s   kubelet            Created container debian-container
  Normal  Started    40s   kubelet            Started container debian-container

Command: kubectl get pods

Result:

multi-containers   1/3   NotReady   0   14m

Main goal of this exercise: I am using a shared emptyDir volume and trying to access index.html from the containers "nginx-multicontainerone" and "nginx-multicontainertwo".

CodePudding user response:

I ran your pod on Minikube and was able to reproduce the issue. All containers in a pod share the same network namespace, so the first container grabs Nginx's default port (80), and the second Nginx container then fails when it tries to bind to the same port. To check the logs of the failing container, use the following command.

kubectl logs multi-containers -c nginx-multicontainertwo

When I ran it, I got the following output.

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/11/01 15:02:10 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/11/01 15:02:10 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2021/11/01 15:02:10 [notice] 1#1: try again to bind() after 500ms
2021/11/01 15:02:10 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/11/01 15:02:10 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2021/11/01 15:02:10 [notice] 1#1: try again to bind() after 500ms
2021/11/01 15:02:10 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/11/01 15:02:10 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2021/11/01 15:02:10 [notice] 1#1: try again to bind() after 500ms
2021/11/01 15:02:10 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/11/01 15:02:10 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2021/11/01 15:02:10 [notice] 1#1: try again to bind() after 500ms
2021/11/01 15:02:10 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/11/01 15:02:10 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2021/11/01 15:02:10 [notice] 1#1: try again to bind() after 500ms
2021/11/01 15:02:10 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()

Because the containers share one network namespace, each Nginx instance must listen on a distinct port. Configure one of them to listen on something other than 80.
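One way to do that (a sketch, not the only solution — the ConfigMap name nginx-two-conf and the choice of port 8080 are arbitrary) is to override the second container's default server config with a ConfigMap mounted over /etc/nginx/conf.d:

```yaml
# Sketch of a fixed manifest; names and port are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-two-conf
data:
  default.conf: |
    server {
        listen       8080;       # anything other than 80 avoids the clash
        listen  [::]:8080;
        location / {
            root   /two;         # serve the shared emptyDir content
            index  index.html;
        }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-containers
spec:
  restartPolicy: Never
  volumes:
  - name: multi-data
    emptyDir: {}
  - name: nginx-two-conf
    configMap:
      name: nginx-two-conf
  containers:
  - name: nginx-multicontainerone
    image: nginx
    volumeMounts:
    - name: multi-data
      mountPath: /one
  - name: nginx-multicontainertwo
    image: nginx
    volumeMounts:
    - name: multi-data
      mountPath: /two
    - name: nginx-two-conf
      mountPath: /etc/nginx/conf.d   # replaces the default server on port 80
  - name: debian-container
    image: debian
    volumeMounts:
    - name: multi-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
```

Note that the debian container will still terminate with "Completed" once its echo finishes, so the pod will report 2/3 ready; that is expected for this exercise. You can verify the shared file with, for example, kubectl exec multi-containers -c nginx-multicontainertwo -- cat /two/index.html.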
