Have one container access another with helm/Docker


The context

Let me know if I've gone down a rabbit hole here.

I have a simple web app with a frontend and a backend component, deployed with Docker/Helm inside a Kubernetes cluster. The frontend is served by nginx, and the backend runs as a NodeJS microservice.

I had been thinking of running both in the same pod, but ran into problems getting both nginx and Node to run in the background. I could write a startup script that launches both, but the consensus online is that it's a best practice to have each container be responsible for running only one service - so one container to run nginx and another to run the microservice.

The problem

That's fine, but then say the HTML pages that nginx serves need to know where to send a POST request in the backend - how can those pages know what IP to hit for the backend's container? Articles like this one talk about manually creating a Docker network so the two containers can speak to one another, but how can I configure this with Helm so that the frontend container knows how to reach the backend container each time a new container is deployed, without manually configuring any network service each time? I want the deployments to be automated.

CodePudding user response:

You mention that your frontend is based on Nginx.

Accordingly, the frontend must hit a public URL for the backend: the HTML pages run in the user's browser, so it is the browser (not the nginx container) that sends the POST requests, and it can only reach the backend through an externally reachable address.

Thus, the backend must be exposed by choosing one of the following Service types (a minimal Service sketch follows this list):

  • NodePort -> the frontend will reach the backend at http://<any-node-ip>:<node-port>
  • or LoadBalancer -> the frontend will reach the backend at http://<loadbalancer-external-ip>:<service-port> of the service.
  • or keep it ClusterIP, but add an Ingress resource on top of it -> the frontend will reach the backend through its ingress host, e.g. http://ingress.host.com.
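
For reference, a ClusterIP Service for the backend might look roughly like the following sketch. The name backend, the app: backend selector, and port 3000 are illustrative assumptions, not values from the question; adjust them to match your deployment.

# Minimal sketch of a ClusterIP Service for the Node backend.
# The name "backend", the selector, and port 3000 are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP        # default; switch to NodePort or LoadBalancer to expose it directly
  selector:
    app: backend         # must match the labels on the backend pods
  ports:
    - port: 80           # port the Service listens on inside the cluster
      targetPort: 3000   # port the Node process listens on in the container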

We recommend the last option, but it requires an ingress controller to be installed in the cluster.

Once you have tested one of these and it works, you can extend your Helm chart to update the Service and add the Ingress resource if needed.
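
As a rough sketch of that last step (not the exact chart from the question), an Ingress routing traffic to the backend Service could look like this. The host ingress.host.com, the Service name backend, and port 80 are placeholders; in a Helm chart you would typically template them from values.yaml.

# Minimal Ingress sketch routing to the backend Service.
# Host, service name, and port are placeholders; in a Helm template
# you might use something like {{ .Values.ingress.host }} instead.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
spec:
  rules:
    - host: ingress.host.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend   # the ClusterIP Service created for the backend
                port:
                  number: 80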

CodePudding user response:

You may try setting up two containers in one pod and having them communicate via localhost (but on different ports!). A good example is here - Kubernetes multi-container pods and container communication; a sketch follows below.
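
For example, a two-container Pod where nginx and the Node service talk over localhost might look roughly like this. The containers share the same network namespace, so nginx can reach the Node process at http://localhost:3000; the image names and ports here are illustrative assumptions.

# Sketch of a single Pod running both containers.
# Image names and ports are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: frontend
      image: nginx            # serves the static pages on port 80
      ports:
        - containerPort: 80
    - name: backend
      image: node-backend     # placeholder image for the Node microservice
      ports:
        - containerPort: 3000 # must differ from the nginx port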

Another option is to create two separate deployments and a Service for each. Instead of using IP addresses (which won't stay the same across re-deployments of your app), use the Services' DNS names to connect to them.

Example - communication between two NGINX services.

First, create two NGINX deployments:

kubectl create deployment nginx-one --image=nginx --replicas=3
kubectl create deployment nginx-two --image=nginx --replicas=3

Let's expose them using the kubectl expose command. It's the same as if I had created the Services from a YAML file (an equivalent manifest is sketched after the commands):

kubectl expose deployment nginx-one --name=my-service-one --port=80
kubectl expose deployment nginx-two --name=my-service-two --port=80
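
For comparison, the equivalent Service written as YAML would look roughly like this (shown for my-service-one; my-service-two is the same apart from the name and selector). This is a sketch of what kubectl expose generates, assuming the app: nginx-one label that kubectl create deployment applies.

# Roughly equivalent to:
#   kubectl expose deployment nginx-one --name=my-service-one --port=80
apiVersion: v1
kind: Service
metadata:
  name: my-service-one
spec:
  selector:
    app: nginx-one       # label added by "kubectl create deployment nginx-one"
  ports:
    - port: 80           # Service port
      targetPort: 80     # container port in the nginx pods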

Now let's check the services - as you can see, both of them are of type ClusterIP:

user@shell:~$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.36.0.1      <none>        443/TCP   66d
my-service-one   ClusterIP   10.36.6.59     <none>        80/TCP    60s
my-service-two   ClusterIP   10.36.15.120   <none>        80/TCP    59s

I will exec into a pod from the nginx-one deployment and curl the second service:

user@shell:~$ kubectl exec -it nginx-one-5869965455-44cwm -- sh 
# curl my-service-two
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

If you have problems, make sure a proper CNI plugin is installed for your cluster - also check the Cluster Networking article for more details.
