Docker Compose accessing another container via localhost


I am trying to wrap my head around how to access other containers running as part of docker-compose services.

I have seen a lot of answers saying containers are accessible by their service name from inside other containers, but also a lot of tutorials that simply use localhost with the published port.

So I am just trying to gain clarity on when to use which, and why it works the way it does.

My sample application is: https://github.com/daniil/full-stack-js-docker-tutorial

In it, I have an NGINX server that routes requests to both the ui and api services, but after the fact I realized that inside of my React container (3000:3000) I can actually just reach the Express container (5050:5050) by making an axios request to http://localhost:5050.

But at the same time, if I try to connect to my MySQL container (9906:3306) via localhost, it doesn't work; I have to use db as the host, i.e. the service name.

Can someone please help me understand how it all works:

  • When can I use http://localhost:SERVICE_PORT, does it work inside React service because it's a browser request? ie: axios
  • How come I can't use http://api:5050 inside of React / axios request, is it because there is no host resolution for that?
  • How come I can't use http://localhost:9906|3306 to connect to my db service?
  • What is the purpose or benefit of NGINX reverse proxy to tie client and api together, if you actually don't need to have anything in between since localhost seems to work?
  • If containers are supposed to be isolated, why is it that localhost:5050 from within my React container still reaches the API server running on 5050 in a different container?
  • Other general rules that can help me understand how cross-container communication works

CodePudding user response:

The important detail here is that your React application isn't actually running in a container. It's running in the end user's browser, which is outside Docker space, and so doesn't have access to Docker's internal networking.

Say you have a typical application:

version: '3.8'
services:
  frontend: { ... }
  backend: { ... }
  database: { ... }
  proxy: { ... }

When one container calls another directly, use the Compose service name and the default port of the service. The backend container might be configured with database:5432 as its database URL; an Nginx proxy might be configured to proxy_pass http://frontend:3000. ports: aren't required for this and are ignored if they're present. This works out of the box without needing to specify networks: or container_name:, and for most simple applications you can safely omit both options.
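As a sketch of that first rule, the backend's database URL would be built from the service name and the container's own port; the postgres:// scheme, the app database name, and the databaseUrl helper here are illustrative, not part of the files above:

```javascript
// Sketch: container-to-container connections use the Compose service name
// ("database") and the port the database listens on inside its container
// (5432). Any host-published port has no effect on this path.
function databaseUrl({ host = 'database', port = 5432, name = 'app' } = {}) {
  return `postgres://${host}:${port}/${name}`;
}

console.log(databaseUrl()); // postgres://database:5432/app
```

The same rule explains the question's MySQL case: from the backend container the host is db and the port is 3306, regardless of the 9906 published to the host.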

When the browser application calls a container, use the host's DNS name or IP address and the first ports: number for that container.

When the browser application calls a container, and you're absolutely positive the browser is on the same host as the container, in this case only, you can use http://localhost:12345, again matching the first ports: number for the target container.
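Applied to the question's sample app, where the api service publishes 5050:5050, the browser-side base URL might be built like this (the apiBase helper is illustrative; in a real page the hostname would come from window.location.hostname):

```javascript
// Sketch: from the browser, only published ports are reachable, so the URL
// uses the host's name and the first number of the "5050:5050" mapping.
function apiBase(hostname) {
  return `http://${hostname}:5050`;
}

console.log(apiBase('localhost')); // http://localhost:5050
```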

What is the purpose or benefit of NGINX reverse proxy to tie client and api together?

To avoid actually needing to know the host name. Say your Nginx configuration looks like

location / {
  proxy_pass http://frontend:3000;
}
location /api/ {
  proxy_pass http://backend:3000/;
}

Then the browser application can just make an HTTP GET request to /api without needing to know the server's host name at all; it will use the same host name as the current page's URL.

This also means you can avoid publishing the other involved containers directly, or even possibly have multiple back-end containers for different purposes.

A complete but minimal setup could look like:

version: '3.8'
services:
  frontend:
    build: ./frontend
  backend:
    build: ./backend
    environment:
      PGHOST: database
      # PGUSER, PGPASSWORD
  database:
    image: 'postgres:14'
    volumes:
      - 'pgdata:/var/lib/postgresql/data'
    environment: {} # POSTGRES_USER, POSTGRES_PASSWORD
  proxy:
    image: 'nginx:1.21'
    volumes:
      - './default.conf:/etc/nginx/conf.d/default.conf'
    ports:
      - '12345:80'
volumes:
  pgdata:

With this setup and the Nginx configuration shown above, a browser call to http://localhost:12345 would retrieve the main application. If the browser application then requested /api/foo, that would be resolved to http://localhost:12345/api/foo, which would be proxied to http://backend:3000/foo within Docker space.
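The resolution step in that last paragraph is exactly what the browser does with a relative request path, and it can be sketched with the standard URL class (available in both browsers and Node):

```javascript
// The browser resolves a relative path against the current page's origin,
// so the client code never hard-codes a host name.
const page = 'http://localhost:12345/';
const resolved = new URL('/api/foo', page).href;
console.log(resolved); // http://localhost:12345/api/foo
```

From there, Nginx (not the browser) handles the hop into Docker space and forwards the request to the backend service.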
