How to NGINX reverse proxy to a locally hosted docker-compose (on a server with SSH access)


I can't quite put my finger on how to get this working in its current state, and I'd love not to have to redo everything a different way if possible.

I have a DigitalOcean droplet running an NGINX server, as well as a docker-compose stack with a client, server, and DB image running inside.

What I envisioned was securing and routing traffic to the server through NGINX, which would proxy to the client app exposed from the compose stack.

So: (internet) -> DNS (NGINX) -> Client -> Server -> DB

This currently works when connecting to the ip:port exposed by Docker, because I've opened that port up using ufw on the droplet. Great, except it's HTTP, not HTTPS.

I can proxy from NGINX to that ip:port, which is kind of great. As I understand it, traffic goes from the internet, to NGINX, back out through the public ip:port, to the client app, but it's working.

Now I've secured it, set up DNS, and routed it through NGINX, and I'm getting an "Invalid Host Header" response.

It's an Angular app, so I can disable the host header check and it will probably be groovy, but my next step would be closing off the port I've exposed so that all traffic has to go through the NGINX proxy. The problem is that I think even the proxy is using the exposed port on the droplet to route traffic there.
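For reference, since it's an Angular app, the "Invalid Host Header" normally comes from the dev server (ng serve / webpack-dev-server) rejecting unknown hostnames. If that's what the client image is running, the check can be relaxed with a CLI flag. The command below is only a guess at how the image starts the app, and the exact flag depends on the Angular CLI version:

services:
  client:
    # Assumption: the image starts the Angular dev server, which is what
    # produces "Invalid Host Header". --disable-host-check (Angular CLI)
    # relaxes that check; adjust to however the container actually launches
    # the app.
    command: ng serve --host 0.0.0.0 --disable-host-check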

The ASK:

Can I route traffic from NGINX to a docker-compose address within the server, instead of using the server's public ip and port combo to reach the client site, so that I can close the port I've exposed?
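For example, could I just publish the client port on the loopback interface, something like the sketch below (host and container ports are placeholders), so the host NGINX can still proxy to 127.0.0.1 while nothing outside the droplet can reach the port and the ufw rule can be removed?

services:
  client:
    ports:
      # Publish on 127.0.0.1 only: the host NGINX can still proxy_pass to
      # http://127.0.0.1:8080, but the port is not reachable from the
      # internet, so the ufw allow rule can be dropped.
      - "127.0.0.1:8080:80"   # placeholder host/container ports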

Alternatively, do I just need to run an NGINX container within the compose stack so I can use compose networking to manage the traffic? I'd prefer not to do this, as I'll have to change things on the server and it's feeling a little fragile to me.

TLDR:

  • Working
    • Internet -> http://ip:port (NGINX) -> http://ip:port docker-compose (client port) == App served!
    • Internet -> http://ip:port docker-compose (client port) == App served
  • Not working
    • Internet -> https://DNS (ports 80/443, NGINX) -> http://ip:port docker-compose (client port) == "Invalid Host Header"
  • Want to work
    • Internet -> https://DNS (ports 80/443, NGINX) -> local compose address, not over the public ip:port == App served

sites-enabled # this is where I've done most of the manual configuration; this conf and many other nginx files live under /etc/nginx

server {

  listen 80;
  listen 443 ssl;
  server_name <dns>.net;

  ssl_certificate <server filepath>;
  ssl_certificate_key <filepath>;
  include <letsencrypt nginx conf path>;
  ssl_dhparam <letsencrypt pem path>;

  index index.html index.htm index.nginx-debian.html;

  location / {
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Host $host;
    proxy_set_header Connection 'upgrade';
    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;
    proxy_pass http://<ip>:<port>;
  }

}

server {
  if ($host = <dns>.net) {
    return 301 https://$host$request_uri;
  }

  listen 80;
  server_name <dns>.net;

  listen 443 ssl default_server;
  server_name _;

  return 404;
}

docker-compose

networks:
  client:
  server:

services:
  client:
    container_name: clientName
    image: "repoImage"
    ports:
      - 1:1
    environment:
      - VIRTUAL_HOST=SERVERIPADDRESS
      - LETSENCRYPT_HOST=SERVERIPADDRESS
    networks:
      - client
    restart: always
 
  server:
    container_name: serverName
    networks:
      - client
      - server
    image: "repoImage"
    command: npm run startProd
    ports:
      - 2:2
    restart: always
    env_file:
      - <envfilepath>
  
  database:
    container_name: dbInstance
    networks:
      - server
    restart: always
    image: dbimage
    ports:
      - 3:3
      - 4:4
    volumes:
      - many
    environment:
      - envVars
    

CodePudding user response:

It tends to simplify things if you use docker-compose for the entire system, especially if it's all running on the same machine. That way, you can leverage Docker's networking and minimize the number of ports you need to expose.

For example, you could set it up so that

  • NGINX is the "front door" and directs traffic to both client and server based on hostname or path (a hostname-based variant is sketched after the path-based config below)
  • Services reference each other by service name and port

docker-compose.yml

networks:
  db-net:
  proxy-net:

services:
  nginx:
    networks:
      proxy-net:
    ports:
      # These are the only ports that will be open on your machine
      - 80:80
      - 443:443

  database:
    networks:
      db-net:
    # no ports exposed
    command: <run on port 3306>

  server:
    networks:
      db-net:
      proxy-net:
    # no ports exposed
    command: <run on port 3000>
    environment:
      - DATABASE_URL=database:3306

  client:
    networks:
      proxy-net:
    # no ports exposed
    command: <run on port 3000>
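The nginx service above also needs an image and its configuration (and certificates, if you terminate TLS there) mounted in. A rough sketch, where the image tag and paths are placeholders for whatever you actually use:

services:
  nginx:
    image: nginx:stable          # any recent nginx image works here
    networks:
      proxy-net:
    ports:
      - 80:80
      - 443:443
    volumes:
      # Placeholders: point these at your real conf and certs.
      # The official image loads *.conf files from /etc/nginx/conf.d/.
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro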

nginx.conf

server {
  ...
  location /client {
    ...
    proxy_pass http://client:3000;
  }
  ...
  location /server {
    ...
    proxy_pass http://server:3000;
  }
  ...
}
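If you'd rather split by hostname than by path, the same idea looks roughly like this; the subdomains are placeholders and each one needs a DNS record pointing at the droplet:

server {
  ...
  server_name app.<dns>.net;
  location / {
    ...
    proxy_pass http://client:3000;
  }
}

server {
  ...
  server_name api.<dns>.net;
  location / {
    ...
    proxy_pass http://server:3000;
  }
}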

Let me know if you have any questions or if I misunderstood your question.
