Nginx Reverse Proxy with Port Forwarding Not Working


I'm having trouble accessing a locally hosted website. The idea is that a site running in a Docker container behind an nginx reverse proxy should be accessible from the internet.

  • I have a hostname with NoIP, let's call it stuff.ddns.net.
  • I've set up IP updates to NoIP DNS servers (i.e., stuff.ddns.net always points to my router).
  • My router forwards ports 80 and 443 to a static IP on my local network (a Linux machine).
  • I'm hosting an Apache Airflow web server in a Docker container on the aforementioned Linux machine, and I've set AIRFLOW__WEBSERVER__BASE_URL: 'https://stuff.ddns.net/airflow'.

When I try accessing stuff.ddns.net/airflow in my web browser, I get: Safari can't open the page "stuff.ddns.net/airflow" because Safari can't connect to the server "stuff.ddns.net".

Here is my nginx.conf:

# top-level http config for websocket headers
# If Upgrade is defined, Connection = upgrade
# If Upgrade is empty, Connection = close
events { 
    worker_connections 1024;
}

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream airflow {
        server localhost:8080;
    }

    server {
        listen [::]:80;
        server_name stuff.ddns.net;
        return 302 https://$host$request_uri;
    }

    server {
        listen [::]:443 ssl;

        server_name stuff.ddns.net;

        ssl_certificate /run/secrets/stuff_ddns_net_pem_chain;
        ssl_certificate_key /run/secrets/stuff_ddns_net_key;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /run/secrets/dhparam.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

        location /airflow/ {
            proxy_pass http://airflow;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
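
A note on the location block: proxy_pass points at the upstream name with no URI part, so the /airflow/ prefix is passed through to the webserver unchanged, which should match the AIRFLOW__WEBSERVER__BASE_URL above. For contrast, a trailing slash on proxy_pass would strip that prefix, roughly like this (not what I want):

        # For contrast only: the "/" URI replaces the matched "/airflow/"
        # prefix, so Airflow would see requests for "/" and the /airflow
        # base_url would no longer line up.
        location /airflow/ {
            proxy_pass http://airflow/;
        }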

Ideas?

EDIT: Here is a truncated docker-compose.yml (other Airflow components left out) for full clarity of the setup:

version: '3.7'

x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.4.0}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD: 'cat /run/secrets/sql_alchemy_conn'
    AIRFLOW__CELERY__RESULT_BACKEND_CMD: 'cat /run/secrets/result_backend'
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth'
    AIRFLOW__WEBSERVER__BASE_URL: 'https://stuff.ddns.net/airflow'
    AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX: 'True'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ./storage/airflow/dags:/opt/airflow/dags
    - ./storage/airflow/logs:/opt/airflow/logs
    - ./storage/airflow/plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-1000}:0"
  secrets:
    - sql_alchemy_conn
    - result_backend
    - machine_pass
  depends_on:
    &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy

x-stuff-common:
  &stuff-common
  restart: unless-stopped
  networks:
    - ${DOCKER_NETWORK:-stuff}

services:
  nginx:
    <<: *stuff-common
    container_name: stuff-nginx
    image: nginxproxy/nginx-proxy:alpine
    hostname: nginx
    ports:
      - ${PORT_NGINX:-80}:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
    secrets:
      - stuff_ddns_net_pem_chain
      - stuff_ddns_net_key
      - dhparam.pem

  airflow-webserver:
    <<: *stuff-common
    <<: *airflow-common
    container_name: stuff-airflow-webserver
    command: webserver
    ports:
      - ${PORT_UI_AIRFLOW:-8080}:8080
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:${PORT_UI_AIRFLOW:-8080}/airflow/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

networks:
  stuff:
    name: ${DOCKER_NETWORK:-stuff}

secrets:
  ... <truncated> 

Answer:

The solution here was threefold:

  • Ensure the nginx container is attached to the same Docker bridge network as all the other containers.
  • In the nginx.conf upstream declaration, replace localhost with the LAN IP address of the Docker host (inside the nginx container, localhost refers to the container itself, not the host; using the host's LAN IP works for me since it's statically assigned).
  • Add listen <PORT>; above the listen [::]:<PORT>; directives in nginx.conf (listen [::]:<PORT>; on its own only binds the IPv6 socket here, so IPv4 clients couldn't connect; the extra directive makes nginx listen on IPv4 as well).

Here is what the top part of the nginx.conf looks like now:

    upstream airflow {
        server 192.168.50.165:8080;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name stuff.ddns.net;
        return 302 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name stuff.ddns.net;

    .....
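
An alternative worth noting (a sketch only, assuming the webserver container really does end up on the same Docker network as nginx): the upstream can point at the compose service name instead of the host's LAN IP, which removes the dependency on a statically-assigned address:

    upstream airflow {
        # airflow-webserver is the compose service name; Docker's embedded DNS
        # resolves it on the shared network, and 8080 is the container port.
        server airflow-webserver:8080;
    }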