I'm running nginx in Docker, and it currently serves a webpage over SSL at, let's say, https://example.com. I've now created another set of containers that provide their own web server, available locally on port 8080, and I want to reach it at https://example.com/new_service.
I've tried adding a simple proxy_pass in a /new_service/ location, but I get a 502 Bad Gateway
error, and the nginx logs show the following:
2022/04/12 22:27:12 [error] 32#32: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 8.8.8.8, server: example.com, request: "GET /new_service HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"
2022/04/12 22:27:12 [warn] 32#32: *19 upstream server temporarily disabled while connecting to upstream, client: 8.8.8.8, server: example.com, request: "GET /new_service/ HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"
2022/04/12 22:27:12 [error] 32#32: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 8.8.8.8, server: example.com, request: "GET /new_service/ HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"
2022/04/12 22:27:12 [warn] 32#32: *19 upstream server temporarily disabled while connecting to upstream, client: 8.8.8.8, server: example.com, request: "GET /new_service/ HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"
8.8.8.8 - - [12/Apr/2022:22:27:12 +0000] "GET /new_service/ HTTP/1.1" 502 157 "-" "My Browser" "-"
My current configuration is:
server {
    listen 443;
    server_name example.com;

    ssl_certificate /etc/nginx/certs/example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/example.com/privkey.pem;

    root /var/www/html/;
    client_max_body_size 1000M; # set max upload size
    fastcgi_buffers 64 4K;
    index index.php;
    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;
    add_header Strict-Transport-Security "max-age=15552000; includeSubdomains; ";

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
        deny all;
    }
    location ~ /(conf|bin|inc)/ {
        deny all;
    }
    location ~ /data/ {
        internal;
    }
    location /new_service/ {
        rewrite ^/new_service/?(.*) /$1 break;
        proxy_pass http://localhost:8080/;
    }
    location / {
        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
        rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
        rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;
        try_files $uri $uri/ index.php;
    }
    location ~ ^(.+?\.php)(/.*)?$ {
        try_files $1 =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$1;
        fastcgi_param PATH_INFO $2;
        fastcgi_param HTTPS on;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass php:9000;
        # Or use a unix socket: 'fastcgi_pass unix:/var/run/php5-fpm.sock;'
        #fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }
    location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
        expires 30d;
        # Optional: don't log access to assets
        access_log off;
    }
}
I imagine it's common practice to use nginx to route different locations to different local containers, but I haven't been able to find good guidance on this. Any insight is greatly appreciated.
CodePudding user response:
It sounds to me like the new Docker container isn't letting you through its firewall, or you haven't published its ports to the host.
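If it's the latter, publishing the port when starting the container would look something like this (the image name is a placeholder):

```
# Map host port 8080 to the container's port 8080
docker run -d -p 8080:8080 my/new-service
```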
CodePudding user response:
Please share your Docker config for a tailored answer.
My guess: if your containers use the default bridge network rather than the host network, then localhost inside the nginx container points to the nginx container itself, not to your host system. Therefore use proxy_pass http://DNS-NAME:8080/
or proxy_pass http://DOCKER-CONTAINER-IP:8080/
to reach the container over the Docker network. Use docker inspect CONTAINER
to determine these:
...
"NetworkSettings": {
"Networks": {
"NAME": {
"Aliases": [
"c4675dda79be"
],
"IPAddress": "172.18.0.2",
}
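To pull just the IP address out of that JSON, docker inspect also accepts a Go template via -f (CONTAINER is a placeholder for your container name or ID):

```
# Print the IP address for each network the container is attached to
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{println}}{{end}}' CONTAINER
```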
Aliases are the DNS names; by default this is the container ID. Additional aliases can be set via:
docker run --net-alias
docker network connect --alias
docker-compose -> the service name
...
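As a sketch of the docker-compose route (service, network, and image names here are made up): put nginx and the new service on the same network, and nginx can then reach the other container by its service name, assuming the service listens on port 8080 inside its container.

```
services:
  nginx:
    image: nginx
    ports:
      - "443:443"
    networks: [web]
  new_service:
    image: my/new-service   # hypothetical image
    networks: [web]
networks:
  web:
```

With this, the location block would use proxy_pass http://new_service:8080/; instead of localhost.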