NGINX 94134#94134 upstream prematurely closed connection while reading response header from upstream

Time: 08-08

When a user of my Django application, installed on Ubuntu (v18) on DigitalOcean, selects multiple files to upload to the server, they see the error below:

502 Bad Gateway
nginx/1.18.0 (Ubuntu)

I checked the application logs, which show the following error:

2022/08/05 11:13:38 [error] 94134#94134: *108 upstream prematurely closed connection while reading response header from upstream, client: 31**.***,***.23, server: 15**.***.***2, request: "POST /profil/galrtia/apartment-rent/1/ HTTP/1.1", upstream: "http://unix:/home/app/run/gunicorn.sock:/profil/galrtia/apartment-rent/1/", host: "1***.***.***2", referrer: "http://15**8.***.***82/profil/galrtia/apartment-rent/1/"

The error occurs after about 3-4 seconds of uploading files to the server. I tried increasing the file-size and timeout limits in my NGINX configuration, but with no result (I still see the error). My configuration looks like this:

upstream app_server {
    server unix:/home/app/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;

    # add here the ip address of your server
    # or a domain pointing to that ip (like example.com or www.example.com)
    server_name 1**.**.***.**2;

    keepalive_timeout 10000;
    client_max_body_size 10G;

    access_log /home/app/logs/nginx-access.log;
    error_log /home/app/logs/nginx-error.log;

    # Compression config
    gzip on;
    gzip_min_length 1000;
    gzip_buffers 4 32k;
    gzip_proxied any;
    gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
    gzip_vary on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    location /static/ {
        alias /home/app/static/;
    }

    location /media/ {
        alias /home/app/app/app/media/;
    }

    # checks for static file, if not found proxy to app
    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;
      proxy_pass http://app_server;
    }
}

If the user sends 2-3 photos, everything works fine. The problem appears with about 5 photos or more. This is a test application, which is why I have set such large limits. How can I avoid this NGINX error? How can I solve the problem so that it does not occur when transferring a large number of files?
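Since NGINX reports the *upstream* closing early, the Gunicorn side is the more informative log to check; by default Gunicorn may only log to the stderr of whatever supervises it. A sketch of running it with explicit logging — the log path and the `app.wsgi:application` module path are illustrative, not taken from the thread:

```shell
gunicorn --bind unix:/home/app/run/gunicorn.sock \
    --error-logfile /home/app/logs/gunicorn-error.log \
    --capture-output --log-level info \
    app.wsgi:application
```

A `[CRITICAL] WORKER TIMEOUT` line, or a worker exiting on a signal right before the 502, would point at the worker rather than at NGINX.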

CodePudding user response:

I find the upstream block unnecessary and a source of problems. I'm not sure it will solve your issue, but give it a try: delete the upstream block and your @proxy_to_app location entirely, and try it the old way:

server {
    listen 80;

    # add here the ip address of your server
    # or a domain pointing to that ip (like example.com or www.example.com)
    server_name 1**.**.***.**2;

    keepalive_timeout 10000;
    client_max_body_size 10G;

    access_log /home/app/logs/nginx-access.log;
    error_log /home/app/logs/nginx-error.log;

    # Compression config
    gzip on;
    gzip_min_length 1000;
    gzip_buffers 4 32k;
    gzip_proxied any;
    gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
    gzip_vary on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    location /static/ {
        alias /home/app/static/;
    }

    location /media/ {
        alias /home/app/app/app/media/;    
    }


    location / {
      proxy_pass http://unix:/home/app/run/gunicorn.sock;
    }
}
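If you go this route, it may be worth carrying the proxy headers from the original @proxy_to_app block over into the new `location /` (otherwise Django loses the real client IP and host). And since this error often means a timed-out or crashed worker, raising the proxy timeouts is a cheap thing to rule out. A sketch — the headers come from the original config, the timeout values are illustrative:

```nginx
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    # Defaults are 60s; slow multi-file uploads may need more (values illustrative):
    proxy_connect_timeout 75s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
    proxy_pass http://unix:/home/app/run/gunicorn.sock;
}
```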

One more note: in an upstream block, the server directive does not take a scheme, so your line

server unix:/home/app/run/gunicorn.sock fail_timeout=0;

is correct as written. The http:// prefix belongs only on proxy_pass (as in proxy_pass http://app_server; or proxy_pass http://unix:/home/app/run/gunicorn.sock;).

CodePudding user response:

Changing the droplet size on DigitalOcean solved my problem. My CPU was not maxed out, and the number of files transferred was within the limits of the original droplet size, so I don't know why this solved it.
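A plausible explanation (an assumption; nothing in the thread confirms it): the Gunicorn worker was being killed by the kernel's OOM killer while handling the multi-file upload, and the resized droplet simply has more RAM — a worker killed mid-request produces exactly this "upstream prematurely closed" error. On a small droplet, trimming Gunicorn's memory footprint can achieve the same; a sketch, where the worker count, timeout, and `app.wsgi:application` module path are illustrative:

```shell
# Fewer workers = less total memory; a longer timeout survives slow uploads.
gunicorn --workers 2 --timeout 120 \
    --bind unix:/home/app/run/gunicorn.sock \
    app.wsgi:application
```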
