Hello everyone. I've been trying to run our Django app on an Amazon AWS EC2 instance. Everything works fine except for requests that take longer than 60s; those always fail with a 504 Gateway Time-out. I have configured all the needed ports in the Security Groups for my EC2 instance.
We are using nginx, and my nginx.conf looks like this:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 3600s;
    client_max_body_size 500M;
    client_body_timeout 86400s;
    client_header_timeout 86400s;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    proxy_connect_timeout 86400s;
    proxy_send_timeout 86400s;
    proxy_read_timeout 86400s;
    send_timeout 86400s;

    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
I tried playing with keepalive_timeout, as many posts suggest, but it doesn't help. A lot of posts also mention load balancer configuration, but I'm not using a load balancer at all, so that shouldn't be related.
How do I get my instance to process requests that take longer than 60s?
UPDATE:
Solved (as @jaygooby suggested) by tracking down the server { } block inside the Django container. I modified /etc/nginx/sites-enabled/mydjango and added proxy_read_timeout 60m; to my `location / { }` block.
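For anyone hitting the same thing, the relevant part of /etc/nginx/sites-enabled/mydjango now looks roughly like this (the server_name and the proxy_pass address are placeholders, not my real values):

server {
    listen 80;
    server_name example.com;               # placeholder

    location / {
        proxy_read_timeout 60m;            # let requests run for up to an hour
        proxy_pass http://127.0.0.1:8000;  # placeholder: wherever Django is listening
    }
}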
CodePudding user response:
I'd suggest tidying up your config first; you're setting some values without understanding what they're really for, then including another config file, then setting some more, and then including more config. Change it so you do your includes first and then override with the values you want.
I've annotated the various settings below so you can see what they're for; these are so often cargo-cult copied in the hope that they'll work. The only one that realistically needs to be anything like 86400s (24 hours!) is proxy_read_timeout.
I suspect what's happening is that one of the conf files in /etc/nginx/conf.d/*.conf has a 60s (or 1m) setting for proxy_read_timeout, and even though you were giving it a much larger value, you call the include again, which overwrites your setting.
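As a sketch of one way an included file can silently undo your http-level value (the file name and values here are made up): if it sets the timeout inside its own server block, that server-level setting takes precedence for every request that server handles, because nginx lets inner contexts override inherited values:

# /etc/nginx/conf.d/mysite.conf (hypothetical)
server {
    listen 80;
    server_name _;                         # hypothetical catch-all server
    proxy_read_timeout 60s;                # overrides the http-level value
                                           # for everything this server handles

    location / {
        proxy_pass http://127.0.0.1:8000;  # placeholder backend
    }
}

You can find the culprit with nginx -T, which dumps the fully merged configuration, e.g. nginx -T | grep -n proxy_read_timeout.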
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    client_max_body_size 500M;      # Don't allow uploads larger than 500MB

    client_body_timeout 86400s;     # Defines a timeout for reading the client
                                    # request body. The timeout is set only
                                    # for a period between two successive read
                                    # operations, not for the transmission of
                                    # the whole request body

    client_header_timeout 86400s;   # Defines a timeout for reading the client
                                    # request header; i.e. the client's initial
                                    # HEAD, GET or POST

    proxy_connect_timeout 86400s;   # Time to *open* a connection to the
                                    # proxied server before we give up

    proxy_send_timeout 86400s;      # Timeout for transmitting a request *to*
                                    # the proxied server

    proxy_read_timeout 86400s;      # Timeout for reading a response from the
                                    # proxied server - did it send back
                                    # anything before this expired?

    send_timeout 86400s;            # Timeout for sending a response to the
                                    # requesting client - note this isn't
                                    # proxy_send_timeout, but the time between
                                    # two successive write operations to the
                                    # requesting client (i.e. browser)
}
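Whichever file you end up editing, validate and reload afterwards: sudo nginx -t to check the syntax, then sudo systemctl reload nginx (or nginx -s reload) to apply the change without dropping live connections.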