I had a recent outage on an Nginx/Rails application server. It turned out we were being bombarded by requests to a particular URL that takes a few seconds to load. It appears that a user was continually refreshing that page for a number of minutes - my guess is they accidentally rested some object on their keyboard in a way that triggered a constant stream of browser refreshes.
Regardless of the cause, I need to put protection in place against this kind of problem. Note that this is not static content - it's dynamic, user-specific content sitting behind authentication.
I've looked into using Cache-Control, but this appears to be a non-starter: in Chrome at least, refreshing a page within the same tab triggers a request regardless of the Cache-Control header (cf. "Is Chrome ignoring Cache-Control: max-age?" on Stack Overflow).
I believe the answer may be rate limiting. If so, I wouldn't be able to do it based on IP address, because many of our customers share the same one. However, I may be able to add a new header to identify a user and then apply rate limiting in Nginx based on that.
Does this sound like the way forward? This feels like it should be a fairly common problem!
CodePudding user response:
Nginx rate limiting is a fast configuration change if immediate mitigation is needed. As others have mentioned, caching would also be ideal when combined with this (see the sketch at the end of this answer).
http {
    # DoS mitigation - key on IP plus User-Agent so separate machines behind the
    # same NAT address are less likely to fall into one bucket
    # (limit_req_zone and upstream must sit at the http level, not inside server)
    limit_req_zone $host$binary_remote_addr$http_user_agent zone=rails_per_sec:10m rate=2r/s;
    upstream rails { ... }
    server {
        ...
        try_files $uri $uri/ @rails;
        location @rails {
            limit_req zone=rails_per_sec burst=10 nodelay;
            ...
        }
    }
}
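If you want the throttled requests to stand out, the same location can also set the rejection status and log level (limit_req_status and limit_req_log_level are standard ngx_http_limit_req_module directives; returning 429 rather than the default 503 is just a preference):
    location @rails {
        limit_req zone=rails_per_sec burst=10 nodelay;
        # Reject over-limit requests with 429 Too Many Requests instead of the default 503
        limit_req_status 429;
        # Log rejected requests at warn level so they are easy to spot
        limit_req_log_level warn;
        ...
    }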
The $http_authorization header or a unique cookie (e.g. $cookie_foo) could also be used to distinguish requests that would otherwise collide on the same IP/User-Agent values.
limit_req_zone $host$binary_remote_addr$http_authorization ...;
limit_req_zone $host$binary_remote_addr$cookie_foo ...;
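On the caching side, a short-TTL proxy cache ("micro-caching") in front of Rails can collapse a burst of identical refreshes into a single upstream request, even for authenticated pages, provided the cache key includes whatever identifies the user. This is only a sketch - the cache path, zone name, TTL and the _myapp_session cookie name are assumptions to adapt to your setup:
    http {
        proxy_cache_path /var/cache/nginx/rails levels=1:2 keys_zone=rails_cache:10m max_size=100m inactive=10s;
        server {
            location @rails {
                proxy_pass http://rails;
                proxy_cache rails_cache;
                # Include the session cookie so users never see each other's pages
                # ("_myapp_session" is a placeholder - use your app's cookie name)
                proxy_cache_key "$scheme$host$request_uri$cookie__myapp_session";
                # Cache successful responses for 1 second - enough to absorb a refresh storm
                proxy_cache_valid 200 1s;
                # Only one request per key hits Rails while the cache entry is being filled
                proxy_cache_lock on;
            }
        }
    }
Bear in mind that by default Nginx will not cache a response that carries a Set-Cookie header, so this helps most on pages where Rails does not touch the session; the rate limit above remains the more general protection.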