Go rate limit http.client via RoundTrip exceeds limit and produces fatal panic

Time: 11-03

My goal is to rate limit at 600 requests per minute, with the allowance resetting at the next minute. My intent was to do this via the http.Client by setting a RoundTripper that calls limiter.Wait(), so that I can set different limits for different http.Clients and have the limiting handled in the transport rather than adding complexity to my code elsewhere.

The issue is that the rate limit is not honoured: I still exceed the number of allowed requests, and setting a timeout produces a fatal panic after the error net/http: request canceled (Client.Timeout exceeded while awaiting headers).

I have created a bare-bones main.go that replicates the issue. Note that the 64000-iteration loop is a realistic scenario for me.

Update: setting ratelimiter: rate.NewLimiter(10, 10) still somehow exceeds the 600-per-minute limit and produces context deadline exceeded errors with the timeout set.

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "sync"
    "time"

    "golang.org/x/time/rate"
)

var client http.Client

// ThrottledTransport is a rate-limited http.RoundTripper
type ThrottledTransport struct {
    roundTripperWrap http.RoundTripper
    ratelimiter      *rate.Limiter
}

func (c *ThrottledTransport) RoundTrip(r *http.Request) (*http.Response, error) {
    err := c.ratelimiter.Wait(r.Context()) // This is a blocking call. Honors the rate limit
    if err != nil {
        return nil, err
    }
    return c.roundTripperWrap.RoundTrip(r)
}

// NewRateLimitedTransport wraps transportWrap with a rate limiter
func NewRateLimitedTransport(transportWrap http.RoundTripper) http.RoundTripper {
    return &ThrottledTransport{
        roundTripperWrap: transportWrap,
        //ratelimiter:      rate.NewLimiter(rate.Every(limitPeriod), requestCount),
        ratelimiter: rate.NewLimiter(10, 10),
    }
}

func main() {
    concurrency := 20
    var ch = make(chan int, concurrency)
    var wg sync.WaitGroup

    wg.Add(concurrency)
    for i := 0; i < concurrency; i++ {
        go func() {
            for {
                a, ok := <-ch
                if !ok { // if there is nothing to do and the channel has been closed then end the goroutine
                    wg.Done()
                    return
                }
                resp, err := client.Get("https://api.guildwars2.com/v2/items/12452")
                if err != nil {
                    fmt.Println(err)
                }
                body, err := ioutil.ReadAll(resp.Body)
                if err != nil {
                    fmt.Println(err)
                }
                fmt.Println(a, ":", string(body[4:29]))
            }
        }()
    }
    client = http.Client{}
    client.Timeout = time.Second * 10

    // Rate limits 600 requests per 60 seconds via RoundTripper
    transport := NewRateLimitedTransport(http.DefaultTransport)
    client.Transport = transport

    for i := 0; i < 64000; i++ {
        ch <- i // add i to the queue
    }

    wg.Wait()
    fmt.Println("done")
}

CodePudding user response:

rate.NewLimiter(rate.Every(60*time.Second), 600) is not what you want.

According to https://pkg.go.dev/golang.org/x/time/rate#Limiter:

A Limiter controls how frequently events are allowed to happen. It implements a "token bucket" of size b, initially full and refilled at rate r tokens per second. Informally, in any large enough time interval, the Limiter limits the rate to r tokens per second, with a maximum burst size of b events.


func NewLimiter(r Limit, b int) *Limiter

NewLimiter returns a new Limiter that allows events up to rate r and permits bursts of at most b tokens.


func Every(interval time.Duration) Limit

Every converts a minimum time interval between events to a Limit.

rate.Every(60*time.Second) means that the bucket is refilled with 1 token every 60s; that is, the rate is 1/60 tokens per second.

Most of the time, "600 requests per minute" means that 600 requests are allowed at the beginning of the minute, and the allowance is reset to 600 all at once at the next minute. In my opinion, golang.org/x/time/rate does not fit this fixed-window use case very well. rate.NewLimiter(10, 10) (10 tokens per second with a burst of 10, i.e. 600 per minute on average) is a safe choice.

CodePudding user response:

Here is a playground example, where the roundTripper mocks the response from the guildwars API:

https://go.dev/play/p/FTw6IGo_moP

The only meaningful fixes to your code were:

  • don't try to read resp.Body when the request returned an error (perhaps that is the cause of your panics?),
  • close the channel after the 64k-iteration loop, so the workers can exit and wg.Wait() returns.

The short answer is: with this setup (no network issues, no dependence on the behavior of the actual API server), it works:

  • the rate limiter works as expected,
  • the requests do not time out
# excerpt from the output:
...
235 : "name": "Omnomberry Bar"
236 : "name": "Omnomberry Bar"
237 : "name": "Omnomberry Bar"
238 : "name": "Omnomberry Bar"
239 : "name": "Omnomberry Bar"
--- 60 reqs/sec
240 : "name": "Omnomberry Bar"
241 : "name": "Omnomberry Bar"
242 : "name": "Omnomberry Bar"
...

Perhaps your issue comes from contacting the actual server.

If it starts dropping connections without warning, or returns responses with longer and longer delays, that could explain your timeout issues.

Try measuring the actual time a RoundTrip stays blocked on ratelimiter.Wait(), and the actual time taken for a request/response round trip with the server.


I ran my examples with shorter bursts; if your program runs long enough (64k requests at 10 req/s is still 6400s, which is close to 2h ...), you may experience runtime issues:

since the transport checks the rate limit after the timeout on an individual request has started counting, if the runtime chooses (for some bad reason) to schedule one of the 20 workers waiting on rate.Wait(...) only after 10 seconds, then you would hit your context deadline error.

(note: I have no fact to back this claim, just hypothesizing here)

The simplest workaround for that would be:

  • move the rate limiter outside of the transport,
  • check ratelimiter.Wait(...) right before client.Get(...).

Another option to test:

  • don't set client.Timeout,
  • have your transport set a timeout on the request after it has passed the ratelimiter.Wait(...) guard.