I'm trying to cache requests using singleflight, which works out of the box.
I want to go one step further and retry failed (errored) requests for the same key. For that I'm calling group.Forget(key). But subsequent calls just seem to reuse the prior result and never retry.
package main

import (
	"context"
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"

	"golang.org/x/sync/singleflight"
)

type Result struct {
	v int
	k string
}

var group singleflight.Group

// see https://encore.dev/blog/advanced-go-concurrency
func main() {
	for k := 0; k <= 2; k++ {
		go doGroup(context.Background(), "sameKey")
	}
	<-time.Tick(5 * time.Second)
	for k := 0; k <= 3; k++ {
		go doGroup(context.Background(), "sameKey")
	}
	<-time.Tick(30 * time.Second)
}

func doGroup(ctx context.Context, key string) (*Result, error) {
	log.Println("Inside normal call")
	results, err, shared := group.Do(key, func() (interface{}, error) {
		r, e := doExpensive(ctx, key)
		// Forget the key on error, so that subsequent
		// calls will retry. This didn't work --
		// perhaps because of timing.
		if e != nil {
			group.Forget(key)
		}
		return r, e
	})
	fmt.Printf("Call to multiple callers: %v\n", shared)
	// does not retry if an error occurred
	if err != nil {
		wrapped := fmt.Errorf("error for key %s: %w", key, err)
		fmt.Printf("%s\n", wrapped.Error())
		return nil, wrapped
	}
	fmt.Printf("Results: %v\n", results)
	return results.(*Result), err
}

func doExpensive(ctx context.Context, key string) (*Result, error) {
	log.Printf("Inside Expensive function with key %s\n", key)
	<-time.Tick(time.Second * 10)
	dice := rand.Int31n(10)
	if true { // always fail, to simulate an errored request
		return nil, errors.New("operation failed")
	}
	<-time.Tick(time.Second * time.Duration(dice))
	return &Result{
		v: int(dice),
		k: key,
	}, nil
}
I've simulated waits between the calls to doGroup so that the key is actually forgotten before the second batch of calls. But the doExpensive function only ever seems to be called once.
A reproduction of my code can be found here
https://go.dev/play/p/psGjFTypU6C
CodePudding user response:
The issue here is a combination of timing and the behaviour of the Forget method. As stated in the documentation:

Forget tells the singleflight to forget about a key. Future calls to Do for this key will call the function rather than waiting for an earlier call to complete.

"Future" here means calls to group.Do made after the call to group.Forget. In your example, all the calls to group.Do happened before the call to group.Forget, so all of them received the result of the first, failed call. A possible approach is to trigger retries outside of the group.Do call. Something like this:
package main

import (
	"context"
	"errors"
	"log"
	"math/rand"
	"sync/atomic"
	"time"

	"golang.org/x/sync/singleflight"
)

type Result struct {
	v int
	k string
}

var group singleflight.Group

func main() {
	for k := 0; k <= 2; k++ {
		go doGroup(context.Background(), "sameKey")
	}
	<-time.Tick(5 * time.Second)
	for k := 0; k <= 3; k++ {
		go doGroup(context.Background(), "sameKey")
	}
	<-time.Tick(30 * time.Second)
}

func doGroup(ctx context.Context, key string) (*Result, error) {
	log.Println("Inside normal call")
	for {
		results, err, shared := group.Do(key, func() (interface{}, error) {
			return doExpensive(ctx, key)
		})
		if err != nil {
			log.Printf("Normal call error: %s. Will retry\n", err)
			continue
		}
		log.Printf("Normal call results: %v [shared=%v]\n", results, shared)
		return results.(*Result), err
	}
}

var returnedFirstErr atomic.Bool

func doExpensive(ctx context.Context, key string) (r *Result, e error) {
	log.Printf("Inside Expensive function with key %s\n", key)
	defer func() {
		log.Printf("Result of Expensive function: [%v, %s] for %s\n", r, e, key)
	}()
	<-time.Tick(time.Second * 10)
	dice := rand.Int31n(10)
	if !returnedFirstErr.Load() {
		returnedFirstErr.Store(true)
		return nil, errors.New("operation failed")
	}
	return &Result{
		v: int(dice),
		k: key,
	}, nil
}
Side question: are you sure the behaviour of singleflight is what you need? Maybe you should use sync.Once instead. singleflight only prevents multiple calls from happening at the same time, i.e. calls made later in time are still executed. With sync.Once, the call is done exactly once in the lifetime of the process.