Running a maximum of two goroutines continuously forever


I'm trying to run a function concurrently. It makes a call to my DB that may take 2-10 seconds. I would like the loop to move on and start the next goroutine as soon as one has finished, even if the other one is still processing, but I only ever want a maximum of 2 processing at a time, and I want this to happen indefinitely. I feel like I'm almost there, but the WaitGroup forces both goroutines to complete before the loop can start another iteration.

const ROUTINES = 2

for {
    var wg sync.WaitGroup
    _, err := db.Exec(`Random DB Call`)
    if err != nil {
        panic(err)
    }
    ch := createRoutines(db, &wg)
    wg.Add(ROUTINES)
    for i := 1; i <= ROUTINES; i++ {
        ch <- i
        time.Sleep(2 * time.Second)
    }

    close(ch)
    wg.Wait()
}


func createRoutines(db *sqlx.DB, wg *sync.WaitGroup) chan int {
    var ch = make(chan int, 5)
    for i := 0; i < ROUTINES; i++ {
        go func(db *sqlx.DB) {
            defer wg.Done()
            for {
                _, ok := <-ch
                if !ok { 
                    return
                }
                doStuff(db) 

            }
        }(db)

    }
    return ch
}

CodePudding user response:

This adds an external dependency, but consider this implementation:

package main

import (
    "context"
    "database/sql"
    "log"

    "github.com/MicahParks/ctxerrpool"
)

func main() {

    // Create a pool of 2 workers for database queries. Log any errors.
    databasePool := ctxerrpool.New(2, func(_ ctxerrpool.Pool, err error) {
        log.Printf("Failed to execute database query.\nError: %s", err.Error())
    })

    // Get a list of queries to execute.
    queries := []string{
        "SELECT first_name, last_name FROM customers",
        "SELECT price FROM inventory WHERE sku='1234'",
        "other queries...",
    }

    // TODO Make a database connection.
    var db *sql.DB

    for _, query := range queries {

        // Intentionally shadow the looped variable for scope.
        query := query

        // Perform the query on a worker. If no worker is ready, it will block until one is.
        databasePool.AddWorkItem(context.TODO(), func(workCtx context.Context) (err error) {
            _, err = db.ExecContext(workCtx, query)
            return err
        })
    }

    // Wait for all workers to finish.
    databasePool.Wait()
}
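
The question also asks for the work to continue indefinitely rather than over a fixed slice of queries. A rough adaptation (a sketch only, reusing the databasePool and db variables from the example above and relying on the blocking behavior of AddWorkItem described there) would simply submit work in an endless loop:

    // Sketch: assumes databasePool and db from the example above, and that
    // AddWorkItem blocks until one of the 2 workers is free, as noted above.
    for {
        databasePool.AddWorkItem(context.TODO(), func(workCtx context.Context) (err error) {
            // At most 2 of these queries execute concurrently.
            _, err = db.ExecContext(workCtx, `Random DB Call`)
            return err
        })
    }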

CodePudding user response:

If you only need to have n goroutines running at the same time, you can use a buffered channel of size n as a semaphore: sending into it blocks the creation of new goroutines whenever there is no space left. Something like this:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {

    const ROUTINES = 2
    rand.Seed(time.Now().UnixNano())

    stopper := make(chan struct{}, ROUTINES)
    var counter int

    for {
        counter++
        stopper <- struct{}{}
        go func(c int) {
            fmt.Println("  Starting goroutine", c)
            time.Sleep(time.Duration(rand.Intn(3)) * time.Second)
            fmt.Println("- Stopping goroutine", c)
            <-stopper
        }(counter)
    }
}

In this example you can see that there are never more than ROUTINES goroutines alive at once, each living 0, 1 or 2 seconds. In the output you can also see that every time one goroutine ends, another one starts.
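
Applied back to the original code, a minimal sketch could look like the following. Note that the run function and the doStuff stub below are placeholders standing in for the question's own doStuff function and sqlx DB handle; the WaitGroup is dropped entirely because the semaphore channel already provides the back-pressure.

package main

import (
    "time"

    "github.com/jmoiron/sqlx"
)

const ROUTINES = 2

// doStuff is a stand-in for the long-running DB call from the question,
// which may take 2-10 seconds.
func doStuff(db *sqlx.DB) {
    _ = db
    time.Sleep(2 * time.Second)
}

func run(db *sqlx.DB) {
    // Buffered channel used as a semaphore: it holds at most ROUTINES tokens.
    sem := make(chan struct{}, ROUTINES)
    for {
        // Blocks whenever ROUTINES calls are already in flight.
        sem <- struct{}{}
        go func() {
            // Free the slot as soon as this call finishes, so the
            // outer loop can immediately start the next one.
            defer func() { <-sem }()
            doStuff(db)
        }()
    }
}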
