Sloppy Counter and Multiple Locks in same program

This is the implementation of the sloppy counter from OSTEP, which I am currently reading, and there are a few things I don't understand.

The following code is assumed to run on a 4-core CPU:

typedef struct __counter_t {
    int global;                     // global count
    pthread_mutex_t glock;          // global lock

    int local[NUMCPUS];             // local count (per CPU)
    pthread_mutex_t llock[NUMCPUS]; // ... and locks

    int threshold;                  // update frequency
} counter_t;



// init: record threshold, init locks, init values
// of all local counts and global count

void init(counter_t *c, int threshold) {
    c->threshold = threshold;

    c->global = 0;
    pthread_mutex_init(&c->glock, NULL);

    int i;
    for (i = 0; i < NUMCPUS; i++) {
        c->local[i] = 0;
        pthread_mutex_init(&c->llock[i], NULL);
    }
}

// update: usually, just grab local lock and update local amount
// once local count has risen by 'threshold', grab global
// lock and transfer local values to it
void update(counter_t *c, int threadID, int amt) {
    pthread_mutex_lock(&c->llock[threadID]);
    c->local[threadID] += amt; // assumes amt > 0
    if (c->local[threadID] >= c->threshold) { // transfer to global
        pthread_mutex_lock(&c->glock);
        c->global += c->local[threadID];
        pthread_mutex_unlock(&c->glock);
        c->local[threadID] = 0;
    }
    pthread_mutex_unlock(&c->llock[threadID]);
}

// get: just return global amount (which may not be perfect)
int get(counter_t *c) {
    pthread_mutex_lock(&c->glock);
    int val = c->global;
    pthread_mutex_unlock(&c->glock);
    return val; // only approximate!
}
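
For context, here is a minimal driver (my own sketch, not from the book) showing how I understand the counter being used, with one thread per core and an arbitrarily chosen threshold of 1024:

#include <pthread.h>
#include <stdio.h>

#define NUMCPUS 4
#define UPDATES 1000000

// counter_t, init(), update(), get() as above; NUMCPUS must be
// defined before the struct.

typedef struct { counter_t *c; int id; } targ_t;

void *worker(void *arg) {
    targ_t *a = (targ_t *)arg;
    for (int i = 0; i < UPDATES; i++)
        update(a->c, a->id, 1); // id doubles as the local[] index
    return NULL;
}

int main(void) {
    counter_t c;
    init(&c, 1024);

    pthread_t t[NUMCPUS];
    targ_t args[NUMCPUS];
    for (int i = 0; i < NUMCPUS; i++) {
        args[i] = (targ_t){ .c = &c, .id = i };
        pthread_create(&t[i], NULL, worker, &args[i]);
    }
    for (int i = 0; i < NUMCPUS; i++)
        pthread_join(t[i], NULL);

    // get() may under-report by up to NUMCPUS * (threshold - 1),
    // since each thread can leave a sub-threshold remainder in local[].
    printf("global = %d, true total = %d\n", get(&c), NUMCPUS * UPDATES);
    return 0;
}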

Why must there be a lock for each local counter in __counter_t? In update(), the thread's ID is passed in as an argument, so doesn't that mean only one thread will ever access local[threadID]? Even if a context switch happens, the other thread would only touch the local[threadID] that corresponds to its own thread ID. I don't understand why a thread must take a lock before accessing its own element of local[NUMCPUS], since each element of the array is never accessed by any thread other than its owner, and no two threads call update() with the same threadID.

Answer:

Why must there be a lock for each local counter

To quote the book:

In addition to these counters, there are also locks: one for each local counter¹, and one for the global counter.

What does the superscript 1 mean? It marks a footnote at the bottom of the page:

¹ We need the local locks because we assume there may be more than one thread on each core. If, instead, only one thread ran on each core, no local lock would be needed.
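
In other words, the design allows several threads to share one local counter. In OSTEP's full listing, update() computes int cpu = threadID % NUMCPUS; and indexes local[cpu], so with more threads than cores two threads land on the same slot, and the read-modify-write local[cpu] += amt can lose updates without the lock. Here is a minimal demonstration of that race (my own sketch, not from the book; the names local0, llock0, and hammer are made up):

#include <pthread.h>
#include <stdio.h>

#define NUMCPUS 4

// Suppose threads 0 and 4 both run: 0 % NUMCPUS == 4 % NUMCPUS == 0,
// so both update local[0]. Without llock[0] the increment can interleave:
//
//   thread A: reads local[0] == 5
//   thread B: reads local[0] == 5
//   thread A: writes local[0] = 5 + 1
//   thread B: writes local[0] = 5 + 1   // one increment lost
//
// The per-slot lock serializes the read-modify-write:

static int local0;
static pthread_mutex_t llock0 = PTHREAD_MUTEX_INITIALIZER;

void *hammer(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&llock0);   // comment these two lines out and
        local0 += 1;                   // the final count is usually wrong
        pthread_mutex_unlock(&llock0);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;                    // two threads sharing "slot 0"
    pthread_create(&a, NULL, hammer, NULL);
    pthread_create(&b, NULL, hammer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("local0 = %d (expected 2000000)\n", local0);
    return 0;
}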
