Why is the Anderson queue lock not fair?


Anderson (1990) describes a pedagogically simple spin lock that uses an array of length N to implement a FIFO mechanism for mutual exclusion. In §7.5.1 of The Art of Multiprocessor Programming by Herlihy et al. (2nd ed., 2020), this lock is implemented as follows (essentially verbatim from the book):

import java.util.concurrent.atomic.AtomicInteger;

public class ALock {
    ThreadLocal<Integer> mySlotIndex = new ThreadLocal<>() {
        @Override protected Integer initialValue() { return 0; }
    };
    AtomicInteger tail;
    volatile boolean[] flag;
    int size;

    public ALock(int capacity) {
        size = capacity;
        tail = new AtomicInteger(0);
        flag = new boolean[capacity];
        flag[0] = true;
    }

    public void lock() {
        int slot = tail.getAndIncrement() % size;
        mySlotIndex.set(slot);
        while (!flag[slot]) {};
    }

    public void unlock() {
        int slot = mySlotIndex.get();
        flag[slot] = false;
        flag[(slot + 1) % size] = true;
    }
}

I have a simple Java test program that creates N threads, each of which repeatedly acquires the lock to increment a global counter and a per-thread counter. At the end of the test, I compare the global counter to the sum of the per-thread counters (to check correctness). Further, if the lock is fair, the per-thread counters should be roughly equal.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.IntStream;

public class Main {
    static long COUNT = 0;
    static int NUM_THREADS = 16;
//    static Lock LOCK = new ReentrantLock(true);
    static ALock LOCK = new ALock(NUM_THREADS);
    static long[] RUNS_PER_THREAD = new long[NUM_THREADS];
    static Map<Long, Integer> THREAD_IDS = new HashMap<>();

    public static void main(String[] args) {
        var threads = IntStream.range(0, NUM_THREADS).mapToObj(Main::makeWorker).toArray(Thread[]::new);
        for (int i = 0; i < threads.length; i++) THREAD_IDS.put(threads[i].getId(), i);
        for (var thread : threads) thread.start();
        try { Thread.sleep(300L); } catch (InterruptedException e) {}   // let the workers run
        for (var thread : threads) thread.interrupt();
        try { Thread.sleep(100L); } catch (InterruptedException e) {}   // let them wind down
        for (int i = 0; i < NUM_THREADS; i++) System.out.printf("Thread %d:\t%d%n", i, RUNS_PER_THREAD[i]);
        System.out.println("Counted up to: \t\t\t" + COUNT);
        System.out.println("Sum for all threads: \t" + Arrays.stream(RUNS_PER_THREAD).sum());
    }

    private static Thread makeWorker(int i) {
        return new Thread(() -> {
            while (true) {
                if (Thread.interrupted()) return;
                LOCK.lock();
                try {
                    COUNT++;
                    var id = THREAD_IDS.get(Thread.currentThread().getId());
                    RUNS_PER_THREAD[id]++;
                } finally {
                    LOCK.unlock();
                }
            }
        });
    }
}

If I run the test with a fair ReentrantLock (the commented-out line), the lock appears to be both correct and fair. If I use the Anderson queue lock as given in Herlihy, the lock appears to be correct, but the first few threads acquire the lock an order of magnitude more often than the last few. How is this possible, given the lock's implementation?

Environment: MacBook Pro with an M1 Max (10 cores), running Java 17.

CodePudding user response:

The Anderson lock implements a FIFO policy: threads acquire the lock in the order in which they request it (provided the lock's capacity is not exceeded). So if some threads in your test acquire the lock many more times than others, that is not because the Anderson lock is unfair, but because the lock requests themselves were unevenly distributed.
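
If you want to see the FIFO hand-off directly, here is a minimal sketch against the ALock above (the harness and its names are mine, and the staggered sleeps make the enqueue order very likely rather than guaranteed):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class FifoTrace {
    public static void main(String[] args) throws InterruptedException {
        var lock = new ALock(4);
        Queue<String> order = new ConcurrentLinkedQueue<>();
        lock.lock();                          // main thread takes slot 0
        var workers = new Thread[3];
        for (int i = 0; i < workers.length; i++) {
            var name = "T" + i;
            workers[i] = new Thread(() -> {
                lock.lock();                  // workers queue up on slots 1, 2, 3
                order.add(name);
                lock.unlock();
            }, name);
            workers[i].start();
            Thread.sleep(50L);                // stagger the calls to lock()
        }
        lock.unlock();                        // release slot 0; the hand-off begins
        for (var w : workers) w.join();
        System.out.println(order);            // prints [T0, T1, T2]: arrival order
    }
}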

As a spinlock, the Anderson lock will be very fast to acquire when uncontested or only lightly contested. Your test threads do very little work while holding the lock. It is entirely possible, then, that the first threads out of the gate run up a large number of iterations very quickly before the full set of threads enter the queue for the lock and slow everyone down.
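
One way to test this hypothesis is to hold every worker at a latch and release them all at once, so the start-up transient disappears and all NUM_THREADS threads contend from the very first acquisition. A sketch of the change to the Main class above (the latch, and indexing RUNS_PER_THREAD directly by i, are my additions):

import java.util.concurrent.CountDownLatch;

    static final CountDownLatch START = new CountDownLatch(1);

    private static Thread makeWorker(int i) {
        return new Thread(() -> {
            // Wait here until main() releases all workers together.
            try { START.await(); } catch (InterruptedException e) { return; }
            while (true) {
                if (Thread.interrupted()) return;
                LOCK.lock();
                try {
                    COUNT++;
                    RUNS_PER_THREAD[i]++;   // the worker index is already at hand
                } finally {
                    LOCK.unlock();
                }
            }
        });
    }

    // in main(), right after the threads are started:
    //     START.countDown();   // level the starting line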

Bottom line: the Anderson lock is fair, but fairness is relative to the threads actually contending for the lock at any given moment, and that set is not necessarily even over the course of a test.

That a fair ReentrantLock does not show the same skew in iteration counts shows only that it is implemented differently (which you already knew), not that it is any fairer than the Anderson lock.
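
For reference, the fair variant of the test needs only the commented-out field and the corresponding imports; the constructor argument true selects the fair (arrival-order) acquisition policy:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

    static Lock LOCK = new ReentrantLock(true);   // true = fair mode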
