I was reading an interesting article about memory barriers and their role in JVM concurrency, and the example implementation of Dekker's algorithm caught my attention:
volatile boolean intentFirst = false;
volatile boolean intentSecond = false;
volatile int turn = 0;
// code run by first thread // code run by second thread
1 intentFirst = true; intentSecond = true;
2
3 while (intentSecond) { while (intentFirst) { // volatile read
4 if (turn != 0) { if (turn != 1) { // volatile read
5 intentFirst = false; intentSecond = false;
6 while (turn != 0) {} while (turn != 1) {}
7 intentFirst = true; intentSecond = true;
8 } }
9 } }
10 criticalSection(); criticalSection();
11
12 turn = 1; turn = 0; // volatile write
13 intentFirst = false; intentSecond = false; // volatile write
The article mentions that since volatiles are sequentially consistent, the critical section is bound to be executed by one thread at a time, which checks out with the happens-before guarantee. However, does that still hold if the two threads execute the same logic in a loop? My understanding is that the underlying OS scheduler may decide to pause the second thread in a subsequent iteration just before line 7 is executed and let the first thread enter the critical section, and at the same moment, the OS resumes the second thread, which then enters the critical section simultaneously. Is my understanding correct, and is this example given with the idea that the code is executed only once? If so, I'd assume the answer to my question is "no", since volatiles are used only for memory visibility guarantees.
CodePudding user response:
My understanding is that the underlying OS scheduler may decide to pause the second thread in a subsequent iteration just before line 7 is executed and let the first thread enter the critical section, and at the same moment, the OS resumes the second thread, which then enters the critical section simultaneously.
This will not happen. If the second thread gets suspended while the first thread is in the critical section, intentFirst is still true, since it is only set to false after the first thread leaves the critical section. So when the second thread wakes up, intentSecond is set to true, but the second thread will be stuck in the while (intentFirst) loop and will not enter the critical section until the first thread has exited.
IMHO the most interesting part of Dekker's algorithm with respect to happens-before is here:
intentSecond = true; // volatile write
while (intentFirst) { // volatile read
Modern processors have store buffers, which can lead to an older store being reordered with a newer load to a different address. That is why a [StoreLoad] barrier is needed between the earlier store and the later load to ensure that they are sequentially consistent.
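A rough sketch of where that [StoreLoad] barrier sits in the entry protocol, using plain (non-volatile) fields plus an explicit VarHandle.fullFence() (Java 9+) instead of volatile; the class and method names are illustrative, not from the article:

```java
import java.lang.invoke.VarHandle;

class DekkerEntry {
    // Plain fields: without volatile, the explicit fence below supplies the ordering.
    static boolean intentFirst, intentSecond;

    static void enterFirst() {
        intentFirst = true;     // earlier store, may sit in the store buffer
        VarHandle.fullFence();  // [StoreLoad]: store must be globally visible before the next load
        while (intentSecond) {  // later load now cannot pass the earlier store
            // turn handling / back-off omitted for brevity
        }
        // critical section would follow here
    }
}
```

Without the fence, the store to intentFirst could be delayed in the store buffer while the load of intentSecond executes early, letting both threads read false and both enter the critical section.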
CodePudding user response:
The JVM ensures that the compiled code on the platform makes certain guarantees for volatile variables, in terms of reordering of statements (on several levels) and of visibility between threads. On most platforms that have a JVM implementation, this can be done without involving the operating system, and no scheduling is involved.
Guarantees about reordering and memory barriers are not themselves an exclusion mechanism of the kind needed to enter a critical section. Such mechanisms can be implemented in many different ways, for example by Dekker's algorithm. Dekker's algorithm is a so-called busy-waiting algorithm: it keeps the CPU working for as long as the thread is not allowed into the critical section. On modern hardware, simpler algorithms are possible that use CAS (compare-and-set) operations, for example:
// declare an atomic boolean (package java.util.concurrent.atomic)
AtomicBoolean busyFlag = new AtomicBoolean(false);
// the part with the critical section, same for all threads
while (!busyFlag.compareAndSet(false, true)) { /* busy wait */ }
try {
    criticalSection();
} finally {
    busyFlag.set(false);
}
In this case, compareAndSet ensures that the flag is atomically set to true if and only if it was false, and returns true when that happens.
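Put together, a runnable sketch of this spin lock guarding a shared counter (the class and counter are my own additions for illustration):

```java
import java.util.concurrent.atomic.AtomicBoolean;

class SpinLockDemo {
    static final AtomicBoolean busyFlag = new AtomicBoolean(false);
    static int counter = 0; // shared state guarded by busyFlag

    static void increment() {
        // Spin until we atomically flip the flag from false to true.
        while (!busyFlag.compareAndSet(false, true)) { /* busy wait */ }
        try {
            counter++; // the critical section
        } finally {
            busyFlag.set(false); // always release, even on exception
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter); // 40000: no increments are lost
    }
}
```

Without the CAS loop, plain counter++ from four threads would race and lose updates.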
However, just like Dekker's algorithm, this is still a busy-waiting mechanism that can cost a lot of CPU if you are unlucky. A synchronized block or a Lock will ask the operating system to suspend the current thread when the lock cannot be obtained. But that has nothing to do with volatile.