I found the following text while going through the Javadoc of ReentrantLock:
> fairness of locks does not guarantee fairness of thread scheduling. Thus, one of many threads using a fair lock may obtain it multiple times in succession while other active threads are not progressing and not currently holding the lock.
As per my understanding this means: if the OS scheduler schedules the same thread (which previously held the lock) and it tries to acquire the same lock again, Java would allow it to acquire the lock and won't obey the fairness parameter. Could someone please explain what the purpose of the fairness parameter is, then, and under what conditions one should use it?
I am wondering whether it is just like a priority value, which might influence the scheduler but cannot guarantee the thread execution order.
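For reference, this is how the parameter is set; the variable names here are just for illustration:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) {
        // Passing true requests the fair ordering policy;
        // the default (false) is the unfair policy.
        ReentrantLock fairLock = new ReentrantLock(true);
        ReentrantLock unfairLock = new ReentrantLock(); // same as new ReentrantLock(false)

        System.out.println(fairLock.isFair());   // true
        System.out.println(unfairLock.isFair()); // false
    }
}
```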
CodePudding user response:
> fairness of locks does not guarantee fairness of thread scheduling. Thus, one of many threads using a fair lock may obtain it multiple times in succession while other active threads are not progressing and not currently holding the lock.
I interpret "not progressing" to mean, "not progressing for reasons not related to the lock in question." I think they're trying to tell you that "fairness" only means anything when the lock is so heavily contested that there often are one or more threads awaiting their turn to lock it.
If thread T releases a "fair" lock that no other thread currently is awaiting, then "fairness" has no impact on which thread will get it next. That's just a straight-up race between the threads, as moderated by the OS scheduler.
It's only when multiple threads are waiting that a fair lock is supposed to "favor" the one that has been waiting the longest. In particular, I would hope that if some thread T releases a "fair" lock that other threads are awaiting, and then thread T immediately tries to lock it again, the lock() function would notice the other waiting threads and send T to the back of the queue.
But, I don't actually know how it is implemented in any particular JVM.
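The expected behavior can be sketched like this. The class name and iteration counts are arbitrary, and the interleaving is still up to the scheduler, so the sketch only asserts the deterministic total; it is not a proof of fairness:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicIntegerArray;
import java.util.concurrent.locks.ReentrantLock;

public class FairHandoffSketch {
    static final int THREADS = 3;
    static final int ITERATIONS = 1_000;

    // Runs THREADS workers that repeatedly lock/unlock a fair lock and
    // returns the total number of successful acquisitions.
    static int run() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(true); // fair ordering policy
        AtomicIntegerArray acquired = new AtomicIntegerArray(THREADS);
        CountDownLatch start = new CountDownLatch(1);
        Thread[] workers = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                try {
                    start.await();
                } catch (InterruptedException e) {
                    return;
                }
                for (int i = 0; i < ITERATIONS; i++) {
                    lock.lock();        // a fair lock queues us behind longer-waiting threads
                    try {
                        acquired.incrementAndGet(id);
                    } finally {
                        lock.unlock();  // immediately re-locking should not barge past waiters
                    }
                }
            });
            workers[t].start();
        }
        start.countDown();
        for (Thread w : workers) w.join();
        int total = 0;
        for (int t = 0; t < THREADS; t++) {
            total += acquired.get(t);
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("total acquisitions: " + run()); // always 3000; the order is unspecified
    }
}
```

With the fair lock, each worker's count tends to stay close to ITERATIONS-per-thread rather than one thread hogging the lock, but only the total is guaranteed.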
P.S., IMO, "fairness" is like a bandage to stop the bleeding from a compound fracture. If your program has a lock that is so heavily contested that "fairness" would make any difference, then that's a serious design flaw.
The same Javadoc also says,
> Programs using fair locks accessed by many threads may display lower overall throughput (i.e., are slower; often much slower) than those using the default setting.
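You can see that cost in a rough micro-benchmark sketch like the one below. It is deliberately naive (no warm-up control, results vary by machine and JVM), and the class name and counts are made up, but it times the same contended workload under both policies:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessThroughputSketch {
    static final int THREADS = 4;
    static final int ITERATIONS = 10_000;

    // Times THREADS workers each doing ITERATIONS lock/unlock cycles.
    static long timeMillis(ReentrantLock lock) throws InterruptedException {
        Thread[] workers = new Thread[THREADS];
        long start = System.nanoTime();
        for (int t = 0; t < THREADS; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < ITERATIONS; i++) {
                    lock.lock();
                    try {
                        // trivial critical section
                    } finally {
                        lock.unlock();
                    }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("unfair: " + timeMillis(new ReentrantLock(false)) + " ms");
        System.out.println("fair:   " + timeMillis(new ReentrantLock(true)) + " ms");
        // Typically the fair lock takes noticeably longer, because every
        // release hands off through the wait queue instead of letting an
        // already-running thread barge in; the exact ratio varies.
    }
}
```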
CodePudding user response:
In a naïve view, the behavior of threads using a fair lock would be like
| Thread 1 | Thread 2 | Thread 3 |
|---|---|---|
| Acquire | Do something | Do something |
| Critical Section | Try Acquire | Do something |
| Critical Section | Blocked | Try Acquire |
| Release | Acquire | Blocked |
| Do something | Critical Section | Blocked |
| Try Acquire | Release | Acquire |
| Blocked | Do something | Critical Section |
| Acquire | Do something | Release |
“Try Acquire” refers to a call to lock() that does not immediately succeed because another thread owns the lock. It does not refer to tryLock(), which isn’t fair in general.
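As an aside on that distinction: even on a fair lock, the untimed tryLock() is permitted to barge in and grab an available lock ahead of queued threads; the Javadoc suggests tryLock(0, TimeUnit.SECONDS) for a non-blocking attempt that still honors fairness. A minimal sketch (the class name is made up):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockFairnessSketch {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock fair = new ReentrantLock(true);

        // Untimed tryLock(): may succeed even while other threads are queued.
        if (fair.tryLock()) {
            try {
                System.out.println("acquired via untimed tryLock()");
            } finally {
                fair.unlock();
            }
        }

        // Zero-timeout timed tryLock(): honors the fairness setting,
        // so it fails instead of barging past queued threads.
        if (fair.tryLock(0, TimeUnit.SECONDS)) {
            try {
                System.out.println("acquired via timed tryLock");
            } finally {
                fair.unlock();
            }
        }
    }
}
```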
In this naïve view, the threads get the lock in the order “Thread 1”, “Thread 2”, “Thread 3”, because that’s the order of their acquisition attempts. In particular, when “Thread 1” tries to acquire the lock right at the time “Thread 2” releases it, it won’t overtake as would happen with an unfair lock; rather, “Thread 3” gets it because it has been waiting longer.
But, as the documentation says, thread scheduling is not fair. So the following may happen instead.
| Thread 1 | Thread 2 | Thread 3 |
|---|---|---|
| Acquire | Do something | Do something |
| Critical Section | Do something | |
| Critical Section | | |
| Release | | |
| Do something | | |
| Acquire | Try Acquire | Try Acquire |
| Critical Section | Blocked | Blocked |
| Critical Section | Blocked | Blocked |
The empty cells represent phases in which the threads simply do not get any CPU time. There might be more runnable threads than CPU cores, including the threads of other processes. The operating system may even prefer to let “Thread 1” continue on a core rather than switching to the other threads, simply because that thread is already running and switching takes time.
Generally, it’s not a good idea to try to predict the relative timing of reaching a certain point, like the lock acquisition, from the preceding workload. In an environment with an optimizing JIT compiler, even two threads executing exactly the same code with exactly the same input may have entirely different execution times.
So when we can’t predict the time of lock() attempts, it’s not very useful to insist that the lock be acquired in that unpredictable, unknown order. One explanation why developers still want fairness is that even when the resulting order is not predictable, it should ensure that every thread makes progress instead of waiting indefinitely for a lock while other threads repeatedly overtake it. But this brings us back to the unfair thread scheduling; even when there is no lock at all, there is no guarantee that all threads make progress.
So why does the fairness option still exist? Because sometimes, people are fine with the way it works in most cases, even when there is no strong guarantee that it will always work that way. Or simply because developers would repeatedly ask for it if it didn’t exist. Supporting fairness doesn’t cost much and doesn’t affect the performance of the unfair locks.