I am looking at one of the simplest examples I found and started to reason about SO
(synchronization order), or more precisely, the lack of it. Consider the example below:
int a, b; // two shared variables

Thread-X:

void threadX() {
    synchronized(this) {
        a = 1;
    }
    synchronized(this) {
        b = 1;
    }
}
And a reader thread, Thread-Y:

void threadY() {
    int r1 = b;
    int r2 = a;
}
For simplicity, let's suppose that Thread-Y does the reads exactly in this order: it will first read b and then a (the reverse of the writing order, that is).
The reading thread is allowed to see [1, 0] (as if b = 1 happened before a = 1). I think I also understand why: there is no synchronization order between the two actions, and as such there is no happens-before relationship, and according to the JLS this is a data race:

When a program contains two conflicting accesses that are not ordered by a happens-before relationship, it is said to contain a data race.

Thus the reads of a and b are two racy reads, so seeing b = 1 and a = 0 is allowed and possible.
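As a side note, the racy reads can be observed empirically. Below is a minimal sketch harness (the class and method names are my own, not from the question) that runs the writer and the unsynchronized reader concurrently many times and collects the (r1, r2) pairs actually seen. Any of the four combinations is legal; whether [1, 0] actually shows up depends on the JIT and the hardware, so the harness only records what it sees.

```java
import java.util.*;
import java.util.concurrent.*;

public class RaceHarness {
    int a, b; // the two shared variables from the question

    void threadX() {
        synchronized (this) { a = 1; }
        synchronized (this) { b = 1; }
    }

    int[] threadY() {       // unsynchronized, racy reads
        int r1 = b;
        int r2 = a;
        return new int[]{r1, r2};
    }

    // Runs the pair of threads 'iterations' times on fresh state
    // and returns the set of observed (r1, r2) results.
    public static Set<List<Integer>> run(int iterations) throws Exception {
        Set<List<Integer>> seen = new HashSet<>();
        for (int i = 0; i < iterations; i++) {
            RaceHarness h = new RaceHarness();
            Thread x = new Thread(h::threadX);
            FutureTask<int[]> y = new FutureTask<>(h::threadY);
            Thread yt = new Thread(y);
            x.start();
            yt.start();
            int[] r = y.get();
            x.join();
            yt.join();
            seen.add(List.of(r[0], r[1]));
        }
        return seen;
    }

    public static void main(String[] args) throws Exception {
        // [1, 0] may or may not appear on a given run; a tool like jcstress
        // is far better at provoking such outcomes than this naive loop.
        System.out.println(run(5_000));
    }
}
```

A plain loop like this is a poor substitute for a proper concurrency stress tool, but it illustrates which results are in play.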
Now this in turn allows the JVM to perform lock coarsening in the writer, so it becomes:
void threadX() {
    synchronized(this) {
        a = 1;
        b = 1;
    }
}
My question is: if the reader were initially written like this:
void threadY() {
    synchronized(this) {
        int r1 = b;
    }
    synchronized(this) {
        int r2 = a;
    }
}
Would lock coarsening still be allowed? I think I know the answer, but I would want to hear an educated explanation too.
CodePudding user response:
Lock coarsening (and reordering) is allowed because the synchronized readers get ordered either before or after the coarsened lock. They can never see what is happening while the coarsened lock is held, and hence can't observe any reordering inside the locked region.
For more information see: https://shipilev.net/blog/2016/close-encounters-of-jmm-kind/#myth-barriers-are-sane
Nice question btw. I was struggling with this particular example as well some time ago :)
CodePudding user response:
Yes, it's allowed.
Here is a simple explanation.
Remember that synchronized blocks:

- execute atomically:
  - a thread cannot enter a synchronized block while another thread is holding the same lock
  - when a thread enters the synchronized block, it immediately sees everything that was done in previously executed synchronized blocks
- execute in a global order consistent with the program order of every thread (the synchronization order you mentioned)
In other words, synchronized blocks always execute atomically in a global sequential order. Different executions can differ in how the synchronized blocks are interleaved, but it is always the case that:

- the 1st synchronized block from threadX() always executes before the 2nd one
- same for the synchronized blocks from threadY()
There are 6 possible interleavings:
    threadX        threadY    |     threadX        threadY    |     threadX        threadY
-------------------------------|-------------------------------|-------------------------------
synchronized {                 | synchronized {                | synchronized {
  a = 1;                       |   a = 1;                      |   a = 1;
}                              | }                             | }
synchronized {                 |               synchronized {  |               synchronized {
  b = 1;                       |                 int r1 = b;   |                 int r1 = b;
}                              |               }               |               }
              synchronized {   | synchronized {                |               synchronized {
                int r1 = b;    |   b = 1;                      |                 int r2 = a;
              }                | }                             |               }
              synchronized {   |               synchronized {  | synchronized {
                int r2 = a;    |                 int r2 = a;   |   b = 1;
              }                |               }               | }
           (Case A)            |            (Case B)           |            (Case C)
    threadX        threadY    |     threadX        threadY    |     threadX        threadY
-------------------------------|-------------------------------|-------------------------------
              synchronized {   |               synchronized {  |               synchronized {
                int r1 = b;    |                 int r1 = b;   |                 int r1 = b;
              }                |               }               |               }
              synchronized {   | synchronized {                | synchronized {
                int r2 = a;    |   a = 1;                      |   a = 1;
              }                | }                             | }
synchronized {                 |               synchronized {  | synchronized {
  a = 1;                       |                 int r2 = a;   |   b = 1;
}                              |               }               | }
synchronized {                 | synchronized {                |               synchronized {
  b = 1;                       |   b = 1;                      |                 int r2 = a;
}                              | }                             |               }
           (Case D)            |            (Case E)           |            (Case F)
When you merge the synchronized blocks in threadY():

void threadY() {                 void threadY() {
    synchronized(this) {             synchronized(this) {
        int r1 = b;                      int r1 = b;
    }                        =>          int r2 = a;
    synchronized(this) {             }
        int r2 = a;              }
    }
}
then you effectively leave only those executions in which the synchronized blocks from threadY() are next to each other: i.e. cases A, C and D.
Since no new possible executions appear after this optimization, the optimization is legal.
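The case analysis above can be machine-checked with a tiny sequential enumerator (my own sketch, not part of the original answer): it generates every interleaving of the four blocks that respects each thread's program order, optionally keeps threadY()'s two blocks adjacent (the coarsened reader), and collects the observable (r1, r2) pairs. Note that it only enumerates sequentially consistent interleavings, so it is a sanity check of the case analysis, not a JMM model checker.

```java
import java.util.*;
import java.util.function.Consumer;

public class CoarseningOutcomes {
    // Block ids: 0 -> a = 1 (X's 1st), 1 -> b = 1 (X's 2nd),
    //            2 -> r1 = b (Y's 1st), 3 -> r2 = a (Y's 2nd)
    static Set<List<Integer>> outcomes(boolean readerCoarsened) {
        Set<List<Integer>> results = new HashSet<>();
        permute(new int[]{0, 1, 2, 3}, 0, order -> {
            // keep only interleavings consistent with each thread's program order
            if (indexOf(order, 0) > indexOf(order, 1)) return;
            if (indexOf(order, 2) > indexOf(order, 3)) return;
            // a coarsened reader executes Y's two blocks back to back
            if (readerCoarsened && indexOf(order, 3) != indexOf(order, 2) + 1) return;
            int a = 0, b = 0, r1 = 0, r2 = 0;
            for (int block : order) {
                if (block == 0) a = 1;
                else if (block == 1) b = 1;
                else if (block == 2) r1 = b;
                else r2 = a;
            }
            results.add(List.of(r1, r2));
        });
        return results;
    }

    // Visits every permutation of arr (standard swap-based recursion).
    static void permute(int[] arr, int k, Consumer<int[]> visit) {
        if (k == arr.length) { visit.accept(arr.clone()); return; }
        for (int i = k; i < arr.length; i++) {
            int t = arr[k]; arr[k] = arr[i]; arr[i] = t;
            permute(arr, k + 1, visit);
            t = arr[k]; arr[k] = arr[i]; arr[i] = t;
        }
    }

    static int indexOf(int[] arr, int v) {
        for (int i = 0; i < arr.length; i++) if (arr[i] == v) return i;
        return -1;
    }

    public static void main(String[] args) {
        Set<List<Integer>> before = outcomes(false); // 6 interleavings: cases A-F
        Set<List<Integer>> after = outcomes(true);   // only cases A, C and D survive
        System.out.println("before: " + before + ", after: " + after);
        // coarsening must not make any new outcome observable
        if (!before.containsAll(after)) throw new AssertionError();
        // [1, 0] is impossible either way
        if (before.contains(List.of(1, 0))) throw new AssertionError();
    }
}
```

Running it shows that the coarsened reader produces a subset of the original outcome set, which is exactly why the optimization is legal.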
For stricter and more detailed explanations I would recommend:

- the "Lock Coarsening" chapter in J. Manson's Ph.D. thesis on the JMM
- the lock coarsening example in the article by A. Shipilev recommended in the answer above
CodePudding user response:
Lock coarsening is always possible if the JVM can prove that the subsequent synchronized blocks will use the same lock object. Even the reader
void threadY() {
    synchronized(this) {
        int r1 = b;
    }
    synchronized(this) {
        int r2 = a;
    }
}
may get optimized to
void threadY() {
    synchronized(this) {
        int r1 = b;
        int r2 = a;
    }
}
But if both methods are instance methods invoked on the same object, in other words, their this refers to the same object, their executions can't overlap. Therefore, even if the reads and/or the writes get reordered, the result [1, 0]
is impossible.
Even lock elimination is allowed, as long as the execution environment can ensure that the result [1, 0] will never happen. One well-known example is the scenario in which the object provably is never seen by a different thread.