Why is the sufficient set of synchronization edges unique and how to build it?


The JLS states:

A set of synchronization edges, S, is sufficient if it is the minimal set such that the transitive closure of S with the program order determines all of the happens-before edges in the execution. This set is unique.

Why is the sufficient set unique?

How can we determine which synchronization edges are in the sufficient set and which aren't?

CodePudding user response:

I found the answers.

Why is the sufficient set unique?

According to the JLS, happens-before is a partial order:

The happens-before order is given by the transitive closure of synchronizes-with edges and program order. It must be a valid partial order: reflexive, transitive and antisymmetric.

That means happens-before, once the trivial reflexive edges are disregarded, can be represented as a directed acyclic graph (DAG):

Strict partial orders correspond directly to directed acyclic graphs (DAGs)

According to Wikipedia:

The transitive reduction of a DAG is the graph with the fewest edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the covering relation of the reachability relation ≤ of the DAG. It is a subgraph of the DAG, formed by discarding the edges u → v for which the DAG also contains a longer directed path from u to v. Like the transitive closure, the transitive reduction is uniquely defined for DAGs.

This transitive reduction is the "minimal sufficient set" from the question, and since the transitive reduction of a DAG is uniquely defined, the sufficient set is unique.

How can we determine which synchronization edges are in the sufficient set and which aren't?

From Wikipedia:

It is a subgraph of the DAG, formed by discarding the edges u → v for which the DAG also contains a longer directed path from u to v.
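
In code, this check is just a reachability test on the happens-before DAG. The following is a minimal sketch of my own (nothing here is defined by the JLS; the class name, the vertex names and the example edges are invented for illustration): it flags every edge u -> v for which a longer directed path from u to v exists, and the edges that survive form the minimal sufficient set.

import java.util.*;

public class TransitiveReduction {

    // Adjacency list of the happens-before DAG: action -> actions it directly precedes.
    private final Map<String, Set<String>> edges = new HashMap<>();

    void addEdge(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    // Is 'to' reachable from 'from' without using the direct edge from -> to?
    private boolean reachableByLongerPath(String from, String to) {
        Deque<String> stack = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        for (String next : edges.getOrDefault(from, Set.of())) {
            if (!next.equals(to)) {
                stack.push(next); // skip the direct edge, keep everything else
            }
        }
        while (!stack.isEmpty()) {
            String v = stack.pop();
            if (!seen.add(v)) {
                continue;
            }
            if (v.equals(to)) {
                return true;
            }
            for (String next : edges.getOrDefault(v, Set.of())) {
                stack.push(next);
            }
        }
        return false;
    }

    // Edges already implied by a longer path, i.e. edges NOT needed in the sufficient set.
    Set<String> redundantEdges() {
        Set<String> redundant = new LinkedHashSet<>();
        for (Map.Entry<String, Set<String>> e : edges.entrySet()) {
            for (String to : e.getValue()) {
                if (reachableByLongerPath(e.getKey(), to)) {
                    redundant.add(e.getKey() + " -> " + to);
                }
            }
        }
        return redundant;
    }

    public static void main(String[] args) {
        TransitiveReduction hb = new TransitiveReduction();
        hb.addEdge("w1", "unlock");   // program order in thread 1
        hb.addEdge("unlock", "lock"); // synchronizes-with edge between the threads
        hb.addEdge("lock", "r1");     // program order in thread 2
        hb.addEdge("w1", "r1");       // hypothetical extra synchronization edge
        // Prints [w1 -> r1]: that edge is implied by the longer path w1 -> unlock -> lock -> r1.
        System.out.println(hb.redundantEdges());
    }
}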

CodePudding user response:

This is an answer to the actual question you placed in the comment:

I don't care. I asked the question because I'm curious about possible optimizations: "the minimal set" means some synchronizes-with edges are not used and thus can be optimized-out.

I'm trying to get a good understanding of this topic myself, so this answer is also a way for me to structure my thoughts.

The JMM is a specification for an abstract machine. It defines all possible legal executions, and each execution has some result. An execution is legal if every read sees either the most recent write before it in the happens-before order or a write it is in a data race with (let's ignore causality for the sake of simplicity).

A JVM implementation is not obligated to emit every possible legal execution. For every execution it does emit, it only matters that the result is indistinguishable from the result of some execution allowed by the JMM.

So in other words, as long as the result of an execution is correct, nobody cares how the result was established.

Let's have a look at a concrete example:

volatile int a = 0

thread1:
     a += 1
     a += 1

thread2:
     r1 = a

All possible legal results are r1={0,1,2}. A valid JVM implementation could optimize this as follows:

volatile int a = 0

thread1:
      a += 2

thread2:
     r1 = a

This is valid since the results are r1={0,2}, which is a subset of the legal results.
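
For completeness, here is the original (un-optimized) program as runnable Java; the class name is mine and the code is only a sketch of the pseudocode above. Depending on how the threads interleave, r1 will be 0, 1 or 2, matching the set of legal results.

public class VolatileIncrementExample {

    // The shared volatile variable from the example above.
    static volatile int a = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(() -> {
            a += 1; // first volatile read + volatile write
            a += 1; // second volatile read + volatile write
        });
        Thread thread2 = new Thread(() -> {
            int r1 = a; // may legally observe 0, 1 or 2
            System.out.println("r1 = " + r1);
        });
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
    }
}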

The happens-before order (including the synchronizes-with edges) is only used to determine the possible legal results, so there is no need to optimize any of those edges out.

A JVM implementation is by no means obligated to preserve the original synchronization operations and hence has a lot of freedom for performance optimizations.

For a better explanation, check out this great presentation by Aleksey Shipilev.
