Are mutexes alone sufficient for thread safe operations?


Suppose we have multiple threads incrementing a common variable X, and each thread synchronizes using a mutex M:

function_thread_n() {

    ACQUIRE(M)
    X++;
    RELEASE(M)

}
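Concretely, the pseudocode above corresponds to something like the following POSIX threads sketch (the names `run_counter`, `NTHREADS`, and `ITERS` are made up for the demo):

```c
#include <pthread.h>

/* Illustrative parameters for the demo. */
enum { NTHREADS = 4, ITERS = 100000 };

static long X;                                /* the shared counter */
static pthread_mutex_t M = PTHREAD_MUTEX_INITIALIZER;

static void *function_thread_n(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&M);               /* ACQUIRE(M) */
        X++;
        pthread_mutex_unlock(&M);             /* RELEASE(M) */
    }
    return NULL;
}

/* Runs NTHREADS incrementing threads and returns the final value of X. */
long run_counter(void)
{
    pthread_t t[NTHREADS];
    X = 0;
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, function_thread_n, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return X;
}
```

With the lock/unlock pair in place, the final count is exactly NTHREADS * ITERS; removing the pair makes lost updates likely.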

The mutex ensures that only one thread updates X at any time, but does it also ensure that, once updated, the new value of X is visible to the other threads? Say the initial value of X is 2; thread 1 increments it to 3. However, another processor's cache might still hold the earlier value 2, and a thread running there could increment that stale 2 to 3, losing an update. The third condition for cache coherence only requires that the order of writes made by different processors is preserved, right?

I guess this is what memory barriers are for: if a memory barrier is issued before releasing the mutex, then the issue can be avoided.

CodePudding user response:

This is a great question.
TL;DR: The short answer is "yes".

Mutexes provide three primary services:

  1. Mutual exclusion, to ensure that only one thread is executing instructions within the critical section between acquire and release of a given mutex.
  2. Compiler optimization fences, which prevent the compiler's optimizer from moving load/store instructions out of that critical section during compilation.
  3. Architectural memory barriers appropriate to the current architecture, which in general includes a memory acquire fence instruction during mutex acquire and a memory release fence during mutex release. These fences prevent superscalar processors from effectively reordering memory load/stores across the fence at runtime in a way that would cause them to appear to be "performed" outside the critical section.

The combination of all three ensures that data accesses within the critical section delimited by the mutex acquire/release will never observably race with data accesses from another thread that protects its accesses with the same mutex.
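Services 2 and 3 can be illustrated with C11 atomics: a release store paired with an acquire load provides the same ordering guarantee as a mutex release followed by a mutex acquire. The names here (`payload`, `ready`, `producer`, `handoff`) are made up for this sketch:

```c
#include <stdatomic.h>
#include <pthread.h>

static int payload;         /* plain, non-atomic shared data      */
static atomic_int ready;    /* plays the role of the mutex state  */

static void *producer(void *arg)
{
    (void)arg;
    payload = 42;           /* write inside the "critical section" */
    /* Release: the payload store cannot be reordered (by the
       compiler or the CPU) to after this flag store. */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return NULL;
}

/* Spins until the flag is set, then reads the payload. */
int handoff(void)
{
    pthread_t t;
    payload = 0;
    atomic_store(&ready, 0);
    pthread_create(&t, NULL, producer, NULL);
    /* Acquire: once we observe ready == 1, the matching release
       guarantees the payload store is visible here too. */
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;  /* spin */
    pthread_join(t, NULL);
    return payload;
}
```

Without the release/acquire pairing (e.g. with `memory_order_relaxed` on both sides), the reader could legally observe `ready == 1` while still seeing the stale `payload` value.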

Regarding the part of your question involving caches: coherent cache memory systems separately ensure that, at any particular moment, a given line of memory is writeable by at most one core. Furthermore, a memory store does not complete until any "newly stale" copies cached elsewhere in the caching system (e.g. in the L1 of other cores) have been invalidated. So by the time the mutex release is visible, the updated value of X is too.
