Establish a release atomic ordering on an atomic object without writing into it


I'm using Rust, but Rust implements the C++ atomic memory model, so I will present my question in C++.

I have an atomic object M. I want to issue a pseudo load/store operation on M, so that the store this operation "reads from" happens-before this operation, and any load that "reads from" this operation's store happens-after it. Basically, I want a memory_order_acq_rel operation, but without changing the value of M.

The first part is easy: a memory_order_acquire load will suffice. But for the second part I need a memory_order_release store, and I don't know the current value of the atomic so I can't store it.

I know I can implement that with a compare-exchange loop that loads the value of M and stores it back:

#include <atomic>

void create_acq_rel(std::atomic<int>& object)
{
    // Acquire-load the current value, so the store we read from happens-before us.
    int value = object.load(std::memory_order_acquire);
    // Store the same value back with release ordering; on failure the acquire
    // reload refreshes `value` and we retry.
    while (!object.compare_exchange_weak(
        value, value,
        std::memory_order_release, std::memory_order_acquire
    ))
    {
    }
}

However, an obvious downside of this approach is that it generates a compare-exchange loop that isn't actually needed. Is it possible to implement this more efficiently?

At first I thought fences could help, but it seems like fences need an actual load/store to synchronize with. Is this true?

I don't want to change the code this should synchronize with before/after, only this part of the code, because I think this will be simpler (I even prefer the compare-exchange loop over changing that code, because a) it is a lot more code and b) it is on the hot path while this code is not).


Context: I have two lock-free linked lists (a list of partially empty chunks and a list of full chunks, in an arena). Threads mostly traverse the first list (to find a place to allocate), but I may move an element from the first list to the second list (when a chunk becomes full), and a thread currently traversing that element will continue its traversal in the second list.

The first list is fully synchronized on the list head: new elements are added only after the initialization of all previous elements, so I can be sure that threads traversing this list will only visit fully initialized elements: they load the list head, whose element was initialized before being put into the list, and all elements after it (I append at the beginning of the lists) were initialized before it.

But sometimes I append an element directly to the second list (when an element is too big to fit in a chunk, I allocate a chunk specifically for it), and now threads that were traversing the first list and continued their traversal in the second list may see it uninitialized, because it is not synchronized through the first list's head like the other elements. To fix that, I want the addition of this element to participate in the elements' initialization chain, so that initialization of prior elements happens-before it and it happens-before initialization of future elements. I know there are other ways to synchronize this (for example, by synchronizing on the next pointers), but as I said, I want to touch only the code that appends the element directly to the second list.
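To make the scenario concrete, here is a minimal sketch of the two lists; all names in it (Chunk, partial_head, full_head, push_partial, push_full_directly) are hypothetical placeholders, not the real code:

#include <atomic>

struct Chunk {
    std::atomic<Chunk*> next{nullptr};
    // ... chunk payload ...
};

std::atomic<Chunk*> partial_head{nullptr};  // list of partially empty chunks
std::atomic<Chunk*> full_head{nullptr};     // list of full chunks

// Normal path: the chunk is fully initialized before the release CAS on
// partial_head publishes it, so a thread that acquire-loads partial_head sees
// this chunk (and everything published before it) initialized.
void push_partial(Chunk* chunk)
{
    Chunk* old = partial_head.load(std::memory_order_relaxed);
    do {
        chunk->next.store(old, std::memory_order_relaxed);
    } while (!partial_head.compare_exchange_weak(
        old, chunk,
        std::memory_order_release, std::memory_order_relaxed));
}

// Problematic path: the chunk bypasses the first list entirely, so its
// initialization is not part of the release chain on partial_head. This is
// where a value-preserving acquire+release operation on partial_head (like
// create_acq_rel above) is wanted, to splice this chunk into that chain.
void push_full_directly(Chunk* chunk)
{
    Chunk* old = full_head.load(std::memory_order_relaxed);
    do {
        chunk->next.store(old, std::memory_order_relaxed);
    } while (!full_head.compare_exchange_weak(
        old, chunk,
        std::memory_order_release, std::memory_order_relaxed));
}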

CodePudding user response:

You can use fetch_add to add 0 to the value, like this:

M.fetch_add(0, std::memory_order_acq_rel);

This performs an atomic read-modify-write operation, and memory is affected according to the ordering specified in the second parameter, so std::memory_order_acq_rel gives you both the acquire and the release semantics while leaving the value of M unchanged.
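For example, the compare-exchange loop from the question could be reduced to a single read-modify-write (a minimal sketch keeping the same create_acq_rel signature):

#include <atomic>

void create_acq_rel(std::atomic<int>& object)
{
    // Atomically adds 0: the value is unchanged, but the operation is a
    // read-modify-write with both acquire and release ordering.
    object.fetch_add(0, std::memory_order_acq_rel);
}

This still performs an atomic read-modify-write, but without the explicit retry loop.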
