Is a native_handle_type of pthread_mutex_t* guaranteed for libc++ and libstdc++?

Look at this code:

#include <pthread.h>
#include <mutex>

int main()
{
    std::mutex mtx;
    pthread_mutex_t *native = mtx.native_handle();
}

Do libstdc++ or libc++ guarantee that the native_handle() of a std::mutex is always a pthread_mutex_t* pointer? That would be nice because I could then adjust the spin count of this std::mutex implementation.

Windows gives only a void* pointer for native_handle and I don't know its purpose. If I cast it to a CRITICAL_SECTION* and call any of Windows' own functions on it, I get a crash.

Does Windows return the handle used in the slow path synchronization across the kernel when there is contention?

CodePudding user response:

If you want to future-proof your implementation, you can use overload resolution to dispatch between different native_handle types.

#include <mutex>

#ifndef _WIN32
# include <pthread.h>

void tune_mutex(pthread_mutex_t* mutex)
{
  // adjust spin-count
}
#endif

void tune_mutex(void*)
{
  // Fallback for unknown types. Do nothing
}

int main()
{
  std::mutex mutex;
  tune_mutex(mutex.native_handle());
}

At least for GCC and Clang this should work. Also note that changing the return type would break the existing ABI (even if the standard allows it), so you can be reasonably sure that this won't change anytime soon. And I don't see a reason why it should change: people would want whatever improvements come to one mutex type to also be present in the other.

For Windows, there is a note in the documentation:

    native_handle_type is defined as a Concurrency::critical_section* that's cast as void*

CodePudding user response:

I guess mutex::lock() and mutex::unlock() aren't backed by EnterCriticalSection() and LeaveCriticalSection() under Windows.
I wrote this little program:

#include <Windows.h>
#include <iostream>
#include <mutex>

using namespace std;

int main()
{
    CRITICAL_SECTION cs;
    InitializeCriticalSection( &cs );
    EnterCriticalSection( &cs );
    LeaveCriticalSection( &cs );
    mutex mtx;
    for( size_t i = 100'000'000; i--; )
    {
        mtx.lock();
        mtx.unlock();
    }
    cout << "finished" << endl;
}

Using the calls to EnterCriticalSection() and LeaveCriticalSection(), I set breakpoints on the DLL entry points of these two functions, and neither is hit in the loop. So I think std::mutex is implemented with the usual combination of an atomic variable and a binary semaphore under Windows.

But I thought some more about spinning: spinning makes sense when (a) the mutex is held for such a short time that spinning might succeed, and (b) there's an extremely high locking and unlocking frequency across all cores. I think that's rather a rare case.
