Reader/Writer: multiple heavy readers, only 1 write per day


I have a huge tbb::concurrent_unordered_map that gets "read" heavily by multiple (~60) threads concurrently.

Once per day I need to clear it (either fully or selectively). Erasing is not thread-safe in the TBB implementation, so some synchronisation needs to be in place to prevent UB.

However I know for a fact that the "write" will only happen once per day (the exact time is unknown).

I've been looking at std::shared_mutex to allow concurrent reads, but I am afraid that even in an uncontended scenario it might slow things down significantly.

Is there a better solution for this?

Perhaps checking a std::atomic<bool> before locking on the mutex?

Thanks in advance.

CodePudding user response:

This is a perfect job for a read/write lock.

In C++ this can be implemented by having a std::shared_mutex, then using a std::unique_lock to lock it for writing and a std::shared_lock to lock it for reading. See this post for example code.

The end effect is that readers never block on each other; reads can all happen at the same time, but while the writer holds the lock, everything else blocks to let the writing operation proceed.
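A minimal sketch of what that looks like, assuming a global map and mutex (the names g_map, g_mutex, the function names, and the int key/value types are placeholders, not from the question):

    #include <shared_mutex>
    #include <mutex>
    #include <tbb/concurrent_unordered_map.h>

    tbb::concurrent_unordered_map<int, int> g_map;   // hypothetical map
    std::shared_mutex g_mutex;

    int lookup(int key)                   // called by the ~60 reader threads
    {
        std::shared_lock lock(g_mutex);   // shared: readers don't block each other
        auto it = g_map.find(key);
        return it != g_map.end() ? it->second : -1;
    }

    void daily_clear()                    // called once per day
    {
        std::unique_lock lock(g_mutex);   // exclusive: waits for readers, blocks new ones
        g_map.clear();
    }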

If the write takes a long time, so long that blocking readers once per day is unacceptable, then you can have the writer create and populate a new copy of the data without taking a lock, then take the write end of the lock and swap the data in (sketched in code after the lists below):

Readers:

  1. Lock mutex with a shared_lock
  2. Do stuff
  3. Unlock
  4. Repeat

Writer:

  1. Create new copy of data
  2. Lock mutex with a unique_lock
  3. Swap data quickly
  4. Unlock
  5. Repeat tomorrow
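A rough sketch of that writer, reusing the hypothetical g_map and g_mutex from the snippet above; only the swap itself happens under the exclusive lock:

    using Map = tbb::concurrent_unordered_map<int, int>;

    void daily_rebuild()
    {
        Map fresh;
        // ... populate `fresh` here, without holding any lock (this can take a while) ...

        {
            std::unique_lock lock(g_mutex);
            g_map.swap(fresh);            // quick: just exchanges internal state
        }
        // the old contents now sit in `fresh` and are destroyed here, after unlocking
    }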

A shared_lock on a shared_mutex will be fast. You could use a double-checked locking strategy, but I would not do that until you have done performance testing and probably also taken a look at the source for shared_lock, because I suspect it does something similar already, and a double check on the read end might just add overhead unnecessarily. Also, I don't have enough coffee in me yet to work out double-checked locking in a read/write lock scenario.

There is a threading construct called a spin lock as well, but it's really just an encapsulation of a double-checked lock that repeats the "check" until it clears. It's a good construct, but again you'll want performance analysis and a look at the shared_lock/shared_mutex source, because they might spin already. A good implementation of a spin lock can be found here; it covers some common gotchas. You might have to tweak it to get a read/write spinlock.

Generally speaking, though, it's best to use existing constructs during the initial implementation, at the very least as a clearly coded proof of concept. Then, if you find you're seeing too much read contention, you can optimize from there. But you need the general strategy down first, and 91 times out of a hundred it's good enough. In this case, no matter what, some manifestation of a read/write lock is what you're going to end up with.

CodePudding user response:

It might require a bit of extra work to maintain, but you can use a copy-on-write scheme.

Keep the map in a singleton within a shared pointer.

For "read" operations, have users thread-safely copy the shared pointer and use it for as long as they want.

For "write" operations, create a new instance map in a new shared pointer, fill it with whatever you want and replace the old version it in the singleton.

This way "read" users will still see the old version and can use it safely. Just make sure they occasionally get the newest version from the singleton. Perhaps, even give them a handle that automatically updates the shared pointer once a second or something.

This works as long as you don't need all the threads to switch to the new version at exactly the same moment.
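A sketch of that scheme, assuming C++20's std::atomic<std::shared_ptr> (before C++20 you can use the std::atomic_load/std::atomic_store overloads for shared_ptr instead); the global variable, function names, and int key/value types are placeholders:

    #include <atomic>
    #include <memory>
    #include <tbb/concurrent_unordered_map.h>

    using Map = tbb::concurrent_unordered_map<int, int>;

    // "singleton": one atomically replaceable pointer to the current map
    std::atomic<std::shared_ptr<const Map>> g_current{std::make_shared<const Map>()};

    // Reader: grab a snapshot once, then use it for as long as needed.
    int lookup(int key)
    {
        std::shared_ptr<const Map> snapshot = g_current.load();
        auto it = snapshot->find(key);
        return it != snapshot->end() ? it->second : -1;
    }

    // Writer (once per day): build a fresh map off to the side, then publish it.
    void daily_replace()
    {
        auto fresh = std::make_shared<Map>();
        // ... fill `fresh` ...
        g_current.store(std::move(fresh));  // old map is freed once the last reader lets go of it
    }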

In another scheme, you create an atomic boolean to indicate that an update is incoming, and make all threads pause their operations on the map while it is true. Once they have all stopped, you perform the update and let them resume.
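One possible sketch of that, using an atomic flag plus an active-reader counter (all names invented for illustration). Note that this is essentially a hand-rolled reader/writer lock, so a std::shared_mutex is usually the simpler choice:

    #include <atomic>
    #include <thread>

    std::atomic<bool> g_update_pending{false};
    std::atomic<int>  g_active_readers{0};

    void reader_pass()                       // one read operation on the map
    {
        for (;;) {
            while (g_update_pending.load()) // an update is incoming: pause
                std::this_thread::yield();
            g_active_readers.fetch_add(1);
            if (!g_update_pending.load())
                break;                       // flag still clear, safe to read
            g_active_readers.fetch_sub(1);   // lost the race with the writer, back off
        }
        // ... read from the map ...
        g_active_readers.fetch_sub(1);
    }

    void writer_update()                     // once per day
    {
        g_update_pending.store(true);
        while (g_active_readers.load() != 0) // wait until all readers have stopped
            std::this_thread::yield();
        // ... clear / rebuild the map ...
        g_update_pending.store(false);
    }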
