
We have implemented a reader-writer lock with

 typedef boost::unique_lock<boost::shared_mutex> WriterLock;
 typedef boost::shared_lock<boost::shared_mutex> ReadersLock;

where we have many multithreaded readers and only a few writers. Readers share access with other readers but block on a writer; a writer blocks until it has exclusive access to the resource.
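
A simplified sketch of the usage pattern (doRead and doWrite stand in for our real work):

boost::shared_mutex resourceMutex;

void reader() {
    ReadersLock lock(resourceMutex);   // shared: runs concurrently with other readers
    doRead();
}

void writer() {
    WriterLock lock(resourceMutex);    // exclusive: blocks until no reader or writer holds the mutex
    doWrite();
}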

We could not find this in the Boost documentation... What is the policy for preventing writer starvation?
For example, if there are many readers all pounding the lock from a thread pool, is there any guaranteed upper limit on the number of lock attempts before the writer finally acquires the lock?

We have been seeing performance numbers that seem to indicate the writer has to wait until there are no readers at all, and in rare cases that takes a long time, because new readers can request locks while the current readers are still being serviced. When that happens, the writer in our code waits a very long time.

We would prefer a more queue-like system: when a writer requests the lock, all current readers drain out, but all new incoming readers block behind the writer's request.

What is the behavior of the upgradeable lock concept in Boost.Thread? The documentation does not say either way how it handles writer starvation.
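
For reference, a sketch of the upgrade mechanics as we understand them (needsWrite and doWrite are placeholders):

boost::shared_mutex mtx;

void readThenMaybeWrite() {
    // Shares the mutex with plain readers, but only one thread
    // may hold upgrade ownership at a time.
    boost::upgrade_lock<boost::shared_mutex> up(mtx);
    if (needsWrite()) {
        // Atomically upgrades to exclusive ownership, blocking
        // until the current readers release their shared locks.
        boost::upgrade_to_unique_lock<boost::shared_mutex> unique(up);
        doWrite();
    }   // exclusive ownership reverts to upgrade ownership here
}

Whether new readers are held off while an upgrade is pending is exactly the part we could not find specified.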

meissnersd
  • http://stackoverflow.com/a/9385208/98654 states that the 'fairness' of `boost::shared_mutex` is implementation-specific, but there isn't a source for that claim. – Nate Kohl Oct 25 '12 at 20:10
  • Were you trying to give writers higher priority than readers? – PiotrNycz Oct 25 '12 at 20:25
  • @PiotrNycz - no, all threads had equal priority. Readers are fast, but there are potentially many readers and only a few writers. So if a writer is blocked, it could be a long time if it has to wait until there are no readers at all. – meissnersd Oct 26 '12 at 16:36
  • another interesting thread. http://stackoverflow.com/questions/989795/example-for-boost-shared-mutex-multiple-reads-one-write – meissnersd Oct 26 '12 at 16:37

3 Answers


A small improvement on @Guerrero's solution that increases fairness for multiple readers and multiple writers, so that no one starves to death:

boost::shared_mutex resourceMutex;
boost::mutex gateMutex;              // guards writeRequests
boost::condition_variable gate;
int writeRequests = 0;               // pending/active writers; only touched under gateMutex

void read() {
    {
        boost::unique_lock<boost::mutex> g(gateMutex);
        while (writeRequests > 0)    // new readers queue behind a pending writer
            gate.wait(g);
    }
    ReadersLock lock(resourceMutex);
    doRead();
}

void write() {
    {
        boost::unique_lock<boost::mutex> g(gateMutex);
        while (writeRequests > 0)    // new writers queue too, so readers get a turn
            gate.wait(g);
        ++writeRequests;
    }
    {
        WriterLock lock(resourceMutex);   // waits for in-flight readers to drain
        doWrite();
    }
    {
        boost::unique_lock<boost::mutex> g(gateMutex);
        --writeRequests;
    }
    gate.notify_all();               // wake everyone for a fair re-competition
}

In this solution, a new and fair competition will start every time a writer leaves the scope.
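
For illustration, a minimal driver under these definitions (the thread and iteration counts are arbitrary):

int main() {
    boost::thread_group pool;
    for (int i = 0; i < 8; ++i)        // many readers pounding the lock...
        pool.create_thread([] { for (int j = 0; j < 10000; ++j) read(); });
    for (int i = 0; i < 2; ++i)        // ...and a few writers
        pool.create_thread([] { for (int j = 0; j < 100; ++j) write(); });
    pool.join_all();                   // writers make progress even while readers keep arriving
    return 0;
}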

Shloim

Without knowing much about the Boost implementation, perhaps you can prevent writer starvation in your own code. The readers could wait while any write request is pending. Maybe something like this:

boost::shared_mutex resourceMutex;
boost::mutex gateMutex;              // guards writeRequests
boost::condition_variable gate;
int writeRequests = 0;               // only touched under gateMutex

void read() {
    {
        boost::unique_lock<boost::mutex> g(gateMutex);
        while (writeRequests > 0)    // hold new readers back while a writer waits
            gate.wait(g);
    }
    ReadersLock lock(resourceMutex);
    doRead();
}

void write() {
    {
        boost::unique_lock<boost::mutex> g(gateMutex);
        ++writeRequests;             // announce intent: new readers now block
    }
    {
        WriterLock lock(resourceMutex);  // waits for current readers to drain
        doWrite();
    }
    {
        boost::unique_lock<boost::mutex> g(gateMutex);
        --writeRequests;
    }
    gate.notify_all();
}
Guerrero
  • Yeah, we are thinking about a scheme like this. It will be a bit tricky to make it completely watertight, with no race conditions, etc. – meissnersd Oct 26 '12 at 16:38
  • This is unfair towards readers. If multiple writers exist, then they might starve out the readers. I've posted a similar solution for this issue that increases the fairness. – Shloim Nov 17 '15 at 09:00

If what you want is a FIFO approach, Boost implements several scheduling strategies (including FIFO) in its Statechart library. I am guessing you will have to adapt a lot of your code to use it, though. Check out the documentation on fifo_scheduler and FifoWorker:

http://www.boost.org/doc/libs/1_51_0/libs/statechart/doc/reference.html#FifoWorker

http://www.boost.org/doc/libs/1_51_0/libs/statechart/doc/reference.html#fifo_scheduler.hpp

imreal