30

I have an application where some STL containers are read by 3 threads and written by 2. I know TBB provides concurrent containers, but it is not an option in my application.

So I want to make the program thread-safe using std::mutex and my bare hands. This is a simple version of what I did:

int readers = 0;
std::mutex write;

// One write, no reads.
void write_fun()
{
    write.lock();// We lock the resource
    while(readers > 0){}// We wait till everyone finishes read.
    // DO WRITE
    write.unlock();// Release
}

// Multiple reads, no write
void read_fun()
{
    // We wait if it is being written.
    while(!write.try_lock()){}
    write.unlock();

    readers++;
    // do read
    readers--;
}

Is this the correct way to do this in C++11?

VSZM

2 Answers

48

Pretty close. A couple of things to note: in C++, for exception safety and readability, it is (IMO) good to use RAII locks. What you really need is a shared_mutex, like Boost's or the one standardized later (std::shared_timed_mutex in C++14, std::shared_mutex in C++17).

#include <mutex>        // std::lock_guard
#include <shared_mutex> // std::shared_mutex, std::shared_lock

std::shared_mutex write; // use Boost's shared_mutex, or std::shared_timed_mutex (C++14) / std::shared_mutex (C++17)

// One write, no reads.
void write_fun()
{
    std::lock_guard<std::shared_mutex> lock(write);
    // DO WRITE
}

// Multiple reads, no write
void read_fun()
{
    std::shared_lock<std::shared_mutex> lock(write);
    // do read
}

If you don't want to use Boost, @HowardHinnant was so kind as to give a link to a reference implementation.
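For completeness, a small usage sketch of the pattern above (assuming, purely for illustration, that the protected data is a std::vector<int> and that a C++17 compiler is available):

#include <cstddef>
#include <iostream>
#include <mutex>        // std::lock_guard
#include <shared_mutex> // std::shared_mutex, std::shared_lock
#include <thread>
#include <vector>

std::shared_mutex write;
std::vector<int> data; // hypothetical shared container being protected

// One write, no reads.
void write_fun(int value)
{
    std::lock_guard<std::shared_mutex> lock(write); // exclusive: blocks readers and other writers
    data.push_back(value);
}

// Multiple reads, no write.
std::size_t read_fun()
{
    std::shared_lock<std::shared_mutex> lock(write); // shared: many readers can hold this at once
    return data.size();
}

int main()
{
    std::thread writer([]{ for (int i = 0; i < 1000; ++i) write_fun(i); });
    std::thread reader1([]{ for (int i = 0; i < 1000; ++i) (void)read_fun(); });
    std::thread reader2([]{ for (int i = 0; i < 1000; ++i) (void)read_fun(); });
    writer.join();
    reader1.join();
    reader2.join();
    std::cout << "elements written: " << read_fun() << "\n";
}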

aaronman
  • You've got the wrong mutex type on the `lock_guard` and `shared_lock` (should be `std::shared_mutex`). Also here is a reference implementation: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2406.html#shared_mutex_imp – Howard Hinnant Nov 11 '13 at 20:52
  • @HowardHinnant fixed, man I must be out of it today, messing everything up, I'll add the reference implementation to the answer too – aaronman Nov 11 '13 at 20:53
  • +1 Thanks for your great answer. Well, I guess this is the point where I can no longer put off adding this huge dependency to my project. – VSZM Nov 11 '13 at 21:33
  • @VSZM Did you see the link to a reference implementation at the bottom? I don't think it's that large (you don't need to add Boost). – aaronman Nov 11 '13 at 22:11
  • Yes, I did see it. However, I should have added Boost a long time ago; there are quite a few other implementations I could have spared myself by adding it. Not going to put it off any further. And it turns out it was just a few clicks with NuGet. – VSZM Nov 12 '13 at 06:47
  • I'm afraid `std::shared_mutex` is (will be) in c++17, not c++14. – Messa Aug 21 '16 at 16:02
8

This is safe, but still likely not fair or performant:

#include <atomic>
#include <mutex>

std::atomic<int> readers{0}; // atomic, so the counter itself is not a data race
std::mutex write;

// One write, no reads.
void write_fun()
{
    write.lock();// We lock the resource
    while(readers > 0){}// We wait till everyone finishes read.
    // DO WRITE
    write.unlock();// Release
}

// Multiple reads, no write
void read_fun()
{
    // We wait if it is being written.
    write.lock();
    readers++;
    write.unlock();

    // do read
    readers--;
}

A solution with condition variables could avoid busy waiting for readers to fall to 0, left as an exercise for the reader.
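For reference, one way that exercise might look; this is only a rough sketch, assuming a single std::condition_variable paired with the same mutex and a plain int counter guarded by it:

#include <condition_variable>
#include <mutex>

std::mutex write;
std::condition_variable no_readers;
int readers = 0; // only touched while holding `write`

// One write, no reads.
void write_fun()
{
    std::unique_lock<std::mutex> lock(write);
    no_readers.wait(lock, []{ return readers == 0; }); // sleep instead of spinning
    // DO WRITE (still holding the lock, so no new reader can enter)
}   // lock released here

// Multiple reads, no write.
void read_fun()
{
    {
        std::lock_guard<std::mutex> lock(write); // blocks while a writer holds the mutex
        ++readers;
    }
    // do read (concurrently with other readers)
    {
        std::lock_guard<std::mutex> lock(write);
        if (--readers == 0)
            no_readers.notify_one(); // wake a waiting writer, if any
    }
}

Note that this still lets new readers slip in while the writer is waiting, so the fairness concern raised in the comments below applies here as well.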

Casey
  • I believe that this algorithm is prone to writer starvation. If the readers keep coming back often enough, the writer will never see `readers` drop to zero. The best algorithm I've seen is a 2-gate system designed by Alexander Terekhov, and used in this implementation: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2406.html#shared_mutex_imp It prohibits both reader and writer starvation. – Howard Hinnant Nov 11 '13 at 20:58
  • @HowardHinnant I won't defend this as a robust general solution; the intention was to make the smallest possible change to provide safety to the OP's approach. Writer starvation is not an issue, however - new readers cannot get access once a writer locks the mutex. – Casey Nov 11 '13 at 21:01
  • +1 Thank you, now I see the mistakes in my code. I very much like this answer too. Too bad I can only choose one. – VSZM Nov 11 '13 at 21:36