7

It is not clear to me how mutexes and locks work.

I have one object (`my_class`), and I add, delete, and read data from that object in the main thread. In my second thread I want to check some data from my object. The problem is that reading data from the second thread can crash the application when I delete the data in the main thread.

Therefore I created `std::lock_guard<std::mutex> lock(mymutex)` inside my second thread.

I wrote a test, and with this lock_guard it never crashes. But I don't know whether I need to use a lock in the main thread too.

The question is: what happens when the second thread locks the mutex and reads the data, and the main thread wants to delete data from the object but takes no lock? And conversely, what happens when the second thread wants to lock the mutex and read data from the object while the main thread is deleting data from the object?
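
For illustration, a stripped-down sketch of what I mean (the member and function names here are simplified placeholders, not my real code):

#include <mutex>
#include <vector>

struct my_class {
    std::vector<int> data;
};

my_class obj;
std::mutex mymutex;

void second_thread_func()
{
    std::lock_guard<std::mutex> lock(mymutex);  // only the second thread locks
    if (!obj.data.empty()) {
        int value = obj.data.back();            // read some data
        (void)value;
    }
}

void main_thread_work()
{
    obj.data.push_back(42);   // add data
    obj.data.clear();         // delete data (no lock here; is that a problem?)
}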

Wanderer
  • `std::lock_guard` is simply a RAII wrapper to lock your mutex and automatically unlock the mutex when it goes out of scope, so that you don't have to do it manually. How and where to use it is a bit too broad. – Ron Jun 14 '18 at 11:43
  • You need both locks – Killzone Kid Jun 14 '18 at 11:43
  • Does [this](https://stackoverflow.com/q/34524/212858) sufficiently answer your question? Your confusion seems to boil down to simply not knowing what a mutex is for in the first place. – Useless Jun 14 '18 at 11:50

2 Answers

27

Forget about std::lock_guard for a while. It's just convenience (a very useful one, but still just convenience). The synchronisation primitive is the mutex itself.

Mutex is an abbreviation of MUTual EXclusion. It's a synchronisation primitive which allows one thread to exclude other threads' access to whatever is protected by the mutex. It's usually shared data, but it can be anything (a piece of code, for example).

In your case, you have data which is shared between two threads. To prevent potentially disastrous concurrent access, all accesses to that data must be protected by something. A mutex is a sensible thing to use for this.

So you conceptually bundle your data with a mutex, and whenever any code wants to access (read, modify, write, delete, ...) the data, it must lock the mutex first. Since no more than one thread can ever have the mutex locked at any one time, the data access will be synchronised properly and no race conditions can occur.

With the above, all code accessing the data would look like this:

mymutex.lock();
/* do whatever necessary with the shared data */
mymutex.unlock();

That is fine, as long as

  1. you never forget to correctly match lock and unlock calls, even in the presence of multiple return paths, and
  2. the operations done while the mutex is locked do not throw exceptions.

Since the above points are difficult to get right manually (they're a big maintenance burden), there's a way to automate them. That is the std::lock_guard convenience we put aside at start. It's just a simple RAII class which calls lock() on the mutex in its constructor and unlock() in its destructor. With a lock guard, the code for accessing shared data will look like this:

{
  std::lock_guard<std::mutex> g(mymutex);
  /* do whatever necessary with the shared data */
}

This guarantees that the mutex will correctly be unlocked when the operation finishes, whether by one of potentially many return (or other jump) statements, or by an exception.
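
To tie this back to the two-thread scenario in the question, here is a minimal sketch (the class and member names are just placeholders); note that *both* threads lock the *same* mutex:

#include <mutex>
#include <thread>
#include <vector>

struct my_class {
    std::vector<int> data;
};

my_class obj;
std::mutex mymutex;

void reader()   // the second thread
{
    for (int i = 0; i < 1000; ++i) {
        std::lock_guard<std::mutex> g(mymutex);   // lock before reading
        if (!obj.data.empty()) {
            int value = obj.data.back();
            (void)value;
        }
    }
}

void writer()   // the main thread
{
    for (int i = 0; i < 1000; ++i) {
        std::lock_guard<std::mutex> g(mymutex);   // lock before adding/deleting too
        obj.data.push_back(i);
        if (obj.data.size() > 10)
            obj.data.clear();
    }
}

int main()
{
    std::thread t(reader);
    writer();
    t.join();
}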

Angew is no longer proud of SO
  • Note: In your case the "shared data" includes the pointer to your object. You need to aquire the mutex before deleting the object and set the pointer to nullptr. And in the thread aquire the mutex and check that the object != nullptr before you use it. – Goswin von Brederlow Jun 14 '18 at 12:08
  • If I understand, mutex must be on both threads, which working with same object, right? Now I extend my question because I forgot to ask, what happens when mutex is locked on one thread and second try lock? It will be waiting for unlocking? If yes, how can I prevent to not waiting and continue in code (of course continued code skip the code which is bundled in mutex lock and unlock)? – Wanderer Jun 14 '18 at 13:24
  • @Wanderer Yes, the mutex must be locked on both threads. Yes, `lock` blocks until the mutex becomes available. There is also `try_lock` on the mutex, which allows you to do something else when the mutex is not available for locking (see the sketch after these comments). – Angew is no longer proud of SO Jun 14 '18 at 13:27
  • Re, "it can be anything (a piece of code, for example)." Code is virtually never mutable these days. If it's not mutable, then there's no need to protect it. The only reason to keep multiple threads from calling the same routine at the same time is to prevent them from accessing whatever _data_ that routine operates on at the same time. – Solomon Slow Jun 14 '18 at 13:46
  • @jameslarge I didn't mean *mutating* code, of course. I meant *executing* it. And while shared data is by far the most common reason for mutual exclusion, there could be others too (e.g. shared physical resource). – Angew is no longer proud of SO Jun 14 '18 at 13:53
  • @Angew, sure, data _or other resources_. My point was that the code is never the thing that needs protection. If you say that the `foobar()` function needs to be called in a mutex, that may be a useful rule for somebody to know who's calling your API, but it hides what the mutual exclusion actually is protecting. – Solomon Slow Jun 14 '18 at 13:58
  • @Wanderer, re `try_lock`. IMO that's an advanced topic. If you've got a problem to solve, the solution that uses `try_lock` _might_ perform slightly better than some other solution that doesn't use it, but it will be harder to understand, harder to maintain, harder to explain, etc. – Solomon Slow Jun 14 '18 at 14:00
  • Just to echo what Solomon said, if your thread can't get a lock on some data, it probably SHOULD wait until it can. If it moves on to do something else, there's too much unrelated stuff going on in that thread. Threads, like functions, should generally be dedicated to a single task. And on the other side of the coin, locks should be short-lived. Don't lock an entire function that does 20 accesses to a shared variable. Do the bulk of the work using local copies, then lock, update, and release. That way, the wait for a lock will always be short, and waiting will always be the right choice. – FeRD Dec 27 '20 at 20:24
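
To illustrate the `try_lock` alternative mentioned in the comments above, here is a minimal sketch (the function name is made up; `std::unique_lock` with `std::try_to_lock` is the RAII way to attempt the lock without blocking):

#include <mutex>

std::mutex mymutex;

void poll_shared_data()
{
    std::unique_lock<std::mutex> g(mymutex, std::try_to_lock);  // does not block
    if (g.owns_lock()) {
        /* the mutex is ours: access the shared data */
    } else {
        /* the mutex was busy: skip the protected work and do something else */
    }
}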
3

`std::lock_guard<std::mutex>` is a shortcut, as mentioned above, but it is crucial for concurrent control flows, which you always have whenever a mutex makes sense at all!

If the protected block raises an exception that is not handled inside the block itself, the fragile pattern

mymutex.lock();
/* do anything but raise an exception here! */
mymutex.unlock();

will not unlock the mutex, and some other control flow waiting for the mutex might be stuck in a deadlock.

The robust pattern

{
    std::lock_guard<std::mutex> guard(mymutex);
    /* do anything here! */
}

will always unlock mymutex when the block is left.

The other relevant use case is synchronized access to some attribute:

int getAttribute()
{
    std::lock_guard<std::mutex> guard(mymutex);
    return attribute;
}

Here, without lock_guard, you would need to assign the return value to some other variable before you could unlock the mutex, which is two more steps and, again, does not handle exceptions.
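
For comparison, the manual version would look roughly like this (a sketch, assuming the same mymutex and attribute members as above; the function name is made up):

int getAttributeManually()
{
    mymutex.lock();
    int result = attribute;   // copy while holding the lock
    mymutex.unlock();
    return result;            // the extra temporary and explicit unlock that lock_guard avoids
}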

Sam Ginrich