Forget about `std::lock_guard` for a while. It's just a convenience (a very useful one, but still just a convenience). The synchronisation primitive is the mutex itself.
Mutex is an abbreviation of MUTual EXclusion. It's a synchronisation primitive which allows one thread to exclude all other threads from accessing whatever the mutex protects. That's usually shared data, but it can be anything (a piece of code, for example).
In your case, you have data which is shared between two threads. To prevent potentially disastrous concurrent access, all accesses to that data must be protected by something. A mutex is a sensible thing to use for this.
So you conceptually bundle your data with a mutex, and whenever any code wants to access (read, modify, write, delete, ...) the data, it must lock the mutex first. Since no more than one thread can ever have a mutex locked at any one time, the data access will be synchronised properly and no race conditions can occur.
With the above, all code accessing the data would look like this:
```cpp
mymutex.lock();
/* do whatever necessary with the shared data */
mymutex.unlock();
```
That is fine, as long as:

- you never forget to correctly match `lock` and `unlock` calls, even in the presence of multiple return paths, and
- the operations done while the mutex is locked do not throw exceptions.
Since the above points are difficult to get right manually (they're a big maintenance burden), there's a way to automate them. That is the `std::lock_guard` convenience we put aside at the start. It's just a simple RAII class which calls `lock()` on the mutex in its constructor and `unlock()` in its destructor. With a lock guard, the code for accessing shared data looks like this:
```cpp
{
    std::lock_guard<std::mutex> g(mymutex);
    /* do whatever necessary with the shared data */
}
```
This guarantees that the mutex will correctly be unlocked when the operation finishes, whether by one of potentially many `return` (or other jump) statements, or by an exception.