2

I suppose this is more a general question about the ordering of local object construction.

In short, is this safe?:

#include <mutex>

std::mutex mutex;   // shared between func1 and func2

void func1()
{
    std::lock_guard< std::mutex > lock( mutex );
    //do some stuff in locked context
}

void func2()
{
    func1();
    std::lock_guard< std::mutex > lock( mutex );
    //do some stuff in locked context
}

I am somewhat concerned that the compiler may call the constructor of the `lock_guard` before calling `func1` from within `func2`, thus causing a deadlock.

Is it guaranteed that this is safe, or do I need to do something like this:

void func1()
{
    std::lock_guard< std::mutex > lock( mutex );
    //do some stuff in locked context
}

void func2()
{
    func1();

    { //lock
        std::lock_guard< std::mutex > lock( mutex );
        //do some stuff in locked context
    } //unlock
}
Nathan Owen
  • What is the basis for your concerns? – Kerrek SB Jan 30 '18 at 01:06
  • Based only on experience, and not verbiage in the standard, I think it is completely safe. – ttemple Jan 30 '18 at 01:10
  • In the absence of loops, execution is pure top to bottom. That includes object construction. In both snippets, `func1` will always be called before the object `lock` is constructed. In other words, it is safe. In both cases. – Some programmer dude Jan 30 '18 at 01:10
  • I am not an expert on how the compiler/optimizer operates, so I am not sure whether the compiler will first construct all of the objects within the function and then start executing it, or whether it will perform the construction in the order I have written the code. Obviously function calls happen in the order I placed them; however, I could see the compiler/optimizer moving the construction of local objects to the top of the function in some cases. I don't suspect it does, as many people would likely have trouble with this if it did; I just want to be sure this won't happen. – Nathan Owen Jan 30 '18 at 01:17
  • Thanks Some programmer dude. That is what I believe, just wanted to be sure. Could someone point me to where this is specified in the standard? – Nathan Owen Jan 30 '18 at 01:20
  • Handy reading: [What exactly is the “as-if” rule?](https://stackoverflow.com/questions/15718262/what-exactly-is-the-as-if-rule). In this case the behaviour resulting from the compiler moving the lock is very, very visible. – user4581301 Jan 30 '18 at 01:25
  • If you're not an expert, you should generally not be thinking about the compiler at all. Just follow the rules of the language and trust that your tools are generally functional and effective. – Kerrek SB Jan 30 '18 at 01:26

2 Answers

5

Those things that you're describing (the function call and the instantiation of the lock) are known as full-expressions in the standard.

As per C++11 1.9 Program execution /14 (same location and text in C++14, same text in C++17 4.6 Program execution /16):

Every value computation and side effect associated with a full-expression is sequenced before every value computation and side effect associated with the next full-expression to be evaluated.

There are cases where seemingly sequential operations can be indeterminately sequenced, but this is not one of them.
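For contrast, a minimal sketch (with purely illustrative function names) of one case where the relative order is not fixed: the argument expressions of a single call are all part of one full-expression, and the order in which they are evaluated relative to each other is unspecified.

#include <iostream>

int first()  { std::cout << "first\n";  return 1; }
int second() { std::cout << "second\n"; return 2; }

int add( int a, int b ) { return a + b; }

int main()
{
    // Unlike two separate statements, which are always sequenced one after
    // the other, first() and second() here may be evaluated in either order.
    std::cout << add( first(), second() ) << '\n';
}

Your two statements, by contrast, are separate full-expressions, so the call to func1 is always complete before the lock_guard is constructed.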


As an aside, if you're worried about the possibility that a single thread of execution may attempt to re-acquire the same mutex twice, that's a situation where you may find `std::recursive_mutex` coming in handy.
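A minimal sketch of that approach, assuming the same sort of shared mutex as in the question (the names here are illustrative, not from the original code):

#include <mutex>

std::recursive_mutex mutex;   // recursive mutex shared by both functions

void func1()
{
    // The same thread may lock a std::recursive_mutex again without deadlocking.
    std::lock_guard< std::recursive_mutex > lock( mutex );
    //do some stuff in locked context
}

void func2()
{
    std::lock_guard< std::recursive_mutex > lock( mutex );
    func1();   // re-acquires the mutex; fine here, but a deadlock with a plain std::mutex
    //do some stuff in locked context
}

The mutex is only released once the owning thread has unlocked it as many times as it locked it.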


As a further aside, regarding your comment asking about C++98 and C++03: threads were only introduced in C++11. Before then, C++ still used the concept of sequence points, as per C.

In C++98 1.9 Program execution /16 and C++03 1.9 Program execution /16, you'll find similar wording:

There is a sequence point at the completion of evaluation of each full-expression.

paxdiablo
  • Thanks paxdiablo, that is the answer I was looking for. – Nathan Owen Jan 30 '18 at 01:58
  • btw, hypothetically, if I were actually using C++98 and implementing my own lock_guards, I would assume this applies to C++98 as well. – Nathan Owen Jan 30 '18 at 02:02
  • @Nathan, threads were only introduced into C++11. Before then, it used the same wording as C, a la 'sequence points'. I've updated the answer to include this, though I'm not sure if I could confidently aver to your sanity in adding stuff like that to the rather ancient implementations :-) – paxdiablo Jan 30 '18 at 02:56
  • @paxdiablo I just wonder what _"is sequenced"_ means. Consider `a++; b++;` where `a` and `b` are `int`s. My opinion is that both compiler and CPU can reorder corresponding `inc` instructions such that `b` is incremented first (if it maintains the effect of the whole program). – Daniel Langr Jan 30 '18 at 07:56
  • @Daniel, those are full expressions and must be sequenced as such, and the standard goes into *painful* detail as to what sequenced means :-) Consider `a++; a *= 2;`. The C++ "vm" is not allowed to resequence. Whether the underlying hardware does is not within the scope of the standard, as long as the effect is what you'd expect. – paxdiablo Jan 30 '18 at 10:52
  • @paxdiablo Thanks for the additional information. I think the sanity ship left port a long time ago. Pushing to get C++17 by Q3 2018 will require a fair bit of work by my team to switch toolchains. Our chip vendor did not support C++11 until last year and we have not yet had time. Besides, I never shy away from a little template programming :-) – Nathan Owen Jan 30 '18 at 16:59
0

Just some additional notes to @paxdiablo's answer. Generally, according to the C++ Standard, execution of the resulting program on a particular architecture must have the same observable effect as what is written in the source code. Due to optimizations, both the compiler and the CPU are allowed to generate and execute instructions in a different order than the order written in the source code.

However, when multithreaded synchronization constructs are used (such as mutexes and atomic memory operations), such reordering must not be observable across them (unless explicitly relaxed by the programmer). Otherwise, multithreaded programming would not be feasible at all.

The compiler knows that it must not reorder instructions across such constructs when it sees them in the code. At the CPU level, memory barriers serve the same purpose.

Therefore, if you lock a mutex, you can be sure that the code written before that point has already completed. This is a general multithreading concept, so I strongly believe it is valid for C++ as well. Please correct me if I am wrong.
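As a rough illustration of that visibility guarantee (the variable and function names below are just for the sketch), whatever one thread writes before unlocking a mutex is guaranteed to be visible to another thread once it has locked the same mutex:

#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
int shared_value = 0;   // protected by m
bool ready = false;     // protected by m

void producer()
{
    std::lock_guard< std::mutex > lock( m );
    shared_value = 42;   // completed before the unlock at the end of this scope
    ready = true;
}

void consumer()
{
    // Simple busy-wait, just for illustration.
    for (;;)
    {
        std::lock_guard< std::mutex > lock( m );
        if ( ready )
        {
            // Locking m synchronizes with the producer's unlock,
            // so shared_value is guaranteed to be 42 here.
            std::cout << shared_value << '\n';
            return;
        }
    }
}

int main()
{
    std::thread t1( producer );
    std::thread t2( consumer );
    t1.join();
    t2.join();
}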

Daniel Langr