
I've been facing a few race conditions in my code lately and I'm wondering if secondary/tertiary locks actually add any value. They seem very redundant, and this SO post seems to align with my thought process on it by stating:

In order to prevent race conditions from occurring, you would typically put a lock around the shared data to ensure only one thread can access the data at a time. This would mean something like this:

obtain lock for x ... release lock for x

Given this simple example to remove empty queues from a collection:

Dictionary<Guid, Queue<int>> _queues = new Dictionary<Guid, Queue<int>>();

...

lock (_queues) {
    while (_queues.Any(a => a.Value.Count == 0)) {
        Guid id = _queues.First(f => f.Value.Count == 0).Key;
        if (_queues.ContainsKey(id))
            lock (_queues)
                _queues.Remove(id);
    }
}

Does the second lock provide any value?

Hazel へいぜる
    I would say no. If one thread enters the first lock, no other thread will be able to enter that code block, so this will not help you solve your race condition. You could try changing it to a `ConcurrentQueue` instead. – Michael Jan 21 '21 at 20:35

3 Answers


In the code that you posted, no, the second lock does not add any value: it is impossible to reach that inner lock without first entering the outer one, at which point the monitor is already held.

It is worth noting that it doesn't hurt to have it there, though, as C# locks are reentrant.

Where the second lock does add value is in code where it is not clear that the first lock will always be obtained. For example:

void DoWork1()
{
    lock (_queues)
    {
        //do stuff...
        if (condition)
            DoWork2(); // still holding the lock
    }
}

void DoWork2()
{
    // Reentrant: when called from DoWork1 on the same thread, this
    // lock is acquired immediately instead of deadlocking. When called
    // from anywhere else, it provides the necessary mutual exclusion.
    lock (_queues)
    {
        //do stuff...
    }
}
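To see why the reentrancy guarantee matters, here is a minimal sketch (names are illustrative, not from the question) contrasting `lock` with the non-reentrant `SemaphoreSlim`, which would hang in the same nested pattern:

```csharp
using System;
using System.Threading;

class ReentrancyDemo
{
    static readonly object _gate = new object();
    static readonly SemaphoreSlim _sem = new SemaphoreSlim(1, 1);

    static void Main()
    {
        // Monitor (the lock statement) is reentrant: the same thread
        // can nest acquisitions of the same object freely.
        lock (_gate)
        {
            lock (_gate)
            {
                Console.WriteLine(Monitor.IsEntered(_gate)); // True
            }
        }

        // SemaphoreSlim is not reentrant: a nested Wait() on the same
        // thread would block forever. Probe with Wait(0) to see that
        // a second acquisition is refused rather than granted.
        _sem.Wait();
        bool acquiredAgain = _sem.Wait(0);
        Console.WriteLine(acquiredAgain); // False
        _sem.Release();
    }
}
```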
mnistic

From the language specification:

While a mutual-exclusion lock is held, code executing in the same execution thread can also obtain and release the lock. However, code executing in other threads is blocked from obtaining the lock until the lock is released.

So the inner lock is allowed, but has no practical effect since the code is already inside a critical section.

To avoid deadlocks I would recommend being very restrictive about what code you call while holding a lock. Calling into framework collections is probably fine, but an arbitrary method might try to take another lock. This is fairly easy to do with events: you raise the event, that does some other thing, and 20 calls later something tries to take a lock. Using Task.Result or Task.Wait is another potential cause of deadlocks.
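The classic deadlock behind that advice is two threads taking the same two locks in opposite order. A minimal sketch (types and method names are illustrative, not from the question) of the hazard and the standard fix of a fixed global acquisition order:

```csharp
using System;
using System.Threading;

class LockOrderingDemo
{
    static readonly object _lockA = new object();
    static readonly object _lockB = new object();

    // Deadlock-prone variant (NOT what this code does): if Transfer
    // took A then B while Audit took B then A, each thread could end
    // up holding one lock while waiting forever for the other.
    // Fix: every code path acquires the locks in the same order.
    static void Transfer()
    {
        lock (_lockA)
        lock (_lockB)
        {
            // ...mutate both resources...
        }
    }

    static void Audit()
    {
        lock (_lockA)   // same order as Transfer, never B then A
        lock (_lockB)
        {
            // ...read both resources...
        }
    }

    static void Main()
    {
        var t1 = new Thread(Transfer);
        var t2 = new Thread(Audit);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine("no deadlock");
    }
}
```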

Instead of locks it is sometimes possible to use concurrent collections. Another approach is to use an exclusive task scheduler to ensure some resource is never used by more than one thread.
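As a sketch of the concurrent-collection approach applied to the question's scenario (names are mine; note one remaining benign race: a queue could receive an item between the emptiness check and the removal, so this suits cleanup where that is acceptable):

```csharp
using System;
using System.Collections.Concurrent;

class ConcurrentQueuesDemo
{
    // Concurrent variant of the question's dictionary of queues;
    // no explicit lock statements are needed by the caller.
    static readonly ConcurrentDictionary<Guid, ConcurrentQueue<int>> _queues
        = new ConcurrentDictionary<Guid, ConcurrentQueue<int>>();

    static void RemoveEmptyQueues()
    {
        // Enumerating a ConcurrentDictionary while removing is safe.
        foreach (var pair in _queues)
        {
            if (pair.Value.IsEmpty)
            {
                // TryRemove is atomic with respect to other writers.
                _queues.TryRemove(pair.Key, out _);
            }
        }
    }

    static void Main()
    {
        var emptyId = Guid.NewGuid();
        var fullId = Guid.NewGuid();
        _queues[emptyId] = new ConcurrentQueue<int>();
        _queues[fullId] = new ConcurrentQueue<int>(new[] { 1, 2, 3 });

        RemoveEmptyQueues();
        Console.WriteLine(_queues.Count);               // 1
        Console.WriteLine(_queues.ContainsKey(fullId)); // True
    }
}
```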

JonasH

Does the second lock provide any value?

The answer is no, and your intuition is correct, but let's rationalize why.

A lock statement works by calling Monitor.Enter and Monitor.Exit under the hood.

  • When you take a lock, the CLR atomically marks the object's header (or its link to the sync table) with a thread ID and a counter.

  • If another thread tries to lock the same object while that thread ID is present:

    1. It first performs a small spin wait (thin lock) to await the release efficiently, without a context switch.
    2. Failing that, it takes the more aggressive approach of promoting the lock to an operating-system event and waiting on that handle (fat lock).
  • Each time the same thread locks the same object again, it (in essence) just increments the counter in the object header (or sync block) and continues unabated, until it exits the lock (Monitor.Exit), at which point the counter is decremented. So on and so forth until the counter reaches zero.

So... do two nested locks on the same object, in the same body of single-threaded code, achieve anything? The answer is no, apart from incrementing the lock counter (at a small cost).
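A rough sketch of what a `lock` block lowers to (assuming the modern `Monitor.Enter(obj, ref lockTaken)` pattern the compiler emits), with a nested Enter added to show the counter at work:

```csharp
using System;
using System.Threading;

class LockLoweringDemo
{
    static readonly object _sync = new object();

    static void Main()
    {
        // Roughly what `lock (_sync) { ... }` compiles to:
        bool lockTaken = false;
        try
        {
            Monitor.Enter(_sync, ref lockTaken);

            // Re-entering on the same thread just bumps the count...
            Monitor.Enter(_sync);
            Console.WriteLine(Monitor.IsEntered(_sync)); // True
            Monitor.Exit(_sync); // ...and exiting decrements it.

            // Still held: the outer acquisition keeps the count above zero.
            Console.WriteLine(Monitor.IsEntered(_sync)); // True
        }
        finally
        {
            if (lockTaken) Monitor.Exit(_sync);
        }
        Console.WriteLine(Monitor.IsEntered(_sync)); // False
    }
}
```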

So you might ask: what is the use of all this counting? It exists for more complicated reentrant code scenarios, or where branching may redirect the code into another lock over the same object. In those cases the same thread will not block on a lock it already holds, which averts a very real deadlock situation.

Note: some details of how internal locking works are specific to the CLR implementation and operating system; however, the ECMA specification guarantees consistent behavior for these synchronization primitives in regard to reentrancy, structured/emitted code, and JIT reordering.


Additional resources

For all the gory details, see my answer here:

Why Do Locks Require Instances In C#?

halfer
TheGeneral