10

Does using a lock have better performance than using a local (single application) semaphore?

I read this blog post from MSDN: Producer consumer solution on msdn

and I didn't like their solution to the problem because there are always 20 elements left in the queue.

So instead, I thought about using a 'Semaphore' that will be available only within my app (I just won't give it a name in the constructor), but I don't know how it will affect the app's performance.

Does anyone know whether it will affect performance? What other considerations are there for using a lock rather than a 'Semaphore'?
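In other words, something like the first declaration below rather than the second (illustrative only; the semaphore name string is made up):

using System.Threading;

class Example
{
    // Unnamed semaphore: a purely in-process object, which is what I mean by "local".
    private static readonly Semaphore local = new Semaphore(1, 1);

    // Named semaphore: a kernel object shared across processes under that name.
    private static readonly Semaphore systemWide = new Semaphore(1, 1, "Global\\MyAppSemaphore");
}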

Brenton Scott
Adibe7

5 Answers

15

lock(obj) { ... } is essentially Monitor.Enter(obj) paired with Monitor.Exit(obj) in a finally block. A lock is basically a binary semaphore. If you have a number of instances (N) of the same resource, you use a semaphore with the initialization value N; a lock is mainly used to ensure that a code section is not executed by two threads at the same time.

So a lock can be implemented using a semaphore with an initialization value of 1. I would guess that Monitor.Enter is faster here, but I have no hard data on that; a test would help. Here is a SO thread that deals with performance.
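As a quick illustration (my own sketch, not from the linked thread), the two methods below guard a critical section in the same way, with SemaphoreSlim(1, 1) playing the role of the binary semaphore:

using System.Threading;

class CriticalSection
{
    private static readonly object _gate = new object();
    private static readonly SemaphoreSlim _binary = new SemaphoreSlim(1, 1); // at most one holder

    public void WithLock()
    {
        lock (_gate)
        {
            // critical section: only one thread at a time
        }
    }

    public void WithSemaphore()
    {
        _binary.Wait();        // acquire the single permit
        try
        {
            // critical section: only one thread at a time
        }
        finally
        {
            _binary.Release(); // always return the permit
        }
    }
}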

For your problem (producer/consumer), a blocking queue would be the solution. I suggest this very good SO thread.
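On .NET 4 and later a blocking queue is already provided by BlockingCollection. A minimal producer/consumer sketch (my own, with made-up sizes) could look like this; unlike the MSDN sample, nothing is left sitting in the queue when the producer finishes:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumerDemo
{
    static void Main()
    {
        // Bounded to 100 items: Add blocks when the collection is full, Take blocks when it is empty.
        using (var queue = new BlockingCollection<int>(boundedCapacity: 100))
        {
            var producer = Task.Run(() =>
            {
                for (int i = 0; i < 1000; i++)
                    queue.Add(i);
                queue.CompleteAdding();                // signal that no more items will arrive
            });

            var consumer = Task.Run(() =>
            {
                // Drains the queue completely and exits once CompleteAdding has been called.
                foreach (int item in queue.GetConsumingEnumerable())
                    Console.WriteLine(item);
            });

            Task.WaitAll(producer, consumer);
        }
    }
}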

Here is another good source of information about Reusable Parallel Data Structures.

schoetbi
  • Please note: a semaphore (at operating-system level) uses the far faster hardware lock, so implementing a lock using a semaphore seems a bit counterproductive. – Offler Jan 11 '13 at 10:52
  • 2
    Lock is in fact different from a semaphore, considering e.g. reentrancy: a lock is reentrant, while a semaphore is not. A semaphore can, however, be released from another thread. – Kędrzu Mar 20 '15 at 13:07
14

TL;DR: I just ran my own benchmark, and in my setup lock runs almost twice as fast as SemaphoreSlim(1).

Specs:

  • .NET Core 2.1.5
  • Windows 10
  • 2 physical cores (4 logical) @2.5 GHz

The test:

I tried running 2, 4 and 6 Tasks in parallel, each of them doing 1M iterations of acquiring the lock, doing a trivial operation and releasing it. The code looks as follows:

// semaphoreSlim1 is a SemaphoreSlim(1) shared by all the tasks; count is a shared counter
await semaphoreSlim1.WaitAsync();
// other case: lock(obj) {...}

if (1 + 1 == 2)
{
    count++;
}

semaphoreSlim1.Release();

Results: for each case, lock ran almost twice as fast as SemaphoreSlim(1) (e.g. 205 ms vs 390 ms with 6 parallel tasks).

Please note that I am not claiming this holds on every other setup.
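For anyone who wants to reproduce something similar, here is a self-contained sketch along the same lines (my own code, not the exact benchmark above; it uses the synchronous Wait rather than WaitAsync, as suggested in the comments below, and the numbers will of course vary by machine):

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class LockVsSemaphoreBenchmark
{
    private static readonly object _lockObj = new object();
    private static readonly SemaphoreSlim _sem = new SemaphoreSlim(1, 1);
    private static long _count;

    static async Task Main()
    {
        Console.WriteLine($"lock:          {await RunAsync(useLock: true)} ms");
        Console.WriteLine($"SemaphoreSlim: {await RunAsync(useLock: false)} ms");
    }

    private static async Task<long> RunAsync(bool useLock, int tasks = 6, int iterations = 1_000_000)
    {
        _count = 0;
        var sw = Stopwatch.StartNew();
        var work = new Task[tasks];
        for (int t = 0; t < tasks; t++)
        {
            work[t] = Task.Run(() =>
            {
                for (int i = 0; i < iterations; i++)
                {
                    if (useLock)
                    {
                        lock (_lockObj) { _count++; }          // Monitor-based critical section
                    }
                    else
                    {
                        _sem.Wait();                           // synchronous acquire
                        _count++;
                        _sem.Release();
                    }
                }
            });
        }
        await Task.WhenAll(work);
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}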

eddyP23
  • Rather one should compare lock vs. SpinLock. For more information look for "PerformanceCharacteristicsOfSyncPrimitives.pdf" here: https://www.microsoft.com/en-us/download/details.aspx?id=12594 – KarloX Jan 17 '19 at 10:12
  • 12
    The `if` will be optimized away, so it's useless. I'd also use regular `Wait` instead of `WaitAsync`, as WaitAsync has the potential for thread switches, which we do not want to measure. – mafu Apr 02 '20 at 04:09
3

In general: if your consumer thread manages to process each data item quickly enough, then the kernel-mode transition will incur a (possibly significant) amount of overhead. In that case, a user-mode wrapper that spins for a while before waiting on the semaphore will avoid some of that overhead.

A monitor (mutual exclusion + condition variable) may or may not implement spinning. The MSDN article's implementation doesn't, so in this case there's no real difference in performance. Either way, you're still going to have to lock in order to dequeue items, unless you're using a lock-free queue.
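To illustrate the spin-then-block idea, here is a simplified sketch of my own (this is not how SemaphoreSlim is actually implemented): a user-mode counter absorbs the uncontended case, and the kernel semaphore is only touched when a thread really has to block.

using System.Threading;

// Simplified hybrid semaphore: _count may go negative, in which case its absolute
// value is the number of threads blocked on the kernel semaphore.
class HybridSemaphore
{
    private int _count;
    private readonly Semaphore _kernel = new Semaphore(0, int.MaxValue);

    public HybridSemaphore(int initialCount) => _count = initialCount;

    public void Wait()
    {
        var spin = new SpinWait();
        // Spin briefly in user mode while the semaphore looks unavailable,
        // hoping a Release arrives before we have to block in the kernel.
        for (int i = 0; i < 64 && Volatile.Read(ref _count) <= 0; i++)
            spin.SpinOnce();

        if (Interlocked.Decrement(ref _count) < 0)
            _kernel.WaitOne();        // kernel-mode transition only under contention
    }

    public void Release()
    {
        if (Interlocked.Increment(ref _count) <= 0)
            _kernel.Release();        // wake exactly one blocked waiter
    }
}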

wj32
3

lock and SemaphoreSlim are entirely different, and I would avoid mixing them. SemaphoreSlim is good if you're using async properly.

If your lock surrounds an await call, remember that the thread which resumes after the await will not necessarily be the same thread that started it. It's likely to be the same synchronization context, but that is a different thing.

Also, if your method returns a Task and contains a lock, then one of the thread-pool threads used to run that task will be blocked until the lock is freed, so you could end up starving arbitrary Tasks elsewhere in your program of the threads they need to run.
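A minimal sketch of the async-friendly pattern (my own example, with hypothetical names): a SemaphoreSlim used as an async gate, where the await inside the guarded section would not even compile inside a lock statement.

using System.Threading;
using System.Threading.Tasks;

class AsyncGateExample
{
    private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    public static async Task UpdateAsync()
    {
        await _gate.WaitAsync();          // asynchronous acquire: no thread is blocked while waiting
        try
        {
            await Task.Delay(100);        // awaiting here is fine; inside lock { } it would be a compile error
        }
        finally
        {
            _gate.Release();              // release on whichever thread the continuation resumed
        }
    }
}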

andrew pate
  • 4
    The C# language reference says explicitly that "You can't use the await operator in the body of a lock statement." https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/lock-statement – Anders Emil Feb 10 '21 at 11:36
  • 1
    @AndersEmil Good point, but you don't need to be using a lock statement to be locking an object. You could be using System.Threading.Monitor ... in which case mixing async and locking together is probably going to end badly; using SemaphoreSlim instead will probably end better. – andrew pate Feb 13 '21 at 16:08
1

The solution in the MSDN article has a bug: you'll miss an event if the producer calls SetEvent twice in quick succession while the consumer is still processing the last item it retrieved from the queue.

Have a look at this article for a different implementation using Monitor instead:

http://wekempf.spaces.live.com/blog/cns!D18C3EC06EA971CF!672.entry
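For reference, here is a minimal sketch of that Monitor-based shape (my own condensation, not the linked article's exact code). Note that the consumer re-checks the queue in a while loop after every Monitor.Wait, which is exactly the point discussed in the comments below:

using System.Collections.Generic;
using System.Threading;

class BlockingQueue<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly object _sync = new object();

    public void Enqueue(T item)
    {
        lock (_sync)
        {
            _queue.Enqueue(item);
            Monitor.Pulse(_sync);         // wake one waiting consumer
        }
    }

    public T Dequeue()
    {
        lock (_sync)
        {
            // Re-check after every wake-up: another consumer may have emptied
            // the queue between the Pulse and us reacquiring the lock.
            while (_queue.Count == 0)
                Monitor.Wait(_sync);

            return _queue.Dequeue();
        }
    }
}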

theburningmonk
  • Thanks for the answer, but your post has a different bug: the check in the consumer that tests whether the queue size is non-zero is problematic. Example: the queue holds one product. Two clients do the check simultaneously and enter the while loop; one locks the monitor and the other one waits. The first one consumes the product (so there are no products left!) and then exits the monitor. Now the second one enters the monitor and tries to dequeue an empty queue: fatal error. – Adibe7 Aug 15 '10 at 22:52
  • Actually, when the second consumer gets woken up in the inner while loop and reacquires the lock, it continues the while loop and checks whether the queue is empty before it decides to dequeue, so you won't get the error you just mentioned. However, as the author did mention, it's only a partial implementation and it's not doing anything with the dequeued item. – theburningmonk Aug 16 '10 at 07:06