
Looking at the official .NET client code, I saw lock statements in several places, which raised the question of how much they impact performance.

My current solution is a web app that uses Graylog for logging, with a RabbitMQ queue as its sink. A single critical-path request alone can produce several dozen log entries, and ideally it should complete within 500ms. At peak moments we're expecting to handle 3-5 of those requests plus one to two hundred others per second.

Right now, the connection and the model are basically singletons, and my question is: how worried should I be about those locks when we hit heavy load? Are there known deadlock spots?
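
For reference, here is a minimal sketch of the kind of setup I mean; the class and queue names are illustrative, not our actual code, and it assumes the pre-7.x RabbitMQ.Client API (IModel, CreateModel):

    using System.Text;
    using RabbitMQ.Client;

    // Illustrative only: one shared connection and one shared channel (IModel),
    // with a lock around publishing because an IModel is not safe for concurrent use.
    public class LogPublisher
    {
        private readonly IConnection _connection;
        private readonly IModel _channel;
        private readonly object _publishLock = new object();

        public LogPublisher(string hostName)
        {
            var factory = new ConnectionFactory { HostName = hostName };
            _connection = factory.CreateConnection();   // singleton connection
            _channel = _connection.CreateModel();       // singleton channel/model
            _channel.QueueDeclare("logs", durable: true, exclusive: false, autoDelete: false, arguments: null);
        }

        public void Publish(string message)
        {
            var body = Encoding.UTF8.GetBytes(message);
            lock (_publishLock)   // every request thread serializes here
            {
                _channel.BasicPublish(exchange: "", routingKey: "logs", basicProperties: null, body: body);
            }
        }
    }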

Leonardo

1 Answer


In general, the locks themselves are relatively cheap, as can be read here: How expensive is the lock statement?

Short answer: 50ns
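
If you want to sanity-check that number on your own hardware, a rough micro-benchmark along these lines (uncontended case only; contention is what actually hurts) gives you the order of magnitude:

    using System;
    using System.Diagnostics;

    public static class LockCostBenchmark
    {
        public static void Main()
        {
            const int iterations = 10_000_000;
            var gate = new object();
            long counter = 0;

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                lock (gate)   // uncontended: no other thread competes for 'gate'
                {
                    counter++;
                }
            }
            sw.Stop();

            double nsPerLock = sw.Elapsed.TotalMilliseconds * 1_000_000 / iterations;
            Console.WriteLine($"~{nsPerLock:F1} ns per lock/unlock (counter = {counter})");
        }
    }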

So the actual question is: what part is actually locked and does it matter?

My assumption is that it is the part where a message is published to the queue (although it would help if you elaborated on that).

So, I didn't dive into the code, but since it's purely on the client side, you should be able to scale those clients horizontally without difficulty.

Stefan
  • Horizontal scaling is not my concern... my concern is having the app on a 16-core box, running 200 threads, and having them lock each other out, turning concurrent code into sequential code... of course, opening/closing a connection/model per request alone breaks the 700ms threshold... – Leonardo Nov 26 '18 at 18:04
  • @Leonardo did you benchmark it? If you are afraid that the threads will deadlock each other, make them write to a ConcurrentQueue that is read by Thread201, and make only Thread201 write to RabbitMQ (sketched below, after the comments). – Alexander Pope Nov 26 '18 at 19:25
  • How CPU intensive is the parallel workload? If the work done by one thread can saturate a single core, then spawning 200 threads will massively reduce performance due to context switching. In this case you should spawn at most 16 threads, or let the ThreadPool do it for you. – Alexander Pope Nov 26 '18 at 19:33
  • @AlexanderPope so far the app has been running OK with 200 worker threads and 300 IO threads... when it reaches 150-170, the other metrics trigger auto-scaling and a new box spins up... the problem with the ConcurrentQueue is that, in case of a crash, vital log data might be lost – Leonardo Nov 26 '18 at 20:19
  • @Leonardo: can you elaborate on the lock statement? What exactly does it lock? I know it isn't the message processing. So I gather it would be the publishing, but I wish you would tell us because my guessing skills aren't that high ;-) – Stefan Nov 26 '18 at 20:22
  • And... are you only worrying about deadlocks or also about throughput? – Stefan Nov 26 '18 at 20:43
  • @AlexanderPope there's not a specific one that worries me... it's just that there are quite a few around, and I'm worried about throughput... so far my tests are good... these locks seem to affect only opening connections and creating models... – Leonardo Nov 27 '18 at 12:28
  • @Leonardo I doubt RabbitMQ will be your bottleneck. Regardless, you can easily benchmark it by spawning 200 writer threads and taking it from there. If you notice a bottleneck, then raise a new SO question on how to mitigate it. – Alexander Pope Nov 27 '18 at 13:13
  • 1
    @Leonardo: there is some documentation about tweaking some setting for typical usages: https://www.rabbitmq.com/networking.html – Stefan Nov 27 '18 at 13:42
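
Regarding the ConcurrentQueue suggestion above, here is a hedged sketch of that single-writer pattern (type and queue names are illustrative, and it assumes the pre-7.x RabbitMQ.Client API): request threads only enqueue log lines, while one dedicated thread drains the queue and publishes, so the channel needs no lock at all. As noted in the comments, anything still in memory is lost if the process crashes.

    using System;
    using System.Collections.Concurrent;
    using System.Text;
    using System.Threading;
    using RabbitMQ.Client;

    public sealed class SingleWriterLogSink : IDisposable
    {
        private readonly BlockingCollection<string> _pending =
            new BlockingCollection<string>(new ConcurrentQueue<string>());
        private readonly IConnection _connection;
        private readonly IModel _channel;
        private readonly Thread _writer;

        public SingleWriterLogSink(string hostName)
        {
            var factory = new ConnectionFactory { HostName = hostName };
            _connection = factory.CreateConnection();
            _channel = _connection.CreateModel();
            _channel.QueueDeclare("logs", durable: true, exclusive: false, autoDelete: false, arguments: null);

            _writer = new Thread(DrainLoop) { IsBackground = true, Name = "rabbitmq-log-writer" };
            _writer.Start();
        }

        // Called by the worker threads: they never touch the channel directly.
        public void Enqueue(string message) => _pending.Add(message);

        private void DrainLoop()
        {
            // Only this thread ever uses _channel, so no lock is needed around BasicPublish.
            foreach (var message in _pending.GetConsumingEnumerable())
            {
                var body = Encoding.UTF8.GetBytes(message);
                _channel.BasicPublish(exchange: "", routingKey: "logs", basicProperties: null, body: body);
            }
        }

        public void Dispose()
        {
            _pending.CompleteAdding();   // lets DrainLoop flush the backlog and exit
            _writer.Join();
            _channel.Close();
            _connection.Close();
        }
    }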