
I just want to know why non-blocking concurrency is better than blocking concurrency. In blocking concurrency, a thread must wait until another thread completes its execution, so the waiting thread does not consume CPU in that case.

But in non-blocking concurrency, threads do not wait to acquire a lock; they return immediately if another thread is holding the lock.

For example, in the ConcurrentHashMap class, the put() method calls tryLock() in a loop. The other thread stays active and continuously checks whether the lock has been released, because tryLock() is non-blocking. I assume that in this case CPU is used unnecessarily.

So isn't it better to suspend the thread until the other thread completes its execution, and wake it up when the work is finished?
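
To make the contrast concrete, here is a minimal sketch of the two styles being compared, using ReentrantLock (the class name and loop shape are illustrative, not the actual ConcurrentHashMap source):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockStyles {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;

    // Blocking style: lock() parks the thread; the scheduler gives it
    // no CPU time until the lock becomes available.
    static void blockingIncrement() {
        lock.lock();
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    // Non-blocking style: tryLock() returns immediately, so the thread
    // spins, staying runnable and burning CPU while it waits --
    // the pattern the question describes.
    static void spinningIncrement() {
        while (!lock.tryLock()) {
            // busy-wait: keep retrying until the lock is free
        }
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(LockStyles::blockingIncrement);
        Thread b = new Thread(LockStyles::spinningIncrement);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter); // 2
    }
}
```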

user31601
sdindiver
  • It depends if you want (for instance) your whole program to be blocked because one thing isn't finished yet. – khelwood Jun 06 '18 at 09:45
  • ConcurrentHashMap does not lock the whole collection, so it is possible for 2+ threads to work with this type of collection; it improves performance in comparison with a synchronized HashMap, for instance – Dmitri Gudkov Jun 06 '18 at 09:46

2 Answers


Whether blocking or non-blocking concurrency is better depends on how long you expect to wait to acquire the resource you're waiting on.

With a blocking wait (i.e. a mutex lock, in C parlance), the operating system kernel puts the waiting thread to sleep. The CPU scheduler will not allocate any time to it until after the required resource has become available. The advantage here is that, as you said, this thread won't consume any CPU resources while it is sleeping.

There is a disadvantage, however: the process of putting the thread to sleep, determining when it should be woken, and waking it up again is complex and expensive, and may negate the savings achieved by not having the thread consume CPU while waiting. In addition (and possibly because of this), the OS may choose not to wake the thread immediately once the resource becomes available, so the lock may be waited on longer than is necessary.

A non-blocking wait (also known as a spinlock) does consume CPU resource while waiting, but saves the expense of putting the thread to sleep, determining when it should be woken, and waking it. It also may be able to respond faster once the lock becomes free, as it is less at the whim of the OS in terms of when it can proceed with execution.

So, as a very general rule, you should prefer a spinlock if you expect to only wait a short time (e.g. the few CPU cycles it might take for another thread to finish with an entry in ConcurrentHashMap). For longer waits (e.g. on synchronized I/O, or a number of threads waiting on a single complex computation), a mutex (blocking wait) may be preferable.
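One common compromise between the two is to spin briefly in the hope that the lock frees within a few cycles, and fall back to a blocking acquire for longer waits. A minimal sketch (the class name and SPIN_LIMIT constant are illustrative; real implementations such as adaptive mutexes tune this dynamically):

```java
import java.util.concurrent.locks.ReentrantLock;

public class SpinThenBlock {
    private static final int SPIN_LIMIT = 1000; // illustrative bound

    // Fast path: spin with tryLock() for a short while.
    // Slow path: let the OS park this thread with a blocking lock().
    static void acquire(ReentrantLock lock) {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (lock.tryLock()) {
                return; // acquired while spinning, no sleep/wake overhead
            }
        }
        lock.lock(); // long wait expected: block instead of burning CPU
    }

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        acquire(lock);
        System.out.println(lock.isHeldByCurrentThread()); // true
        lock.unlock();
    }
}
```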

user31601
  • The phrase "non-blocking wait" makes no sense. "Waiting" and "blocked" both mean the same thing. A thread that is waiting for a spin-lock is "blocked" because it can not make any progress until some other thread releases the lock. You can not call an algorithm that uses a spin-lock "non-blocking" because if you suspend or terminate a thread that has acquired the lock, that will prevent other threads that need to acquire the lock from making any progress. https://en.wikipedia.org/wiki/Non-blocking_algorithm – Solomon Slow Jun 06 '18 at 13:37
  • @jameslarge I agree, and I was nervous about using these terms. However, I decided to continue with the terminology used by the OP as I believe it is clear from the question what they are alluding to. – user31601 Jun 06 '18 at 13:52
  • I don't have time to understand the original question. I don't know what ConcurrentHashMap.java source code the OP is asking about. The OpenJDK version does not appear to use tryLock(). – Solomon Slow Jun 06 '18 at 14:16
  • Spin locks, as you say, can be efficient on a multi-CPU system if no thread ever holds the lock for more than a very short time; but if there is still any single-CPU platform out there, then you most definitely would NOT want to use spin-locking on it. I generally trust that the platform implementation of `ReentrantLock` will do something that is reasonably efficient for most patterns of use. I would only implement some kind of custom locking if actual performance measurements proved that I could do better. – Solomon Slow Jun 06 '18 at 14:25
  • Fair enough, but that's not what the OP asked about. They asked why would you _ever_ perform a non-blocking `tryLock()` in a tight loop; I proposed an explanation. Out of interest, why would you not use spinlocks on single CPU systems? Most systems have far more running threads & processes than cores, so a spinlock is always going to be occupying scheduler time-slices that could have been allocated to other threads. How does 1 CPU core change that? – user31601 Jun 06 '18 at 15:13
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/172610/discussion-between-user31601-and-james-large). – user31601 Jun 06 '18 at 15:24
  • 1
    @user31601 So should I assume, my question must be **advantage of spin lock over synchronized** rather than **why non blocking concurrency is better than blocking concurrency**? Because as per above discussion it is clear spin lock is using non blocking data structure (tryLock()) but because it is used in tightly loop, so it is behaving as blocking algorithm. and only we want to block for short period of time on multi core processor, So it is preferable to use spin lock than synchronized. – sdindiver Jun 06 '18 at 17:48
  • @sdindiver Yes - that sums things up nicely! – user31601 Jun 07 '18 at 08:56
  • [The JVM may use spin-locks to implement `synchronized`](https://stackoverflow.com/questions/20689718/what-is-adaptive-spinning-w-r-t-lock-acquisition). – Raedwald Dec 20 '18 at 15:19

If you consider ConcurrentHashMap as an example: the overhead you describe, where multiple threads performing update operations (like put) stay active and continuously check whether a lock has been released, is not always going to be the case.

Compared to Hashtable, concurrency control in ConcurrentHashMap is split up, so multiple threads can acquire locks (on segments of the table).

Originally, the ConcurrentHashMap class supported a hard-wired preset concurrency level of 32. This allowed a maximum of 32 put and/or remove operations to proceed concurrently (factors other than synchronization tend to be bottlenecks when more than 32 threads concurrently attempt updates).

Also, successful retrievals (when the key is present) using get(key) and containsKey(key) usually run without locking.

So, for instance, one thread might be in the process of adding an element while others read. What cannot be done with such a locking strategy is a conditional operation like "add an element only if it is not already present" (ConcurrentReaderHashMap provides such facilities).
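For what it's worth, the java.util.concurrent.ConcurrentHashMap that ships with the JDK does expose exactly this conditional operation as putIfAbsent(); a small sketch:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);

        // Atomically add only if the key is not already present.
        Integer prev1 = map.putIfAbsent("a", 99); // key exists: no change, returns 1
        Integer prev2 = map.putIfAbsent("b", 2);  // key absent: inserts, returns null

        System.out.println(prev1 + " " + map.get("a") + " " + prev2 + " " + map.get("b"));
        // prints: 1 1 null 2
    }
}
```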

Also, the size() and isEmpty() methods require accumulation across the 32 control segments, and so might be slightly slower.
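The point that multiple writers can make progress at the same time can be demonstrated with a small sketch (the class name and thread/iteration counts are arbitrary): each thread hammers its own key, and all updates complete without the threads serializing on a single map-wide lock.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentUpdates {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            final int key = t; // each thread updates its own key, so the
                               // updates land in different bins/segments
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    map.merge(key, 1, Integer::sum); // atomic per-key increment
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(map.get(0) + " " + map.size()); // 10000 4
    }
}
```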

JineshEP