The OP's question and the comment exchanges appear to contain quite a bit of confusion. I will avoid answering the literal questions and instead try to give an overview.
Why has `java.util.concurrent` become today's recommended practice?
Because it encourages good application coding patterns. The potential performance gain (which may or may not materialize) is a bonus, but even if there is no performance gain, `java.util.concurrent` is still recommended because it helps people write correct code. Code that is fast but flawed has no value.
How does `java.util.concurrent` encourage good coding patterns?
In many ways. I will just list a few.
(Disclaimer: I come from a C# background and do not have comprehensive knowledge of Java's concurrent package; though a lot of similarities exist between the Java and C# counterparts.)
Concurrent data collections simplify code.
- Often, we use locking when we need to access and modify a data structure from different threads.
- A typical operation involves:
- Lock (blocking until the lock is acquired),
- Read and write values,
- Unlock.
- Concurrent data collections simplify this by rolling all these operations into a single method call (see the sketch after this list). The result is:
- Simpler code on the caller's side,
- Possibly better optimized, because the library implementation can use a different (and more efficient) locking or lock-free mechanism than the JVM object monitor,
- Avoids a common race-condition pitfall: time-of-check-to-time-of-use.
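To see the difference, here is a minimal sketch (class and field names are made up for illustration) of the same counter update done with an explicit lock and with `ConcurrentHashMap`, where the lock/read/write/unlock sequence collapses into one atomic call:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCounts {
    // Hand-rolled version: every caller must remember to hold the lock
    // around the whole check-then-update sequence.
    private final Map<String, Integer> plainCounts = new HashMap<>();
    private final Object lock = new Object();

    void incrementWithLock(String word) {
        synchronized (lock) {                                 // lock (blocks until acquired)
            Integer old = plainCounts.get(word);              // read
            plainCounts.put(word, old == null ? 1 : old + 1); // write
        }                                                     // unlock
    }

    // Concurrent-collection version: the read-modify-write is one atomic call,
    // so the caller cannot forget the lock or split the check from the update.
    private final ConcurrentHashMap<String, Integer> concurrentCounts = new ConcurrentHashMap<>();

    void incrementAtomically(String word) {
        concurrentCounts.merge(word, 1, Integer::sum);
    }
}
```

Because `merge` performs the check and the update as one atomic operation, there is no window between reading the old value and writing the new one for another thread to slip into.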
Two broad categories of concurrent data collection classes
There are two flavors of concurrent data collection classes, designed for very different application needs. To benefit from the "good coding patterns", you must know which one to use in each situation.
- Non-blocking concurrent data collections
- These classes guarantee a response (a return from the method call) in a deterministic amount of time, whether the operation succeeds or fails. A call never deadlocks or waits forever.
- Blocking concurrent data collections
- These classes make use of JVM and OS synchronization features to link together data operations with thread control.
- As you have mentioned, they use sleep locks. If a blocking operation on a blocking concurrent data collection cannot be satisfied immediately, the thread requesting it goes to sleep and is woken up when the operation can be satisfied.
There is also a hybrid: blocking concurrent data collections that allow a quick (non-blocking) check of whether the operation might succeed. This quick check can suffer from the time-of-check-to-time-of-use race condition, but used correctly it can be helpful in some algorithms.
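As a rough illustration (the queue contents are invented for the example), compare `ConcurrentLinkedQueue`, `LinkedBlockingQueue`, and the quick-check style of use:

```java
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueFlavors {
    public static void main(String[] args) throws InterruptedException {
        // Non-blocking: poll() returns immediately, with null if the queue is empty.
        Queue<String> nonBlocking = new ConcurrentLinkedQueue<>();
        System.out.println("non-blocking poll: " + nonBlocking.poll()); // prints null

        // Blocking: take() puts the calling thread to sleep until an element is available.
        BlockingQueue<String> blocking = new LinkedBlockingQueue<>();
        blocking.put("job-1");
        System.out.println("blocking take: " + blocking.take()); // would sleep if empty

        // Hybrid-style usage: a quick non-blocking check, then the real operation.
        // The queue may become empty again between isEmpty() and poll()
        // (time-of-check to time-of-use), so null/timeout handling is still required.
        blocking.put("job-2");
        if (!blocking.isEmpty()) {
            String maybe = blocking.poll(100, TimeUnit.MILLISECONDS);
            System.out.println("hybrid poll: " + maybe);
        }
    }
}
```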
Before the `java.util.concurrent` package became available, programmers often had to code their own poor man's alternatives. Very often, those alternatives had hidden bugs.
Besides data collections?
`Callable`, `Future`, and `Executor` are very useful for concurrent processing. One could say that these patterns offer something remarkably different from the imperative programming paradigm.
Instead of specifying the exact order of execution of a number of tasks, the application can now:
- `Callable` allows packaging "units of work" with the data that will be worked on,
- `Future` provides a way for different units of work to express their order dependencies - which work unit must be completed ahead of another, etc.
  - In other words, if two different `Callable` instances don't indicate any order dependencies, then they can potentially be executed simultaneously, if the machine is capable of parallel execution.
- `Executor` specifies the policies (constraints) and strategies for how these units of work will be executed. (A short sketch of all three follows this list.)
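Here is a small sketch of how the three pieces fit together (the `expensiveLoad` helper is a hypothetical stand-in for real work): two independent `Callable`s are handed to an `Executor`, and the dependency of the final step on both results is expressed by waiting on their `Future`s.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WorkUnits {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        // The Executor decides how the units of work run (here: a pool of two threads).
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // Two independent Callables: they declare no order dependency on each other,
        // so the pool is free to run them in parallel.
        Callable<Integer> loadA = () -> expensiveLoad("A");
        Callable<Integer> loadB = () -> expensiveLoad("B");

        Future<Integer> futureA = executor.submit(loadA);
        Future<Integer> futureB = executor.submit(loadB);

        // The dependency of this step on both results is expressed by waiting
        // on the Futures; it cannot proceed until both loads have completed.
        int combined = futureA.get() + futureB.get();
        System.out.println("combined = " + combined);

        executor.shutdown();
    }

    // Hypothetical stand-in for some slow piece of work.
    private static int expensiveLoad(String name) throws InterruptedException {
        Thread.sleep(100);
        return name.length();
    }
}
```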
One big thing reportedly missing from the original `java.util.concurrent` is the ability to schedule a new `Callable` to run upon the successful completion of a `Future` when it is submitted to an `Executor`. There are proposals calling for a `ListenableFuture`.
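For a sense of what that composition looks like, here is a hedged sketch using Guava's `ListenableFuture` (this assumes Guava is on the classpath; the exact `Futures.transform` overloads have varied between Guava versions):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

public class ChainingSketch {
    public static void main(String[] args) throws Exception {
        ListeningExecutorService executor =
                MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(2));

        // First unit of work.
        Callable<String> fetchTask = () -> "raw-data";
        ListenableFuture<String> fetched = executor.submit(fetchTask);

        // A follow-up unit of work scheduled to run when the first one completes
        // successfully, instead of blocking a thread on get() in between.
        ListenableFuture<Integer> parsed =
                Futures.transform(fetched, String::length, executor);

        System.out.println("parsed = " + parsed.get());
        executor.shutdown();
    }
}
```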
(In C#, the similar unit-of-work composability is known as `Task.WhenAll` and `Task.WhenAny`. Together they make it possible to express many well-known multi-threaded execution patterns without having to explicitly create and destroy threads in one's own code.)