Summary
From my studies, I don't remember that a concept such as an "uninterruptible block" exists, and I did not find it with a quick Google search either.
Expected answer
- yes, it does exist, and the proper term for it is ... (in this case, it would be nice if someone could explain to me why it does not exist in Java)
- no, it does not exist, because ...
Definition
By "uninterruptible block", I mean a section of code, in a multi-threading context, which, once starts execution, cannot be interrupted by other threads. I.e., the CPU (or the JVM), won't run any other thread at all, until the "atomic block" is left. Note, that this is not the same as a section marked by lock/mutex/... etc., because such section can not be interrupted only by other threads, which acquire the same lock or mutex. But other threads can still interrupt it.
EDIT, in response to comments: it would also be fine if it affected only the threads of the current process. Re. multiple cores: I would say yes, the other cores should also stop, and we accept the performance hit (or, if it is exclusive only to the current process, the other cores could still run threads of other processes).
Background
First of all, it is clear that, at least in Java, this concept does not exist:
Atomic as in uninterruptible: once the block starts, it can't be interrupted, even by task switching. ...
[this] cannot be guaranteed in Java - it doesn't provide access to the "critical sections" primitives required for uninterruptibility.
However, it would have come in handy in the following case: a system sends a request and receives response A. After receiving the response, it has at most 3 seconds to send request B. Now, if multiple threads are running and doing this, it can happen that, after receiving response A, the thread is preempted and one or more other threads run before the original thread gets the chance to send out request B, so it misses the 3-second deadline. The more threads are running, the bigger the risk that this happens. By marking the "receive A to send B" section "uninterruptible", this could be avoided.
Note that locking this section would not solve the issue. (It would not prevent the JVM from, e.g., scheduling 10 new threads at the "send request A" phase right after our thread received response A.)
EDIT: Re. a global mutex: that would not solve the issue either. Basically, I want the threads to make request A's (and do some other work) simultaneously, but I want them to stop when another thread has received response A and is about to make request B.
Now, I know that this would not be a 100% solution either, because a thread that does not get scheduled right after receiving response A could still miss the deadline. But at least those that do get scheduled would be sure to send out the second request in time.
Some further speculation
The classic concurrency problem a++ could be solved simply by uninterruptible { a++; }, without the need for locks (which can cause deadlock and, in any case, would probably be more expensive in terms of performance than simply executing the three instructions required by a++ with a flag saying that they must not be interrupted).
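To spell out what this speculation refers to (the uninterruptible keyword is of course not real Java): a++ on a shared field is a read, an increment and a write-back, and today those three steps can only be made atomic with a lock or an atomic class:

```java
public class Counter {
    private int a;

    // What a++ on a shared field actually does: three separate steps that
    // another thread can interleave with.
    public void unsafeIncrement() {
        int tmp = a;   // 1. read
        tmp = tmp + 1; // 2. increment
        a = tmp;       // 3. write back
    }

    // The hypothetical syntax from above (not real Java):
    // uninterruptible { a++; }

    // What Java offers today instead: a lock around the same statement.
    public synchronized void lockedIncrement() {
        a++;
    }
}
```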
EDIT re. CAS: of course, that is another solution too. However, it involves retrying until the write succeeds, and it is also slightly more complex to use (at least in Java, we have to use AtomicXXX instead of the primitive types for that).
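For comparison, a minimal sketch of the CAS approach with AtomicInteger; incrementAndGet hides the retry loop, and the explicit compareAndSet version shows where the retrying happens:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger a = new AtomicInteger();

    // The convenient form: the retry loop is hidden inside incrementAndGet.
    public int increment() {
        return a.incrementAndGet();
    }

    // The same operation written out with an explicit compare-and-set loop,
    // showing the "retry until the write succeeds" part.
    public int incrementWithExplicitCas() {
        while (true) {
            int current = a.get();
            int next = current + 1;
            if (a.compareAndSet(current, next)) {
                return next;
            }
            // CAS failed because another thread updated 'a' first; retry.
        }
    }
}
```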
I know, of course, that this could easily be abused by marking large blocks of code as uninterruptible, but that is true of many concurrency primitives as well. (What's more, I also know that my original use case would itself be a kind of "abuse", since I'd be doing I/O in an uninterruptible block; still, it would have been worth at least a try if such a concept existed in Java.)