From a behavioral standpoint, this appears to be a partial replacement for Java's built-in synchronization (monitor locks). In particular, it appears to provide correct mutual exclusion, which is what most people are after when they use locks.
It also appears to provide the proper memory visibility semantics. The Atomic* family of classes has memory semantics similar to volatile, so releasing one of these "locks" establishes a happens-before relationship with another thread's subsequent acquisition of the "lock", which provides the visibility guarantee that you want.
Where this differs from Java's synchronized blocks is that it does not provide automatic unlocking when an exception is thrown. To get similar semantics with these locks, you'd have to wrap the locking and the guarded code in a try-finally statement:
void a() {
    while (!atomicBoolean.compareAndSet(false, true)) { }
    try {
        // code to execute
    } finally {
        atomicBoolean.set(false);
    }
}
(and similarly for b)
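To make the pattern concrete, here is a minimal, self-contained sketch of that spin-lock idiom guarding a shared counter, with two threads demonstrating mutual exclusion. The class and method names (SpinLockDemo, increment, run) are illustrative, not from your code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin-lock sketch built on compareAndSet, with try/finally
// ensuring the "lock" is released even if the guarded code throws.
public class SpinLockDemo {
    private static final AtomicBoolean lock = new AtomicBoolean(false);
    private static int counter = 0;

    static void increment() {
        // Spin until we flip the flag from false (unlocked) to true (locked).
        while (!lock.compareAndSet(false, true)) { }
        try {
            counter++; // critical section
        } finally {
            lock.set(false); // always release, even on exception
        }
    }

    public static int run() {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return counter; // 200000 if mutual exclusion held
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

If the CAS loop failed to provide mutual exclusion, the two threads' increments would interleave and the final count would fall short of 200000.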
This construct does appear to provide similar behavior to Java's built-in monitor locks, but overall I have a feeling that this effort is misguided. From your comments on another answer it appears that you are interested in avoiding the OS overhead of blocking threads. There is certainly overhead when this occurs. However, Java's built-in locks have been heavily optimized, providing very inexpensive uncontended locking, biased locking, and adaptive spin-looping in the case of short-term contention. The last of these attempts to avoid OS-level blocking in many cases. By implementing your own locks, you give up these optimizations.
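For comparison, the built-in equivalent, which gets all of those JVM optimizations for free, is just a synchronized method (this sketch mirrors the counter example above; the names are illustrative):

```java
// The built-in equivalent: the JVM releases the monitor automatically on
// exception, and applies biased locking and adaptive spinning as needed.
public class MonitorCounter {
    private int counter = 0;

    public synchronized void increment() {
        counter++; // critical section; monitor released automatically
    }

    public synchronized int get() {
        return counter;
    }
}
```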
You should benchmark, of course. If your performance is suffering from OS-level blocking overhead, perhaps your locks are too coarse. Reducing the amount of locking, or splitting locks, is likely a more fruitful way to reduce contention overhead than implementing your own locks.
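As a rough illustration of lock splitting (the class and field names here are hypothetical): instead of one coarse lock guarding several independent pieces of state, give each piece its own lock object, so threads touching different fields no longer contend with each other:

```java
// Lock-splitting sketch: hits and misses are independent, so each gets
// its own lock object rather than sharing one coarse monitor.
public class SplitLocks {
    private final Object hitsLock = new Object();
    private final Object missesLock = new Object();
    private long hits = 0;
    private long misses = 0;

    public void recordHit() {
        synchronized (hitsLock) { hits++; }
    }

    public void recordMiss() {
        synchronized (missesLock) { misses++; }
    }

    public long hits() {
        synchronized (hitsLock) { return hits; }
    }

    public long misses() {
        synchronized (missesLock) { return misses; }
    }
}
```

A thread recording a hit now never blocks a thread recording a miss, which can cut contention substantially when the two counters are updated from different paths.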