
Consider two methods a() and b() that must not execute at the same time. The synchronized keyword can be used to achieve this, as shown below. Can I achieve the same effect using AtomicBoolean, as in the second code block?

final class SynchronizedAB {

synchronized void a(){
   // code to execute
}

synchronized void b(){
  // code to execute
}

}

Attempt to achieve the same effect as above using AtomicBoolean:

final class AtomicAB {

private AtomicBoolean atomicBoolean = new AtomicBoolean();

void a() {
    // spin until the flag flips from false to true, i.e. we acquire it
    while (!atomicBoolean.compareAndSet(false, true)) {
    }
    // code to execute
    atomicBoolean.set(false);
}

void b() {
    // spin until the flag flips from false to true
    while (!atomicBoolean.compareAndSet(false, true)) {
    }
    // code to execute
    atomicBoolean.set(false);
}

 }
newlogic
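As a sanity check (not part of the question itself), the CAS loop above can be exercised by letting two threads bump a plain, non-atomic counter under the "lock"; if mutual exclusion holds, no increments are lost. The class and method names below are illustrative:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: verify that the CAS-based guard gives mutual exclusion.
// Two threads each add to a plain (non-atomic) long; if exclusion holds,
// no increments are lost and the final count is exact.
public class SpinGuard {
    private final AtomicBoolean flag = new AtomicBoolean();
    private long counter = 0;  // deliberately not atomic

    void increment() {
        while (!flag.compareAndSet(false, true)) { }  // spin until acquired
        try {
            counter++;                                 // critical section
        } finally {
            flag.set(false);                           // release
        }
    }

    long count() { return counter; }

    public static void main(String[] args) throws InterruptedException {
        SpinGuard g = new SpinGuard();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) g.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(g.count());  // prints 200000 if exclusion holds
    }
}
```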

3 Answers


No, since synchronized will block, while with the AtomicBoolean you'll be busy-waiting.

Both will ensure that only a single thread will get to execute the block at a time, but do you want to have your CPU spinning on the while block?

Kayaman
  • Yes, I'd rather spin to avoid the cost of OS calls to stop/start threads; in my case the executed code is fast, so it won't be spinning for long. – newlogic Jul 30 '14 at 14:12
  • 2
    @aranhakki The threads aren't stopped/started, the threads will be blocked. There's a significant difference. – Kayaman Jul 30 '14 at 14:13
  • Ok, that's interesting, could you explain the difference? I guess the call doesn't get down to the OS, so the cost is a JVM call rather than an OS call? – newlogic Jul 30 '14 at 14:15
  • 1
    Stopping and starting a thread would mean destroying and creating a new thread, whereas a blocking thread would just make the existing thread wait until it is awoken and scheduled to run again. You should be careful about assuming any performance benefits from having a spinlock. Some java.util.concurrent classes do use that mechanism, so there is a time and place for it, but whether it's the best solution here is uncertain. At least performance test it properly. – Kayaman Jul 30 '14 at 14:21
  • You will have to weigh the cost of context-switching in approach 1 (synchronized) vs the cost of unnecessary spinning in approach 2 (CAS). With CAS, the thread loses the opportunity to do something meaningful and instead spins in the while loop. – Mahesh Oct 18 '16 at 08:04
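If, after weighing those costs, spinning really is the right trade-off here, note that since Java 9 there is Thread.onSpinWait(), a hint to the runtime that the caller is busy-waiting; it can reduce the power and pipeline cost of the spin without blocking. A minimal sketch (the class name is my own):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: a spin lock that at least tells the CPU/JIT it is spinning.
// Thread.onSpinWait() (Java 9+) is a hint only; it may be a no-op.
final class PoliteSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean();

    void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();  // spin-loop hint; never blocks the thread
        }
    }

    void unlock() {
        locked.set(false);
    }

    boolean isLocked() {
        return locked.get();
    }
}
```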

It depends on what you are planning to achieve with the original synchronized version of the code. If synchronized was added in the original code just to ensure that only one thread at a time is inside either the a or b method, then to me both versions of the code look similar.

However, there are a few differences, as mentioned by Kayaman. To add another: with a synchronized block you get a memory barrier, which you may miss with atomic CAS loops. But if the body of the method doesn't need such a barrier, that difference is eliminated too.

Whether an atomic CAS loop performs better than a synchronized block in an individual case, only a performance test can tell; but this is the same technique used in multiple places in the concurrent package to avoid synchronization at the block level.
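The CAS retry pattern this answer alludes to looks roughly like the sketch below: read the current value, compute the update, and retry only if another thread won the race. This mirrors how read-modify-write operations such as AtomicInteger.getAndAdd are commonly implemented (the method name here is illustrative, not the JDK's actual code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a lock-free CAS retry loop, the pattern used throughout
// java.util.concurrent to avoid block-level synchronization.
final class CasLoopExample {
    static int getAndAddSketch(AtomicInteger value, int delta) {
        int current;
        do {
            current = value.get();  // read the current value
        } while (!value.compareAndSet(current, current + delta));  // retry if we lost a race
        return current;             // return the old value, like getAndAdd
    }

    public static void main(String[] args) {
        AtomicInteger n = new AtomicInteger(5);
        int old = getAndAddSketch(n, 3);
        System.out.println(old + " -> " + n.get());  // prints "5 -> 8"
    }
}
```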


From a behavioral standpoint, this appears to be a partial replacement for Java's built-in synchronization (monitor locks). In particular, it appears to provide correct mutual exclusion which is what most people are after when they're using locks.

It also appears to provide the proper memory visibility semantics. The Atomic* family of classes has similar memory semantics to volatile, so releasing one of these "locks" will provide a happens-before relationship to another thread's acquisition of the "lock" which will provide the visibility guarantee that you want.

Where this differs from Java's synchronized blocks is that it does not provide automatic unlocking in the case of exceptions. To get similar semantics with these locks, you'd have to wrap the locking and usage in a try-finally statement:

void a() {
    while (!atomicBoolean.compareAndSet(false, true)) { }
    try {
        // code to execute
    } finally {
        atomicBoolean.set(false);
    }
}

(and similar for b)

This construct does appear to provide similar behavior to Java's built-in monitor locks, but overall I have a feeling that this effort is misguided. From your comments on another answer it appears that you are interested in avoiding the OS overhead of blocking threads. There is certainly overhead when this occurs. However, Java's built-in locks have been heavily optimized, providing very inexpensive uncontended locking, biased locking, and adaptive spin-looping in the case of short-term contention. The last of these attempts to avoid OS-level blocking in many cases. By implementing your own locks, you give up these optimizations.

You should benchmark, of course. If your performance is suffering from OS-level blocking overhead, perhaps your locks are too coarse. Reducing the amount of locking, or splitting locks, might be a more fruitful way to reduce contention overhead than to try to implement your own locks.
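Lock splitting, as suggested above, can be sketched as follows: if a() and b() in fact guard independent state, giving each its own monitor removes the contention between them entirely. The fields and class name here are hypothetical, assuming the two methods touch disjoint data:

```java
// Sketch of lock splitting: two private monitors instead of one coarse
// lock on `this`, so calls to a() no longer contend with calls to b().
final class SplitLockAB {
    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int stateA;
    private int stateB;

    void a() {
        synchronized (lockA) {  // only a() callers contend here
            stateA++;           // code that touches only stateA
        }
    }

    void b() {
        synchronized (lockB) {  // independent of lockA
            stateB++;           // code that touches only stateB
        }
    }

    int aCount() { synchronized (lockA) { return stateA; } }
    int bCount() { synchronized (lockB) { return stateB; } }
}
```

This only helps if the two critical sections really are independent; if a() and b() must exclude each other, as in the original question, a single lock is required.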

Stuart Marks
  • Will not the almost infinite while loop incur CPU or other overhead? – Jus12 Aug 23 '16 at 19:04
  • @Jus12 Yes, this technique of "spin-waiting" or "spin-locking" potentially incurs some overhead. But blocking a thread, scheduling it to be notified, and then waking it up again also incurs overhead as well as latency. The gamble here is that the number of times through a CAS loop is likely to be small, so the overhead is small compared to blocking and awakening. – Stuart Marks Aug 24 '16 at 21:42