At some point threads will contend for the monitor, and one thread should win. Does Java use the atomic CAS operations built into the CPU to acquire these monitors? If not, how does this work?
-
That's a VM implementation detail. – Jon Skeet Oct 08 '13 at 11:47
-
You don't need to implement that. – Mostafa Jamareh Oct 08 '13 at 11:47
-
Why does it matter? synchronized delivers its concurrency guarantees using whatever platform-specific mechanism is needed to meet its goals; on systems that don't support concurrency at all, or when Java can prove a method is never accessed concurrently, it could even be optimized into a no-op. How exactly it's implemented is an implementation detail you don't need to worry about; if your Java implementation doesn't deliver that guarantee, it's either a bug or (rarely) a documented deviation from the standard. – Lie Ryan Oct 08 '13 at 11:50
-
Yes, but I'm interested in how this works. The reason is that synchronized blocks are preferred over Java CAS operations when contention is high, since CAS uses more CPU time under high contention. However, I believe Java synchronized must use a CAS operation at the CPU level in order to correctly acquire locks; if that is true, it would create the same problem as using Java CAS operations instead of Java synchronization. – newlogic Oct 08 '13 at 11:52
-
I believe this is an interesting question when considering performance decisions. – newlogic Oct 08 '13 at 12:07
-
possible duplicate of [How synchronized keyword in java have been implemented?](http://stackoverflow.com/questions/12365127/how-synchronized-keyword-in-java-have-been-implemented) – Raedwald Oct 08 '13 at 13:31
-
@user1037729 It isn't interesting, because the implementation changes between JVMs, so a micro-tuning choice you make for one version might be completely wrong for the next. E.g. Java 7 doesn't support Hardware Transactional Memory and Java 8/9 might; that would change its behaviour significantly. – Peter Lawrey Oct 08 '13 at 13:43
2 Answers
I don't think so, since in the java.util.concurrent.atomic package you can find the Atomic* classes, which use CAS internally.
Another thing is that it depends on what kind of JVM you use. So in its current form your question is not really answerable, apart from telling you that CAS is used elsewhere.
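As a rough illustration of what those Atomic* classes do internally, here is a minimal sketch of the CAS retry loop that something like AtomicInteger.incrementAndGet() boils down to (the CasCounter class and increment method are made-up names for this example):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger count = new AtomicInteger();

    // Lock-free increment: read the current value, compute the new one,
    // and retry if another thread changed the value in between.
    public int increment() {
        while (true) {
            int current = count.get();
            int next = current + 1;
            if (count.compareAndSet(current, next)) {
                return next; // our CAS won
            }
            // CAS lost the race: another thread updated the value first,
            // so loop and try again with the fresh value.
        }
    }
}
```

compareAndSet is where the hardware compare-and-swap instruction (e.g. lock cmpxchg on x86) ultimately gets used.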

CAS is what makes all concurrency work at the hardware level. If you want to change one value in memory across all threads, CAS is the fastest way to do it; any other technique is going to use CAS as well. So for quick changes, CAS is the way to go. But if you have 100, or even 5, values to change, you're likely better off using synchronization: it does one CAS to lock the monitor and another to unlock it, but the rest is normal memory reads and writes, which are much faster than CAS. Of course, you do hold the monitor, which may block other threads, slowing your program and possibly wasting CPU.
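For example, here is a sketch of the "several values under one lock" case (the Bounds class below is hypothetical): one CAS-backed monitor acquisition covers both writes, and the writes themselves are plain stores.

```java
public class Bounds {
    private int min;
    private int max;

    // One monitor acquisition protects both fields; the assignments
    // inside the block are ordinary writes, not separate atomic ops.
    public synchronized void update(int newMin, int newMax) {
        min = newMin;
        max = newMax;
    }

    // Readers take the same monitor, so they never see a half-updated pair.
    public synchronized int range() {
        return max - min;
    }
}
```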
A bigger concern is that in Java any CAS (or reading/writing a volatile, or entering/leaving a synchronized block) is accompanied by bringing other threads' views of memory up to date. When you write a volatile, the threads that subsequently read it see all the memory changes made by the writing thread. This involves dumping register values to memory, flushing caches, updating caches, and putting data back into registers. But these costs go hand in hand with CAS, so if you've accounted for one, you've accounted for the other.
The basic idea, I think, from the programmer's point of view, is to use volatile or atomic operations for single reads and writes, and synchronization for multiples, if there's no other compelling reason to choose one over the other.
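As a sketch of that rule of thumb (the class and field names are made up): a single flag that is read and written as a unit only needs volatile, while a compound update of several related fields is the synchronized case shown above.

```java
public class ShutdownSignal {
    // Single value, read and written independently: volatile is enough
    // to make the latest write visible to every reader thread.
    private volatile boolean shutdownRequested;

    public void requestShutdown() {
        shutdownRequested = true;
    }

    public boolean isShutdownRequested() {
        return shutdownRequested;
    }
}
```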
