... it means that during this atomic operation (executed by threadX), no other thread threadY is allowed to access (for either read or write) that same variable i2, so some form of blocking does exist.
No, you didn't get it right.
Atomic operations mean that threads cannot see values in a partial state. Whether an assignment is atomic depends on the underlying architecture running your JVM and on the data size of i1 and i2. I believe Java says that int fields are assigned atomically, but long (and double) may not be, because the write may take multiple operations by the CPU.
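To make the distinction concrete, here is a minimal sketch (the class and field names are mine, not from the question); per the Java Language Specification, section 17.7, marking a long or double field volatile restores atomic assignment:

```java
// Sketch with made-up names. On some 32-bit JVMs a plain long may be written
// in two halves, so a concurrent reader could observe a torn value; int
// writes and volatile long writes are guaranteed atomic (JLS 17.7).
class Fields {
    int i1 = 1;              // 32-bit write: always atomic
    long plainLong;          // write may be non-atomic on some architectures
    volatile long safeLong;  // volatile guarantees atomic long/double access
}
```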
Atomic actions cannot be interleaved, so they can be used without fear of thread interference.
This is right. If i1 is 1 and i2 is 2 and threadX executes the assignment, then any other thread will see the value of i1 as either 1 (the old value) or 2 (the new value). ThreadY won't see some half-way state between 1 and 2, because the assignment is atomic even if multiple threads are updating the value of i1.
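A small sketch of that "old value or new value" guarantee, with hypothetical names: one thread reassigns i1 from 1 to 2 while another samples it, and no sample can be anything other than 1 or 2:

```java
// Sketch with hypothetical names: a writer atomically reassigns i1 from 1 to
// 2 while this thread samples it. Every read returns either 1 or 2 (old or
// new value), never a half-written number, because int assignment is atomic.
class AtomicAssignmentDemo {
    static volatile int i1 = 1;  // volatile for visibility only; the int
                                 // write itself is atomic even without it

    static boolean everySampleIsOldOrNew() {
        Thread writer = new Thread(() -> i1 = 2);
        writer.start();
        boolean allValid = true;
        for (int n = 0; n < 100_000; n++) {
            int seen = i1;            // atomic read
            if (seen != 1 && seen != 2) {
                allValid = false;     // would indicate a torn read
            }
        }
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return allValid;
    }
}
```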
But what is really confusing the matter is that there are two concepts going on here: atomicity and memory synchronization. With threads, each CPU has its own memory cache, so memory operations are first made to the local cache and then those changes are written back to main memory. A thread might see an old copy of i1 in its local cache even though another thread has already updated main memory. Even worse, two threads can each update the value of i1 in their local memory, and depending on the order of operations (which is effectively random), one thread's write to main memory will overwrite the other's. It is extremely hard to know which one will win that race.
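The lost-update problem described above can be sketched as follows (names are mine): a plain count++ is really three steps (read, add, write back) and can silently drop increments under contention, while AtomicInteger makes the increment a single indivisible operation:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: two threads each perform 100,000 increments. The plain `++` is a
// non-atomic read-modify-write, so some of its increments can be lost in
// exactly the overwrite scenario described above; AtomicInteger's increment
// is indivisible, so its total is always 200,000.
class LostUpdateDemo {
    static int plainCount = 0;                           // updates can be lost
    static final AtomicInteger safeCount = new AtomicInteger();

    static int raceSafely() {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plainCount++;                // may lose updates under contention
                safeCount.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return safeCount.get();              // deterministic: 200,000
    }
}
```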
As a note - atomicity does not mean "all other threads will be blocked until the value is ready."
Right. This is trying to let you know that there is no locking involved here at all. There are no guarantees about the value that ThreadY will see. ThreadY could also be updating i1 at the exact same time to the value 3, and then other threads could see it as 1, 2, or 3, depending on the order of operations and on whether those threads cross memory barriers when the cache flushing and updating is enforced.
The way we control fields and objects that are shared between threads is with the synchronized keyword, which gives a thread unique access to a resource. There are also Locks and other mechanisms that provide mutual exclusion. We can also force memory barriers by adding the volatile keyword to a field, which means that any read or write of that field is made to main memory. Both synchronized and volatile ensure proper publishing of data and ordering of operations.
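As a rough illustration of that publishing guarantee (hypothetical names): writing a volatile flag after the data means that any reader who sees the flag set is also guaranteed to see the data written before it:

```java
// Sketch with hypothetical names: `ready` is volatile, so when the reader
// sees ready == true it is guaranteed to also see the earlier write to
// `payload` (the volatile write/read pair establishes a happens-before edge
// and forces the accesses through main memory).
class Publication {
    static int payload;            // plain field, published via the flag
    static volatile boolean ready; // memory barrier on both read and write

    static int publishAndRead() {
        Thread writer = new Thread(() -> {
            payload = 42;          // step 1: write the data
            ready = true;          // step 2: volatile write publishes it
        });
        writer.start();
        while (!ready) {           // volatile read; not optimized away
            Thread.onSpinWait();
        }
        int seen = payload;        // guaranteed 42 once ready is true
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen;
    }
}
```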