
Sample code:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Sample {
    private int v;
    public void setV() {
        Lock a = new ReentrantLock(); // Lock is an interface; a concrete class is needed
        a.lock();
        try {
            v = 1;
        } finally {
            a.unlock();
        }
    }
    public int getV() {
        return v;
    }
}

If I have one thread constantly invoking getV and another thread calling setV just once, is the reading thread guaranteed to see the new value right after the write? Or do I need to make v volatile or use an AtomicReference?

If the answer is no, then should I change it into:

class Sample {
    private int v;
    private final Lock a = new ReentrantLock();
    public void setV() {
        a.lock();
        try {
            v = 1;
        } finally {
            a.unlock();
        }
    }
    public int getV() {
        int r; // declared outside the try block so it is in scope for the return
        a.lock();
        try {
            r = v;
        } finally {
            a.unlock();
        }
        return r;
    }
}
Temple Wing

5 Answers


From the documentation:

All Lock implementations must enforce the same memory synchronization semantics as provided by the built-in monitor lock:

  • A successful lock operation acts like a successful monitorEnter action
  • A successful unlock operation acts like a successful monitorExit action

If you use Lock in both threads (i.e. the reading and the writing ones), the reading thread will see the new value, because monitorEnter flushes the cache. Otherwise, you need to declare the variable volatile to force a read from memory in the reading thread.
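To make the "use Lock in both threads" advice concrete, here is a minimal sketch (the class name `LockedSample` and the parameterized setter are mine): one shared `ReentrantLock` field, acquired by both the writer and the reader, so the reader's lock acquisition establishes the visibility guarantee quoted above.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class LockedSample {
    private int v;
    private final Lock lock = new ReentrantLock(); // one shared lock, not one per call

    public void setV(int value) {
        lock.lock();       // acts like monitorEnter
        try {
            v = value;
        } finally {
            lock.unlock(); // acts like monitorExit: publishes the write
        }
    }

    public int getV() {
        lock.lock();       // the reader must take the SAME lock
        try {
            return v;
        } finally {
            lock.unlock();
        }
    }
}
```

The key difference from the question's first snippet is that the lock is a field shared by both methods, not a fresh object created inside `setV`, which no other thread could ever contend on.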

Sergey Kalinichenko
  • Can I ask a little more? Can you explain a little more about "flushes the cache". Does that mean no matter which thread enters a monitor, all processors will flush their cache? Or just the cache data related to the entered monitor will be flushed? – Temple Wing Sep 14 '12 at 19:03
  • @TempleWing My understanding is that the JVM flushes the cache before trying to enter a monitor, so yes, all contending threads will flush their caches to memory, and then enter the lock one by one. Of course, if a CPU does not run a thread that tries to enter a lock, that CPU's cache would not be flushed. – Sergey Kalinichenko Sep 14 '12 at 19:10
  • I believe that the JVM does not flush the cache. Instead, it uses memory barriers to provide visibility. https://mechanical-sympathy.blogspot.hk/2013/02/cpu-cache-flushing-fallacy.html – cozos Dec 27 '18 at 11:48
  • 1
    @cozos neither is a correct description. A JVM simply has to take whatever measure is required, to ensure the memory visibility. So if optimized execution acts as-if there’s a thread local cache, the lock/unlock operations must act as-if that cache is flushed. Most notably, optimizers may eliminate memory access completely, rather than operate on caches. E.g., a conditional branch may get eliminated when the condition is predictable, which behaves as-if the values of that condition are cached, whereas in reality, the evaluation is not happening at all. The (un)lock restricts JVM optimizations – Holger Mar 12 '19 at 09:28

As per Brian Goetz's rule (from Java Concurrency in Practice):

If you are writing a variable that might next be read by another thread, or reading a variable that might have last been written by another thread, you must use synchronization, and further, both the reader and the writer must synchronize using the same monitor lock.

So it would be appropriate to synchronize both the setter and the getter.

Or use AtomicInteger.incrementAndGet() instead if you want to avoid the lock/unlock block (i.e. a synchronized block).
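A small sketch of the AtomicInteger alternative (the class name `AtomicSample` and its method signatures are mine): all access goes through the atomic's volatile-backed operations, so no explicit lock is needed.

```java
import java.util.concurrent.atomic.AtomicInteger;

class AtomicSample {
    private final AtomicInteger v = new AtomicInteger(0);

    public void setV(int value) {
        v.set(value);               // volatile write: visible to other threads
    }

    public int increment() {
        return v.incrementAndGet(); // atomic read-modify-write, no explicit lock
    }

    public int getV() {
        return v.get();             // volatile read
    }
}
```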

Kumar Vivek Mitra

If I have a thread constantly invoke getV and I just do setV once in another thread, Is that reading thread guaranteed to see the new value right after writing?

NO, the reading thread may just read its own copy of v's value (cached automatically by the CPU core the reading thread runs on), and thus not see the latest value.

Or do I need to make v volatile or use an AtomicReference?

YES, both work.

Making v volatile simply stops the CPU core from caching v's value, i.e. every read/write of v must access main memory, which is slower (roughly 100× slower than a read from L1 cache; see interaction_latency for details).

Using v = new AtomicInteger() works because AtomicInteger uses a private volatile int value; internally to provide visibility.

And it also works if you use a lock (a Lock object, or a synchronized block or method; they all work) on both the reading and writing threads (as your second code segment does), because (according to the Second Edition of The Java® Virtual Machine Specification, section 8.9):

...Locking any lock conceptually flushes all variables from a thread's working memory, and unlocking any lock forces the writing out to main memory of all variables that the thread has assigned...

...If a thread uses a particular shared variable only after locking a particular lock and before the corresponding unlocking of that same lock, then the thread will read the shared value of that variable from main memory after the lock operation, if necessary, and will copy back to main memory the value most recently assigned to that variable before the unlock operation. This, in conjunction with the mutual exclusion rules for locks, suffices to guarantee that values are correctly transmitted from one thread to another through shared variables...

P.S. The AtomicXXX classes also provide CAS (compare-and-swap) operations, which are useful for multithreaded access.
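For illustration, a minimal sketch of the CAS operation mentioned above, using AtomicInteger's compareAndSet(expect, update), which atomically sets the value only if it still equals the expected one:

```java
import java.util.concurrent.atomic.AtomicInteger;

class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(0);

        // Succeeds: the value is still 0, so it becomes 1.
        boolean first = v.compareAndSet(0, 1);

        // Fails: the value is now 1, not the expected 0, so nothing changes.
        boolean second = v.compareAndSet(0, 2);

        System.out.println(first + " " + second + " " + v.get()); // true false 1
    }
}
```

This is the building block lock-free algorithms use: retry in a loop until the CAS succeeds, instead of blocking on a lock.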

P.P.S. The JVM specification on this topic has not changed since Java 6, so these sections are not included in the JVM specifications for Java 7, 8, and 9.

P.P.P.S. According to this article, CPU caches are always coherent from every core's view. The situation in your question is caused by the 'memory ordering buffers', in which store and load instructions (which write and read data from memory, respectively) can be re-ordered for performance. In detail, the buffer allows a load instruction to get ahead of an older store instruction, which is exactly what causes the problem (the getV() read is moved ahead, so it reads the value before the other thread changes it). However, in my opinion, this is harder to understand, so "a cache per core" (as the JVM spec put it) can be a better conceptual model.

  • 1
    check this question for further discussion: https://stackoverflow.com/questions/1850270/memory-effects-of-synchronization-in-java – Song JingHe Dec 21 '17 at 14:35

You should make v volatile or use an AtomicInteger. That will ensure the reading thread eventually sees the change, close enough to "right after" for most purposes. And technically you don't need the Lock for a simple atomic update like this. Take a close look at AtomicInteger's API: set(), compareAndSet(), etc. all update the value atomically so that it is visible to reading threads.

ɲeuroburɳ

Explicit locks, synchronized, atomic references, and volatile all provide memory visibility. Lock and synchronized do so for the code blocks they surround; atomic references and volatile apply to the particular variable so declared. However, for visibility to work correctly, both the reading and writing methods must be protected by the same lock object.

It will not work in your first snippet because your getter method is not protected by the lock that protects the setter method. If you make the change in your second snippet, it will work as required. Also, just declaring the variable volatile, or using an AtomicInteger or AtomicReference&lt;Integer&gt;, will work too.
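The volatile alternative mentioned above can be sketched like this (the class name `VolatileSample` is mine): the volatile keyword alone establishes the happens-before edge between the write and any later read, with no lock at all.

```java
class VolatileSample {
    // volatile guarantees that a write by one thread is visible
    // to subsequent reads of v by any other thread
    private volatile int v;

    public void setV() {
        v = 1;    // no lock needed for a plain write
    }

    public int getV() {
        return v; // reads the most recently written value
    }
}
```

Note that volatile only gives visibility for single reads and writes; compound actions like v++ still need a lock or an AtomicInteger.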

Abhinav Sarkar