I'm looking at some legacy code that's of the form

public class Client {
    private final Cache cache;
    // ...

    Client(final Cache cache) {
        this.cache = cache;
    }

    public Value get(Key key) {
        synchronized (cache) {
            return this.cache.get(key);
        }
    }

    public void put(Key k, Value v) {
        synchronized (this.cache) {
            cache.put(k, v);
        }
    }
}

I've never seen an instance variable that could be modified used as a lock object; typically, locks are final Object instances or explicit Lock objects from the Java API.

  1. How does the synchronized keyword have any effect in this case? Isn't a new lock created for each instance of the Client object?
  2. Would the use of the synchronized keyword force the cache to be updated before a get/put operation is applied?
  3. Why would synchronization be necessary before a get? Is it so that the get sees the latest values, assuming another thread applied a put in the interim?
seeker
  • 1. Yes. 2. No. 3. To prevent simultaneous modifications. – user207421 Mar 07 '19 at 00:13
  • @user207421 The answer to 1 is no. And 3 is also necessary for visibility. – shmosel Mar 07 '19 at 00:14
  • @shmosel, can you expand on the visibility aspect? I get the concurrent modification aspect but not the visibility part. – seeker Mar 07 '19 at 00:20
  • @seeker Declaring a field as final doesn't make it unmodifiable. It just means the *reference* can't be changed. As for your other questions, I think you're confused as to what `synchronized` does. Synchronization happens at the object level; it doesn't matter what instance of `Client` you're in, so long as you're synchronizing on the same `Cache`. The (primary) purpose of synchronization is to restrict concurrent access to a shared resource. By synchronizing around access to `cache`, no thread can read or write to it while another thread is reading or writing to it. – shmosel Mar 07 '19 at 00:21
  • @user207421 If one thread writes to a field and another thread subsequently reads it, the second thread may not see the latest value (or it could see it in an incomplete state), unless the read and write are synchronized. – shmosel Mar 07 '19 at 00:25
  • @shmosel, thanks for the reply. While I get that the purpose of synchronized is to prevent multiple threads from accessing a critical section, what I don't understand is how the lock would be shared amongst different instances of Client. Based on what you are saying, there could potentially be 10 Client instances but they all share the same lock? If yes, which instance's `cache` do they lock on? And are the updates made by different threads to `cache` visible to other threads? – seeker Mar 07 '19 at 00:32
  • You're not synchronizing on the `cache` variable; you're synchronizing on the *instance*. If 10 `Client`s all reference the same `Cache` instance, they will all share the same lock and their updates will be visible to each other. – shmosel Mar 07 '19 at 00:35
  • Not sure I follow though, maybe OP did. In this example isn't `cache` an instance variable, i.e. tied to an instance of Client? And based on the way the constructor is wired up, wouldn't each instance create its own version of cache? – userNotFound Mar 07 '19 at 01:05
  • @userNotFound Depends what you mean by "version". – shmosel Mar 07 '19 at 01:13
  • By version I mean its own copy. If it did, how would it synchronize on a single value of cache, since wouldn't each object have its own copy? – userNotFound Mar 07 '19 at 01:45
  • Right, this code **does not** synchronize on a single cache. It does synchronize on each individual copy of the cache. That could be good or bad, depending on what the programmer intended. @userNotFound – markspace Mar 07 '19 at 02:06
  • Also, since multi-threading is involved here, it's important to note that `final` does not just make the reference unmodifiable. `final` also has memory visibility semantics and makes the initial value of the entire cache object visible, much like assignment to a `volatile`. See [Safe Publication](https://stackoverflow.com/questions/801993/java-multi-threading-safe-publication) – markspace Mar 07 '19 at 02:09
  • @userNotFound "wouldn't each instance create its own version of cache?" The Client does not create *any* cache. It gets handed an existing Cache through its constructor argument. Thus, whoever creates the Clients determines the degree of Cache sharing going on. – Mar 07 '19 at 02:26
  • @markspace Why are you convinced there's more than one cache instance? – shmosel Mar 07 '19 at 07:41
  • @shmosel Because it's an instance field. – markspace Mar 07 '19 at 14:31
  • @markspace So? They could all point to a shared instance. – shmosel Mar 07 '19 at 17:03
  • @shmosel, Still don't get what you mean by shared instance. Wouldn't the Client objects all have their *own* copies of cache? Since the only way to initialize a client is by passing in a reference to a cache, unless you mean that they *could* pass in the same reference? – userNotFound Mar 07 '19 at 22:48
  • @userNotFound They could receive the same instance, and likely do. Otherwise I would expect each client to instantiate its own cache. – shmosel Mar 07 '19 at 23:10
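
To make the comment thread concrete, here is a minimal sketch of the sharing point: whoever constructs the Clients decides whether they share a lock, because the lock is the Cache object they are handed. This assumes the Client class from the question is available; the Cache, Key and Value types below are simple stand-ins, since the question does not show them.

import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins for the types the question does not show.
class Key {}
class Value {}

class Cache {
    private final Map<Key, Value> map = new HashMap<>();
    Value get(Key k) { return map.get(k); }
    void put(Key k, Value v) { map.put(k, v); }
}

class SharingDemo {
    public static void main(String[] args) {
        Cache shared = new Cache();

        // a and b were handed the SAME Cache object, so synchronized(cache)
        // inside Client locks the same monitor: a put through one blocks
        // a get through the other.
        Client a = new Client(shared);
        Client b = new Client(shared);

        // c was handed a DIFFERENT Cache object, so it locks a different
        // monitor and never contends with a or b.
        Client c = new Client(new Cache());
    }
}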

1 Answer

synchronized provides the same guarantees irrespective of whether it is used on a static variable or an instance variable, i.e., memory visibility and atomicity (mutual exclusion). In your case, the monitor is the Cache object that cache refers to, so it provides thread safety per Cache instance.
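
As a rough, self-contained illustration of those two guarantees (this is only a sketch, using a plain HashMap as a stand-in for your Cache type):

import java.util.HashMap;
import java.util.Map;

public class SyncGuaranteesDemo {
    // A plain HashMap guarded by its own monitor, standing in for the cache.
    private static final Map<String, Integer> cache = new HashMap<>();

    static void increment(String key) {
        synchronized (cache) {                // atomicity: one thread at a time in here
            Integer old = cache.get(key);     // visibility: sees the last committed value
            cache.put(key, old == null ? 1 : old + 1);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                increment("hits");
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        synchronized (cache) {                // read under the same lock for visibility
            System.out.println(cache.get("hits"));  // always prints 20000
        }
    }
}

Without the synchronized blocks, the two loops could interleave inside increment, the final count would be unpredictable, and the main thread might not even see the workers' writes at all.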

So, coming to your questions

  1. Not exactly. The lock is the Cache object itself, so each Client instance locks on whichever Cache it was constructed with: Clients that share the same Cache instance share the same lock, while Clients holding different Cache instances lock independently. This is useful when an instance of Client (or the Cache behind it) is shared between multiple threads.

  2. Conceptually, when a thread leaves the synchronized block its writes are flushed to main memory, and when a thread enters a block synchronized on the same monitor its locally cached values are invalidated and re-read from main memory; the Java Memory Model describes this as a happens-before relationship between the unlock and the next lock of that monitor. So yes, synchronized ensures that the contents of cache are up to date for whichever thread holds the lock. Please see Synchronization for more details.

  3. The reason is the same as in 2, i.e., to provide memory visibility, and additionally to stop a get from running while another thread is part-way through a put.
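
Finally, regarding the convention you mentioned (a dedicated final lock object or an explicit Lock from java.util.concurrent): a roughly equivalent sketch of the same class could look like the one below, with the caveat that a per-client ReentrantLock only matches the original behaviour if each Cache is used by a single client. In the legacy code the lock is the Cache object itself, so clients that share a Cache automatically share the lock as well. (Cache, Key and Value are the placeholder types from your question.)

import java.util.concurrent.locks.ReentrantLock;

public class LockingClient {
    private final Cache cache;                               // as in the question
    private final ReentrantLock lock = new ReentrantLock();  // one lock per LockingClient

    LockingClient(final Cache cache) {
        this.cache = cache;
    }

    public Value get(Key key) {
        lock.lock();
        try {
            return cache.get(key);
        } finally {
            lock.unlock();
        }
    }

    public void put(Key k, Value v) {
        lock.lock();
        try {
            cache.put(k, v);
        } finally {
            lock.unlock();
        }
    }
}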

samzz