
I'm trying to create a fine-grained locking mechanism for the following scenario:

I have a data store with many serialised Cache objects inside it. Each Cache belongs to a certain person, group or company and each Cache can be modified in one of four ways: they can be created, deleted, removed from or inserted into. While a Cache is being modified I want to block access to it. Each Cache is identified using a CacheLocation object which stores the directory and file name as well as the full path for convenience.

Currently I am using an ArrayList inside a class called RequestQueue, which holds the CacheLocation objects currently being processed. When another thread comes in, it checks the queue to see if the CacheLocation it is requesting is already in use. If so, a while loop keeps checking the CacheLocation periodically until the request that put it there removes it.

I was thinking it might be an idea to have a HashMap of CacheLocation keys against BlockingQueue values. This would result in a large set of BlockingQueue objects but I could manage the queue fairly well.

Is there a better way to do this sort of fine-grained locking?

Alexei Blue
  • Don't use a while loop to wait for a resource. Two or more threads can see the condition as true at the same time and break the serialised access. Instead, use synchronized or a lock – Evans May 13 '13 at 11:45
  • I tried this before but probably got it wrong. I was calling get on the RequestMap to see if the request's CacheLocation was in the Map (I think I needed to use ConcurrentHashMap), and if get returned a location I was synchronising and calling wait on the object. The problem I had was that the call to notify from the other thread caused an IllegalMonitorStateException. Not sure why, because the other thread should have created the object, but it requires a bit more looking into :) – Alexei Blue May 13 '13 at 12:01

2 Answers


If I understand your description correctly, one way that would keep your design fairly simple would be to:

  • use a ConcurrentHashMap<CacheLocation, Cache> to store the Caches (I assume CacheLocations are immutable, or at least never mutated)
  • make sure you guard all accesses to your Cache with a lock on the related CacheLocation object
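A minimal sketch of this design, using hypothetical stand-ins for the poster's CacheLocation and Cache classes. One caveat not spelled out above: two CacheLocation instances that are equal are still different monitor objects, so rather than synchronizing on an incoming CacheLocation directly, this sketch synchronizes on the canonical Cache value that the ConcurrentHashMap hands back for that location:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical minimal versions of the poster's classes.
class CacheLocation {
    final String path;
    CacheLocation(String path) { this.path = path; }
    @Override public int hashCode() { return path.hashCode(); }
    @Override public boolean equals(Object o) {
        return o instanceof CacheLocation && ((CacheLocation) o).path.equals(path);
    }
}

class Cache {
    int entries; // stands in for the real cache contents
}

class CacheStore {
    private final ConcurrentHashMap<CacheLocation, Cache> caches =
            new ConcurrentHashMap<>();

    // Returns the single canonical Cache for a location, creating it on
    // first use. putIfAbsent guarantees all threads see the same instance.
    Cache cacheFor(CacheLocation loc) {
        Cache fresh = new Cache();
        Cache existing = caches.putIfAbsent(loc, fresh);
        return existing != null ? existing : fresh;
    }

    // All mutation happens while holding the canonical Cache's monitor,
    // so two threads can never modify the same Cache concurrently, while
    // threads working on other locations are never blocked.
    void insert(CacheLocation loc) {
        Cache cache = cacheFor(loc);
        synchronized (cache) {
            cache.entries++;
        }
    }
}
```

Locking on the map's value rather than the key gives the fine-grained behaviour asked for without requiring CacheLocation instances to be interned.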
assylias
  • Yeah, the CacheLocation object is immutable. I currently use a ConcurrentHashMap for storing currently loaded caches in memory, rather than reading them from disc, until all users of a Cache have sent a finished request. So I could use the same technique for storing requests I guess :) I'll give it a try and let you know how I get on :) – Alexei Blue May 13 '13 at 11:47
  • Thanks for the help guys, managed to get it working nicely by creating a ConcurrentHashMap with CacheLocation as the key and a ReentrantLock as the value. When a change request comes in it checks the map, if an entry doesn't exist for that CacheLocation it creates a new Lock, acquires it by calling lock() and adds it to the map. If an entry exists it grabs the lock and calls lock() causing the thread to wait. Once the thread that has the lock releases it the next thread acquires it and so on. This keeps my design nice and neat and hopefully fast =) Thanks again guys – Alexei Blue May 13 '13 at 14:33
  • @AlexeiBlue That is one way - I was more thinking of using `synchronized(cacheLocation)` blocks to keep things simple. But the effect is the same. – assylias May 13 '13 at 14:35
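The lock-per-location scheme described in the comment above can be sketched as follows (LocationLocks and withLock are hypothetical names, with String standing in for CacheLocation). Note one refinement over the comment's check-then-put sequence: two threads could both find no entry and each add their own lock, so putIfAbsent is used to make sure only one lock per location ever wins:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of a ReentrantLock-per-location scheme.
class LocationLocks {
    private final ConcurrentHashMap<String, ReentrantLock> locks =
            new ConcurrentHashMap<>();

    // Runs `work` while holding the lock for `location`. Threads asking
    // for other locations proceed in parallel; threads asking for the
    // same location queue up on its ReentrantLock.
    void withLock(String location, Runnable work) {
        ReentrantLock fresh = new ReentrantLock();
        // putIfAbsent ensures exactly one lock exists per location,
        // even if several threads race to create it.
        ReentrantLock lock = locks.putIfAbsent(location, fresh);
        if (lock == null) {
            lock = fresh;
        }
        lock.lock();
        try {
            work.run();
        } finally {
            lock.unlock(); // always release, even if `work` throws
        }
    }
}
```

One design point to be aware of: entries are never removed from the map here, so it grows with the number of distinct locations; removing a lock safely while other threads may still be queued on it takes extra care.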

There is also another non-blocking (but potentially slower) approach:

// compute() runs the remapping function atomically for that key, so it
// gives exclusive access to that entry without an explicit lock.
map.compute(someId, (key, value) -> {
  // read or modify the cache entry here
  return value; // returning null would remove the entry from the map
});

See my related question and its answer.

Karussell