I have used LinkedHashMap with accessOrder true, capped at a maximum of 500 entries at any time, as my LRU cache for data. But due to scalability issues I want to move to a thread-safe alternative. ConcurrentHashMap seems good in that regard, but lacks the accessOrder and removeEldestEntry(Map.Entry e) features found in LinkedHashMap. Can anyone point me to a link, or otherwise help me ease the implementation?


- This is a genuinely Hard problem. We've been trying to address it in google-collections and still aren't quite there. But we will be... – Kevin Bourrillion Dec 01 '09 at 22:20
- Similar question: http://stackoverflow.com/questions/1391918/does-java-have-a-linkedconcurrenthashmap-data-structure – Vadzim Jan 22 '15 at 12:28
6 Answers
I did something similar recently with ConcurrentHashMap<String,CacheEntry>, where CacheEntry wraps the actual item and adds cache eviction statistics: expiration time, insertion time (for FIFO/LIFO eviction), last-used time (for LRU/MRU eviction), number of hits (for LFU/MFU eviction), etc. The actual eviction is synchronized: it creates an ArrayList<CacheEntry> and does a Collections.sort() on it using the appropriate Comparator for the eviction strategy. Since this is expensive, each eviction then lops off the bottom 5% of the CacheEntries. I'm sure performance tuning would help, though.
In your case, since you're doing FIFO, you could keep a separate ConcurrentLinkedQueue. When you add an object to the ConcurrentHashMap, do a ConcurrentLinkedQueue.add() of that object. When you want to evict an entry, do a ConcurrentLinkedQueue.poll() to remove the oldest object, then remove it from the ConcurrentHashMap as well.
Update: Other possibilities in this area include a Java Collections synchronization wrapper and the Java 1.6 ConcurrentSkipListMap.
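A minimal sketch of the FIFO approach described above, pairing a ConcurrentHashMap with a ConcurrentLinkedQueue that records insertion order. All names here (FifoCache, MAX_ENTRIES) are illustrative, not from the answer, and the size cap is only approximate under concurrency since the size check and the poll are separate operations:

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class FifoCache<K, V> {
    private static final int MAX_ENTRIES = 500;
    private final Map<K, V> map = new ConcurrentHashMap<>();
    private final Queue<K> insertionOrder = new ConcurrentLinkedQueue<>();

    public void put(K key, V value) {
        if (map.put(key, value) == null) {
            insertionOrder.add(key);   // record arrival order for FIFO eviction
        }
        // Evict the oldest entries once we exceed the cap. Note: size() and
        // poll() are not atomic together, so the cap is best-effort only.
        while (map.size() > MAX_ENTRIES) {
            K eldest = insertionOrder.poll();
            if (eldest == null) {
                break;
            }
            map.remove(eldest);
        }
    }

    public V get(K key) {
        return map.get(key);
    }
}
```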

- Your suggestion seems much like what I need. It means I have some overhead of keeping other elements, which will consume the same space as the key set of the map. Anyhow, this is fine for me. – DKSRathore Nov 29 '09 at 16:04
- I know it's been a while, but once you've got the ArrayList of evicted entries, how do you remove them from the ConcurrentHashMap? Do you iterate over the Map.Entry set, checking whether the values are the same, and then remove each one from the map by its key? Or am I missing something and you do it another way? – wawek Nov 29 '16 at 09:10
Have you tried one of the many caching solutions, like Ehcache? Alternatively, you could try a LinkedHashMap with a ReadWriteLock, which would give you concurrent read access.
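An illustrative sketch of that lock-based idea, with one important caveat (raised in a comment below as well): when accessOrder is true, even get() reorders LinkedHashMap's internal linked list, so lookups must take the write lock; the read lock is only safe for genuinely non-mutating calls such as size(). That limits how much read concurrency you actually gain. All names here are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockedLruCache<K, V> {
    private final int maxEntries;
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<K, V> map;

    public LockedLruCache(int maxEntries) {
        this.maxEntries = maxEntries;
        // accessOrder = true gives LRU ordering; removeEldestEntry caps the size.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > LockedLruCache.this.maxEntries;
            }
        };
    }

    public V get(K key) {
        lock.writeLock().lock();   // write lock: an access-order get() mutates the map
        try {
            return map.get(key);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public void put(K key, V value) {
        lock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public int size() {
        lock.readLock().lock();    // read lock is fine for non-mutating calls
        try {
            return map.size();
        } finally {
            lock.readLock().unlock();
        }
    }
}
```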

- We've tried ReentrantReadWriteLock with LinkedHashMap(maxSize, true), but you'll get ConcurrentModificationExceptions then (gets will modify the map structure), and the performance is far worse than with ConcurrentHashMap. – dag Sep 05 '19 at 10:04
This might seem old now, but at least for my own history tracking I'm going to add my solution here: I combined a ConcurrentHashMap that maps K to a subclass of WeakReference, a ConcurrentLinkedQueue, and an interface that defines deserialization of the value objects based on K, to run LRU caching correctly. The queue holds strong refs, and the GC will evict the values from memory when appropriate. Tracking the queue size involved an AtomicInteger, as you can't really inspect the queue to determine when to evict. The cache handles eviction from, and adding to, the queue, as well as map management. If the GC evicted a value from memory, the implementation of the deserialization interface handles retrieving the value back.

I also had another implementation that involved spooling to disk and re-reading what was spooled, but that was a lot slower than the solution posted here, as I had to synchronize spooling and reading.
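A rough sketch of that scheme, under my own naming assumptions (WeakLruCache, ValueLoader standing in for the "deserialization interface"): the map holds weak references so the GC can reclaim values, while the queue pins the most recently used values with strong references, counted by an AtomicInteger. This is an illustration of the idea, not the author's actual code:

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class WeakLruCache<K, V> {
    /** Rebuilds a value from its key after the GC has reclaimed it. */
    public interface ValueLoader<K, V> {
        V load(K key);
    }

    private final int maxStrong;
    private final ValueLoader<K, V> loader;
    private final Map<K, WeakReference<V>> map = new ConcurrentHashMap<>();
    private final Queue<V> strongRefs = new ConcurrentLinkedQueue<>();
    private final AtomicInteger strongCount = new AtomicInteger();

    public WeakLruCache(int maxStrong, ValueLoader<K, V> loader) {
        this.maxStrong = maxStrong;
        this.loader = loader;
    }

    public V get(K key) {
        WeakReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {                 // missing, or reclaimed by the GC
            value = loader.load(key);        // rebuild the value from the key
            map.put(key, new WeakReference<>(value));
        }
        pin(value);
        return value;
    }

    // Keep a strong reference to the value; drop the oldest pin once over budget,
    // after which the GC is free to reclaim that value from the weak map.
    private void pin(V value) {
        strongRefs.add(value);
        if (strongCount.incrementAndGet() > maxStrong) {
            strongRefs.poll();
            strongCount.decrementAndGet();
        }
    }
}
```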

You mention wanting to solve scalability problems with a "thread-safe" alternative. "Thread safety" here means that the structure is tolerant of attempts at concurrent access, in that it won't suffer corruption by concurrent use without external synchronization. However, such tolerance does not necessarily help to improve "scalability". In the simplest (though usually misguided) approach, you'll try to synchronize your structure internally and still leave non-atomic check-then-act operations unsafe.
LRU caches require at least some awareness of the total structure. They need something like a count of the members or the size of the members to decide when to evict, and then they need to be able to coordinate the eviction with concurrent attempts to read, add, or remove elements. Trying to reduce the synchronization necessary for concurrent access to the "main" structure fights against your eviction mechanism, and forces your eviction policy to be less precise in its guarantees.
The currently accepted answer mentions "when you want to evict an entry". Therein lies the rub. How do you know when you want to evict an entry? Which other operations do you need to pause in order to make this decision?
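To make the check-then-act hazard concrete, here is a deliberately naive sketch (all names hypothetical). Each individual ConcurrentHashMap call is thread-safe, but the size check and the subsequent put form a window in which other threads can also pass the check, so the cap can be overshot under concurrent writes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NaiveBoundedMap<K, V> {
    private final int maxEntries;
    private final Map<K, V> map = new ConcurrentHashMap<>();

    public NaiveBoundedMap(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    public void put(K key, V value) {
        if (map.size() >= maxEntries) {  // check...
            // <-- another thread may put() right here, after our check
            evictOne();
        }
        map.put(key, value);             // ...then act: the pair is not atomic
    }

    private void evictOne() {
        // Evict an arbitrary entry; a real cache would consult its policy here,
        // which is exactly the coordination problem described above.
        for (K k : map.keySet()) {
            map.remove(k);
            break;
        }
    }

    public int size() {
        return map.size();
    }
}
```

Single-threaded, the cap holds; it is only under concurrent puts that the unsynchronized check-then-act lets the map grow past the bound.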

The moment you use another data structure alongside the ConcurrentHashMap, the atomicity of operations such as adding a new item to the ConcurrentHashMap and adding it to the other data structure can't be guaranteed without additional synchronization, such as a ReadWriteLock, which will degrade performance.

Wrap the map in Collections.synchronizedMap(). If you need compound operations, then synchronize on the map that you got back from this call while you invoke them on it (see the javadocs for an example). The same applies when you iterate over the keys, etc.
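A short sketch of that pattern, following the example in the Collections.synchronizedMap javadoc (the class and method names here are my own): individual calls are synchronized by the wrapper, but iteration must happen while manually holding the wrapper's monitor.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SyncMapDemo {
    /** An access-order LinkedHashMap wrapped so each single call is synchronized. */
    public static Map<String, Integer> newCache() {
        return Collections.synchronizedMap(new LinkedHashMap<>(16, 0.75f, true));
    }

    /** A compound operation: hold the wrapper's monitor for the whole iteration. */
    public static int sumValues(Map<String, Integer> cache) {
        int sum = 0;
        synchronized (cache) {   // required; per-call locking doesn't cover iteration
            for (int v : cache.values()) {
                sum += v;
            }
        }
        return sum;
    }
}
```

As the comments below note, this serializes all access through one lock, which is exactly the scalability cost the question is trying to avoid.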

- Aaron, this will worsen my scalability issues, since updates to the map would become serial in nature. ConcurrentHashMap allows more than one thread to update the map at the same time. – DKSRathore Nov 29 '09 at 14:32
- ConcurrentHashMap has some specific properties that don't map easily to LinkedHashMap (namely, it allows concurrent inserts as long as the hashes don't fall in the same segment). This optimization isn't possible when you need to keep the insertion order. You must decide which one is more important to you. – Aaron Digulla Nov 29 '09 at 15:04