I'd like to expand on some of these suggestions with two possible implementations: one that is not thread safe, and one that might be.
Here is the simplest version, with a unit test that shows it works.
First, the non-concurrent version:
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSimpleCache<K, V> implements LruCache<K, V> {

    private final Map<K, V> map;

    public LruSimpleCache ( final int limit ) {
        // The third constructor argument (true) switches the map to access
        // order, which is what makes this an LRU cache rather than a FIFO.
        map = new LinkedHashMap<K, V> ( 16, 0.75f, true ) {
            @Override
            protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
                // Evict the least recently used entry once we grow past the limit.
                return super.size () > limit;
            }
        };
    }

    @Override
    public void put ( K key, V value ) {
        map.put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map.get ( key );
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        V value = map.get ( key );
        if ( value != null ) {
            map.remove ( key );
            map.put ( key, value );
        }
        return value;
    }

    @Override
    public void remove ( K key ) {
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    @Override
    public String toString () {
        return map.toString ();
    }
}
The true flag passed to the constructor makes the LinkedHashMap track access order for both gets and puts; see the JavaDocs. Without the true flag, removeEldestEntry would implement a FIFO cache instead (see the notes below on FIFO and removeEldestEntry).
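To see the difference that flag makes, here is a minimal sketch (the class name AccessOrderDemo is mine; everything else is standard LinkedHashMap behavior) contrasting insertion order with access order:

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main ( String[] args ) {
        // accessOrder = false (the default): iteration follows insertion
        // order, so removeEldestEntry would evict FIFO-style.
        Map<Integer, Integer> fifo = new LinkedHashMap<> ( 16, 0.75f, false );
        // accessOrder = true: iteration follows access order, so the eldest
        // entry is the least recently used one.
        Map<Integer, Integer> lru = new LinkedHashMap<> ( 16, 0.75f, true );

        for ( int i = 0; i < 3; i++ ) {
            fifo.put ( i, i );
            lru.put ( i, i );
        }

        // Touch key 0 in both maps.
        fifo.get ( 0 );
        lru.get ( 0 );

        System.out.println ( fifo.keySet () ); // [0, 1, 2] -- get () did not reorder
        System.out.println ( lru.keySet () );  // [1, 2, 0] -- 0 moved to the youngest end
    }
}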
Here is the test that proves it works as an LRU cache:
import org.junit.Test;

public class LruSimpleTest {

    @Test
    public void test () {
        LruCache<Integer, Integer> cache = new LruSimpleCache<> ( 4 );
        cache.put ( 0, 0 );
        cache.put ( 1, 1 );
        cache.put ( 2, 2 );
        cache.put ( 3, 3 );
        boolean ok = cache.size () == 4 || die ( "size " + cache.size () );

        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size " + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();

        // Touch 2 and 3 so they become the most recently used entries;
        // the next two puts should then evict 4 and 5.
        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size " + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();

        if ( !ok ) die ();
    }

    // Stand-ins for the die () helper the original test relies on;
    // they simply fail the test with an AssertionError.
    static boolean die () {
        throw new AssertionError ( "test failed" );
    }

    static boolean die ( String message ) {
        throw new AssertionError ( message );
    }
}
Now for the concurrent version...
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LruSimpleConcurrentCache<K, V> implements LruCache<K, V> {

    final CacheMap<K, V>[] cacheRegions;

    private static class CacheMap<K, V> extends LinkedHashMap<K, V> {
        private final ReadWriteLock readWriteLock;
        private final int limit;

        CacheMap ( final int limit, boolean fair ) {
            super ( 16, 0.75f, true );
            this.limit = limit;
            readWriteLock = new ReentrantReadWriteLock ( fair );
        }

        @Override
        protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
            return super.size () > limit;
        }

        @Override
        public V put ( K key, V value ) {
            readWriteLock.writeLock ().lock ();
            V old;
            try {
                old = super.put ( key, value );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return old;
        }

        @Override
        public V get ( Object key ) {
            // A write lock, not a read lock: in access-order mode even a
            // get () mutates the map by moving the entry to the youngest end.
            readWriteLock.writeLock ().lock ();
            V value;
            try {
                value = super.get ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        @Override
        public V remove ( Object key ) {
            readWriteLock.writeLock ().lock ();
            V value;
            try {
                value = super.remove ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        public V getSilent ( K key ) {
            // The write lock is reentrant, so the nested get/remove/put
            // calls below can safely re-acquire the lock we already hold.
            readWriteLock.writeLock ().lock ();
            V value;
            try {
                value = this.get ( key );
                if ( value != null ) {
                    this.remove ( key );
                    this.put ( key, value );
                }
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        public int size () {
            readWriteLock.readLock ().lock ();
            int size = -1;
            try {
                size = super.size ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return size;
        }

        public String toString () {
            readWriteLock.readLock ().lock ();
            String str;
            try {
                str = super.toString ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return str;
        }
    }

    public LruSimpleConcurrentCache ( final int limit, boolean fair ) {
        int cores = Runtime.getRuntime ().availableProcessors ();
        int stripeSize = cores < 2 ? 4 : cores * 2;
        cacheRegions = new CacheMap[ stripeSize ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            // Each stripe gets an equal share of the overall limit.
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    public LruSimpleConcurrentCache ( final int concurrency, final int limit, boolean fair ) {
        cacheRegions = new CacheMap[ concurrency ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    private int stripeIndex ( K key ) {
        int hashCode = key.hashCode () * 31;
        // Mask off the sign bit so a negative hashCode cannot produce a
        // negative array index.
        return ( hashCode & 0x7FFFFFFF ) % cacheRegions.length;
    }

    private CacheMap<K, V> map ( K key ) {
        return cacheRegions[ stripeIndex ( key ) ];
    }

    @Override
    public void put ( K key, V value ) {
        map ( key ).put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map ( key ).get ( key );
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        return map ( key ).getSilent ( key );
    }

    @Override
    public void remove ( K key ) {
        map ( key ).remove ( key );
    }

    @Override
    public int size () {
        int size = 0;
        for ( CacheMap<K, V> cache : cacheRegions ) {
            size += cache.size ();
        }
        return size;
    }

    @Override
    public String toString () {
        StringBuilder builder = new StringBuilder ();
        for ( CacheMap<K, V> cache : cacheRegions ) {
            builder.append ( cache.toString () ).append ( '\n' );
        }
        return builder.toString ();
    }
}
You can see why I cover the non-concurrent version first. The version above creates stripes to reduce lock contention: it hashes the key and then uses that hash to look up the actual cache region. Because each stripe enforces its own share of the limit, the overall limit is more of a suggestion or rough guess, with the margin of error depending on how evenly your keys' hash codes spread across the stripes.
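To make that concrete, here is a minimal sketch (the class name StripeDemo is mine, and it assumes the classes above are on the classpath). Pinning the stripe count at 16 makes a limit of 100 become a per-stripe limit of 100 / 16 = 6, and Integer keys that are multiples of 16 all hash to the same stripe, so the cache starts evicting long before it reaches 100 entries:

public class StripeDemo {
    public static void main ( String[] args ) {
        // Pin the stripe count at 16 so the arithmetic is deterministic:
        // each stripe gets a limit of 100 / 16 = 6 entries.
        LruCache<Integer, Integer> cache = new LruSimpleConcurrentCache<> ( 16, 100, false );

        // Integer.hashCode () is the value itself, and multiples of 16 all
        // land in the same stripe (16k * 31 % 16 == 0), so these ten puts
        // pile into one stripe, which evicts once it passes its limit of 6.
        for ( int i = 0; i < 10; i++ ) {
            cache.put ( i * 16, i );
        }

        System.out.println ( cache.size () ); // prints 6, not 10
    }
}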