Is there a technique where I can specify a number n such that, when the (n + 1)th entry is inserted, the oldest entry is removed first, ensuring that the size of the hashtable never exceeds n?
-
This is similar to http://stackoverflow.com/questions/272674/what-is-a-data-structure-kind-of-like-a-hash-table-but-infrequently-used-keys-ar. The question provides a solution. – Rob Hruska Jan 07 '09 at 21:36
-
@robhruska - do you think this counts as a duplicate? I'm on the fence. – Paul Tomblin Jan 07 '09 at 21:48
-
I'm not sure. The perspectives are a bit different, since this question is asking about a "size limit" while the other question targets "infrequently used" entries. I'm not opposed to leaving it open, if just so that there's more searchability for those looking at it from this question's angle. – Rob Hruska Jan 07 '09 at 22:15
-
http://letmegooglethatforyou.com/?q=data+structures – yfeldblum Jan 07 '09 at 22:15
6 Answers
LinkedHashMap does exactly that; see the Javadoc for the removeEldestEntry method.
Something like this should do the trick; it removes the oldest inserted entry:
Map<K, V> map = new LinkedHashMap<K, V>() {
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > N;
    }
};
You can also remove the oldest *accessed* entry (LRU order) by passing true for the accessOrder parameter in the constructor:
Map<K, V> map = new LinkedHashMap<K, V>(16, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > N;
    }
};
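Putting the two snippets above together, here is a minimal, self-contained demo (the class name LruDemo, the bound of 3, and the String keys are illustrative, not part of the answer above):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // Builds a map bounded to maxSize entries, evicting the least
    // recently accessed entry (accessOrder = true).
    static <K, V> Map<K, V> lruMap(final int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> map = lruMap(3);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a");     // touch "a" so it is no longer the eldest
        map.put("d", 4);  // exceeds the bound: "b" is evicted
        System.out.println(map.keySet()); // [c, a, d]
    }
}

Note that removeEldestEntry is called after each put, so the map briefly holds n + 1 entries before the eldest is dropped.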

You're looking for an LRU cache, perhaps? Here's a blog post on one based on LinkedHashMap.

-
I came here to mention LinkedHashMap, which I used for an LRU cache recently. – Paul Tomblin Jan 07 '09 at 21:36
If you have concurrency needs, don't try to solve this problem yourself. Guava's CacheBuilder has a .maximumSize() method that allows you to limit the size of a map, although it's my understanding that old entries may be cleaned out before you actually reach the limit.
There's an interesting page on the data structure's design, which should impress on the reader how difficult it would be to do any better than Google's implementation. :)
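A minimal sketch of what using CacheBuilder looks like (the size 100 and the String types are arbitrary choices for illustration; see the CacheBuilder Javadoc for the full API):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

Cache<String, String> cache = CacheBuilder.newBuilder()
        .maximumSize(100)  // entries may be evicted as this limit is approached
        .build();
cache.put("key", "value");
String v = cache.getIfPresent("key");  // null if absent or already evicted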

You may want to consider using Apache Commons Collections, which has several LRU implementations such as LRUMap. Otherwise, you can easily write a similar wrapper for the standard library collections; I don't think there's one you can use directly.
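Such a wrapper can be built from the standard library alone, for example along these lines (the class name BoundedMaps is illustrative; this sketch evicts in insertion order):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public final class BoundedMaps {
    // Returns a map that holds at most maxSize entries, dropping the
    // oldest insertion once the bound is exceeded.
    public static <K, V> Map<K, V> newBoundedMap(final int maxSize) {
        Map<K, V> map = new LinkedHashMap<K, V>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
        // synchronizedMap gives basic thread safety; iteration still
        // requires external synchronization on the returned map.
        return Collections.synchronizedMap(map);
    }
}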

If you're caching you could use a WeakHashMap or WeakReference and then not have to worry about the size of the cache.

-
Just to clarify, to avoid propagating a common misconception: WeakHashMap is NOT suitable for LRU caching by itself; it uses WeakReferences for KEYS, not VALUES. It's ideal for keeping metadata about objects which don't 'belong' to you; not for assuming entries will be thrown out when memory runs out – Cowan Jan 08 '09 at 01:10