While preparing for interviews, I came across something that is making me question my understanding of constant-time (O(1)) algorithms.
A question on LeetCode asks you to implement an LFU (least frequently used) cache.
There are 3 methods to implement:
constructor - input: int capacity
get - input: int key; output: int
set - input: int key, int value
The capacity is the maximum number of key/value pairs the cache can hold at one time. When an insert would exceed it, you have to evict the least frequently accessed pair.
So obviously, you have to keep track of the number of times each item is accessed.
At the bottom, the question asks whether you can perform both get and set in O(1) time.
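To pin down the contract, this is the skeleton I'm working against (my own naming; I believe the current version of the problem calls set "put" and breaks frequency ties by evicting the least recently used key):

class LFUCache {
    private final int cap;   // maximum number of cached key/value pairs

    public LFUCache(int capacity) {
        this.cap = capacity;
    }

    // Return the value for key, or -1 if absent; a hit counts as an access.
    public int get(int key) {
        throw new UnsupportedOperationException("not implemented yet");
    }

    // Insert or update key/value; if the cache is full, first evict the
    // least frequently accessed key.
    public void set(int key, int value) {
        throw new UnsupportedOperationException("not implemented yet");
    }
}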
I have looked at multiple proposed implementations, each with long comment threads agreeing that they run in constant time, but to me they look like O(N).
For a full example, see here: https://leetcode.com/problems/lfu-cache/discuss/94515/Java-O(1)-Accept-Solution-Using-HashMap-DoubleLinkedList-and-LinkedHashSet?page=1
The relevant methods from that example are in the code block below. Note that valueHash and nodeHash are both Java hash tables.
public int get(int key) {
    if (valueHash.containsKey(key)) {
        increaseCount(key);            // a hit counts as an access
        return valueHash.get(key);
    }
    return -1;
}

public void set(int key, int value) {
    if (cap == 0) return;
    if (valueHash.containsKey(key)) {
        valueHash.put(key, value);     // update an existing key in place
    } else {
        if (valueHash.size() < cap) {
            valueHash.put(key, value);
        } else {
            removeOld();               // evict the least frequently used key
            valueHash.put(key, value);
        }
        addToHead(key);                // register the new key in the frequency structure
    }
    increaseCount(key);
}
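For context, the snippet leans on three helpers that aren't shown. My reading of how solutions of this shape implement them (this is my own paraphrase, not the linked author's exact code) is a doubly linked list of frequency buckets, where each bucket holds all keys with a given access count:

import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

class FreqBuckets {
    static class Node {
        int count;
        LinkedHashSet<Integer> keys = new LinkedHashSet<>();
        Node prev, next;
        Node(int count) { this.count = count; }
    }

    Node head;                                     // bucket with the lowest count
    Map<Integer, Node> nodeHash = new HashMap<>(); // key -> its current bucket

    // Register a brand-new key in a count-0 bucket at the head;
    // the caller then calls increaseCount to bump it to 1.
    void addToHead(int key) {
        if (head == null || head.count != 0) {
            Node n = new Node(0);
            n.next = head;
            if (head != null) head.prev = n;
            head = n;
        }
        head.keys.add(key);
        nodeHash.put(key, head);
    }

    // Move key from its current bucket to the count+1 bucket,
    // creating that bucket if it doesn't exist yet.
    void increaseCount(int key) {
        Node cur = nodeHash.get(key);              // O(1) hash lookup
        Node next = cur.next;
        if (next == null || next.count != cur.count + 1) {
            next = new Node(cur.count + 1);        // splice in a new bucket: O(1)
            next.prev = cur;
            next.next = cur.next;
            if (cur.next != null) cur.next.prev = next;
            cur.next = next;
        }
        cur.keys.remove(key);                      // O(1) on a LinkedHashSet
        next.keys.add(key);
        nodeHash.put(key, next);
        if (cur.keys.isEmpty()) unlink(cur);
    }

    // Evict the least frequently used key; the LinkedHashSet's
    // insertion order roughly breaks ties by least recently used.
    int removeOld() {
        int old = head.keys.iterator().next();
        head.keys.remove(old);
        nodeHash.remove(old);
        if (head.keys.isEmpty()) unlink(head);
        return old;
    }

    private void unlink(Node n) {
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev;
    }
}

Every operation there is a hash lookup or a pointer splice, which is why people call the whole design O(1). My question is about the hash lookups themselves.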
The get and set methods call Hashtable.containsKey and Hashtable.get, which, as I understand it, degrade to a linear scan of a bucket's collision chain in the worst case. So the worst-case time complexity is O(N), right? What am I missing?
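To make my worry concrete, here's the degenerate case I have in mind (BadKey is a made-up key type, chosen only so that every key collides):

import java.util.Hashtable;

// A hypothetical key whose hashCode always collides, so all entries
// land in one bucket and each lookup scans a chain of length N.
class BadKey {
    final int id;
    BadKey(int id) { this.id = id; }
    @Override public int hashCode() { return 42; }   // deliberate: constant hash
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
}

class WorstCaseDemo {
    public static void main(String[] args) {
        Hashtable<BadKey, Integer> table = new Hashtable<>();
        for (int i = 0; i < 10_000; i++) table.put(new BadKey(i), i);
        // This walks the entire collision chain before finding the entry.
        System.out.println(table.get(new BadKey(9_999)));   // prints 9999
    }
}

With keys like that, every containsKey and get call is linear in the number of entries, which is why I don't see how the solution above can claim O(1).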