
I need an LRU cache with O(log n) complexity per operation. It is said that this complexity can be achieved with a map and a list, for example in this implementation:

http://blackcat.ca/svn/trunk/lru_cache/src/lru_cache.h

But this algorithm performs operations on the list such as: list::splice() - complexity O(n), and list::erase() - complexity O(n).

Here, people say that a map and a list give a complexity of O(log(n)): https://stackoverflow.com/a/3640392/3274299

Why O(log(n))? It should be O(n).
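For context, the map-plus-list layout under discussion looks roughly like this (a hypothetical sketch; names are illustrative and not taken from lru_cache.h):

```cpp
#include <list>
#include <map>
#include <string>
#include <utility>

// The shape of the cache under discussion: a list of keys in usage
// order, plus a map from each key to its value and to that key's
// position in the list. The splice()/erase() calls in question are
// the ones made on `recency` below.
std::list<std::string> recency;
std::map<std::string,
         std::pair<int, std::list<std::string>::iterator>> cache_index;
```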

Tipok

1 Answer


"But in this algorithm are present such operations with the list: List.splice() - Complexity O(n). List.erase() - Complexity O(n)."

Nope... where'd you get those complexity values from? Remember that you're not searching the list using the key (which would be O(n)) - rather, you're using the map to find the position in the list to operate on.

The lru_cache.h code you posted only uses overload (2) of list::splice (see the overload list on cppreference: https://en.cppreference.com/w/cpp/container/list/splice), which - as documented - has constant complexity.

Similarly, the erase use is overload (1) of list::erase (https://en.cppreference.com/w/cpp/container/list/erase), also with constant complexity.
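
To make that concrete, here is a minimal sketch of the map-plus-list design (illustrative only, not the linked lru_cache.h): the map performs the O(log n) key lookup and hands back a list iterator, so every list operation works on a known position and costs O(1).

```cpp
#include <cstddef>
#include <list>
#include <map>
#include <optional>
#include <utility>

// Minimal sketch of the map-plus-list LRU design (illustrative, not
// the linked lru_cache.h). The list stores keys in recency order;
// the map stores each key's value and list position.
// Assumes capacity >= 1.
template <typename K, typename V>
class LruCache {
    struct Entry { V value; typename std::list<K>::iterator pos; };

    std::size_t capacity_;
    std::list<K> recency_;       // front = most recently used
    std::map<K, Entry> index_;   // lookup: O(log n)

public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    std::optional<V> get(const K& key) {
        auto it = index_.find(key);                 // O(log n)
        if (it == index_.end()) return std::nullopt;
        // splice overload (2): relink one known node to the front.
        // Pure pointer surgery -- O(1), no traversal.
        recency_.splice(recency_.begin(), recency_, it->second.pos);
        return it->second.value;
    }

    void put(const K& key, V value) {
        if (auto it = index_.find(key); it != index_.end()) {  // O(log n)
            it->second.value = std::move(value);
            recency_.splice(recency_.begin(), recency_, it->second.pos);
            return;
        }
        if (index_.size() == capacity_) {
            // erase overload (1): erase at a position we already hold
            // -- O(1) on the list; the map erase is O(log n).
            index_.erase(recency_.back());          // O(log n)
            recency_.pop_back();                    // O(1)
        }
        recency_.push_front(key);                   // O(1)
        index_.emplace(key, Entry{std::move(value), recency_.begin()});
    }
};
```

So each get/put is one O(log n) map operation plus O(1) list relinking - which is exactly where the overall O(log n) bound comes from.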

Tony Delroy
  • Thanks. I didn't know that. :) – Tipok Jan 22 '15 at 10:53
  • So `erase` is O(1), but the overall implementation cannot run in constant time. Can you please provide a proof? I am talking about run-time complexity, not space complexity. – Mohamed Bana Jun 26 '23 at 05:13
  • @MohamedBana "the overall implementation cannot run in constant time" - I didn't say that, and it's not true, so I can't provide a proof. You might want to read the link from the question - https://stackoverflow.com/a/3640392/3274299 . HTH, Tony – Tony Delroy Jun 26 '23 at 06:17
  • "This is the fastest you can get, If you are using a hash_map you should almost have all the operations done in O(1). If using std::map it should take O(logn) in all cases." They key being _should almost_, that's not a guarantee. – Mohamed Bana Jun 27 '23 at 12:14
  • @MohamedBana: I'm not sure if you understand hash table properties and just feel like arguing to make sure largely irrelevant caveats are spelt out, or whether you don't understand this stuff and want an explanation. Anyway, for hash tables with cryptographic-strength hash functions you can prove the *statistical* distribution of collision chain lengths, and therefore the amortised performance properties. The "should/almost" is probably a nod to worst-case behaviour - only seen when deliberately orchestrated using hostile key inputs or with weaker hash functions (see the unordered_map sketch after these comments). – Tony Delroy Jun 28 '23 at 07:57
  • OK. Points taken. But please provide proofs. – Mohamed Bana Jun 29 '23 at 11:00
  • @MohamedBana: sorry, I'm kind of busy and don't feel that's what I want to be doing with my spare time, when you can easily google a proof (perhaps [this](https://www.coursera.org/lecture/algorithms-graphs-data-structures/universal-hashing-analysis-of-chaining-advanced-optional-YhHRN)). Further, I answered a question about list operations, and you're side-tracking into hash table operations. Ask a new question and see who wants to get involved. – Tony Delroy Jun 29 '23 at 11:24
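
To illustrate the hash-map point raised in the comments: swapping std::map for std::unordered_map in the same design makes the index lookups amortised O(1) instead of O(log n), with exactly the worst-case caveat described above (a minimal sketch, not code from the thread):

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // std::unordered_map::find is amortised O(1) with a well-behaved
    // hash; it degrades toward O(n) only under pathological collisions
    // (e.g. adversarial keys) -- the "should almost" caveat above.
    std::unordered_map<std::string, int> cache_index{{"a", 1}, {"b", 2}};
    if (auto it = cache_index.find("a"); it != cache_index.end())
        std::cout << it->second << '\n';   // prints 1
}
```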