I am implementing a hash table where the keys are the 26 alphabetic characters and each value is an array corresponding to its key; that array contains all the words that start with the corresponding character. So, to search for a specific word in that hash table, I search the keys to find the word's first character, and once it is found I search the corresponding array to find the specific word. Does that take O(n^2), since I search the keys for the specific character and then search the corresponding array for the specific word, or does it take O(log n)?
-
I expect that would be `O(n)`, since the hash lookup is `O(1)` and the list search is `O(n)`. – xtratic Jun 18 '19 at 15:23
-
How about simply implementing it and profiling it for different values of *n*? In a plotted graph it is very easy to distinguish between O(n^2) and O(log n). – MrSmith42 Jun 18 '19 at 15:23
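For what it's worth, a timing experiment along those lines could look roughly like this (a sketch only; the random word generation and `timeit` setup are my own assumptions, not anything from the thread):

```python
import random
import string
import timeit

def random_word(length=8):
    return ''.join(random.choices(string.ascii_lowercase, k=length))

for n in (1_000, 10_000, 100_000):
    words = [random_word() for _ in range(n)]
    table = {c: [] for c in string.ascii_lowercase}   # 26 buckets keyed by first letter
    for w in words:
        table[w[0]].append(w)
    target = random.choice(words)
    # Time repeated lookups of one word; plot these timings against n to see the trend.
    t = timeit.timeit(lambda: target in table[target[0]], number=1_000)
    print(f"n={n}: {t:.4f} s for 1000 lookups")
```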
-
Why use a hash table? If there are exactly 26 different keys, you can use an array. – MrSmith42 Jun 18 '19 at 15:25
-
A hash table for word search seems quite inefficient, especially if you have a lot of words that start with the same character. Wouldn't a [trie](https://en.wikipedia.org/wiki/Trie) be better suited? – Thomas Jun 18 '19 at 15:25
-
@Thomas: tries are horribly memory inefficient. – Jun 18 '19 at 15:28
-
What does n denote? – Jun 18 '19 at 15:31
-
Though your post is a little obscure, I don't think you are truly implementing a hash table. And how would you justify the options O(N²) and O(log N) other than as random guesses? – Jun 18 '19 at 15:34
-
@YvesDaoust yes they are, although that might not be a problem - it's the old story of time vs. space ;) – Thomas Jun 18 '19 at 15:36
-
@Thomas: 208 bytes per character vs. 2~3 bytes per character for a hash table, and comparable speeds. – Jun 18 '19 at 18:24
-
@YvesDaoust I'm not sure where you get those numbers from, but there are various ways to implement a hashmap and a trie, whose efficiency depends on the data (e.g. if there are a lot of common prefixes or hash collisions) and the functional needs (which haven't been mentioned here). However, you are right in that we'd need to carefully consider which structure fits best - unless that's totally irrelevant and the OP implementing their own hashmap is just for learning purposes (I'd assume it's not for increased efficiency, given the overall feel of the question). – Thomas Jun 19 '19 at 07:27
-
@Thomas: my feeling is that the OP is trying to implement a home-made dictionary data structure and has little notion of a hash map. – Jun 19 '19 at 07:32
-
@YvesDaoust yes, exactly :) – Thomas Jun 19 '19 at 07:49
1 Answer
You didn't post any code, so I'm going to make some assumptions. You say you have a hashtable with 26 keys (known in advance), so I'll assume this is implemented optimally as an array of 26 elements, which gets `O(1)` access by key into that array. Then you say you have arrays at each element, containing all of the elements that start with that letter. As hash functions go, it's fairly weak, but it certainly is a valid one.
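To make those assumptions concrete, here is a minimal Python sketch of that layout (the `WordTable` name, the lowercase-ASCII restriction, and the method names are my own illustrative choices, not anything from the question):

```python
class WordTable:
    """26 buckets, one per lowercase letter; each bucket is a plain Python list."""

    def __init__(self):
        # Index 0 holds words starting with 'a', ..., index 25 words starting with 'z'.
        self.buckets = [[] for _ in range(26)]

    def bucket_for(self, word):
        # The "hash function": map the first letter to an index in 0..25, in O(1).
        return self.buckets[ord(word[0]) - ord('a')]

    def insert(self, word):
        self.bucket_for(word).append(word)  # amortized O(1) append at the end
```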
So, when we want to search for a particular value, we need to look in the appropriate bin based on the first letter (which takes `O(1)`) and then search linearly through that bin (`O(n)`). So overall, the complexity is `O(n)`, assuming again that your top-level hashtable data structure is implemented efficiently.
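In terms of the hypothetical `WordTable` sketched above, that lookup is just:

```python
def contains(table, word):
    # O(1) to select the bucket by first letter, then an O(n) linear scan of the bucket.
    return word in table.bucket_for(word)  # `in` on a Python list is a linear search
```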
Now, since you say "first letter", I'll assume you're operating on strings, which gives a possibility for optimization. Strings have a nice property in that they can be ordered easily. So if you ensure that your bins are always kept sorted in lexicographic order, you can get `O(log n)` lookup with a binary search rather than a linear one. Note that this change takes your insertion function from (amortized) `O(1)` to `O(n)`, so you should consider whether you'll be inserting or searching more often.
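If you go the sorted-bin route, the standard-library `bisect` module gives you both pieces; again, this only sketches the idea on top of the hypothetical `WordTable` above:

```python
import bisect

def insert_sorted(table, word):
    # Finding the position is O(log n), but shifting elements to keep the
    # bucket sorted makes the insertion O(n) overall.
    bisect.insort(table.bucket_for(word), word)

def contains_sorted(table, word):
    # Binary search within the sorted bucket: O(log n).
    bucket = table.bucket_for(word)
    i = bisect.bisect_left(bucket, word)
    return i < len(bucket) and bucket[i] == word
```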
