
I was solving the LeetCode problem "Longest Substring Without Repeating Characters", and I know that
- insertion and search in a hash map are O(1) on average, so n operations cost O(n)
- insertion and search in a (tree-based) map are O(log n), so n operations cost O(n log n)

But when I used a hash map (`unordered_map`) in C++, the runtime was 140 ms, and when I used `map`, the runtime was 40 ms.

The code below, using the hash map, runs in 140 ms:

class Solution {
public:
    int lengthOfLongestSubstring(string s) {

        int start = 0, mx = 0;           // window start and best length so far
        unordered_map<char, int> um;     // last index at which each character was seen

        for (int i = 0; i < s.size(); i++)
        {
            if (um.count(s[i]) && start <= um[s[i]])   // repeat inside the current window
                start = um[s[i]] + 1;                  // move the window past the previous occurrence
            else
                mx = max(mx, i - start + 1);           // window grew, update the best length

            um[s[i]] = i;
            cout << mx << "\n";                        // debug output, printed every iteration
        }
        return mx;
    }
};

The same code, with `unordered_map` replaced by `map`, runs in 40 ms.

So why does this happen, and not the opposite?

  • Because "Big O" notation describes **scalability**, not **speed**. – Drew Dormann Jan 27 '21 at 13:46
  • You can imagine, for example, a situation where `O(1)` takes an hour and `O(2^n)` takes milliseconds, but as `n` grows, the first remains constant while the second grows exponentially. – alex_noname Jan 27 '21 at 13:57
  • To make it faster, use `std::array{}` instead of maps, then index with `um[(unsigned char)s[i]]` to avoid surprises with sign conversion (but for ASCII this is not needed). – Marek R Jan 27 '21 at 13:58 (a sketch of this idea follows the comment list)
  • There is so much missing from your question. 1) What compiler and compiler options did you use to build your program? If you're running an unoptimized or "debug" build, the timings you're claiming are meaningless. 2) Where is the test data? How many items did you test? Did you test with thousands, maybe million-length (random) strings, or just a few short strings? If it's just a few short strings, again, that is not a good test. 3) Where is the actual program? You have a class, but we have no idea what your test program actually looks like. – PaulMcKenzie Jan 27 '21 at 13:58
  • @PaulMcKenzie IMO "LeetCode" is self explanatory. – Marek R Jan 27 '21 at 14:00
  • Then I suggest the @OP get a compiler, take the code that is posted, create a real app that creates and calls `Solution` with random data, and test for themselves. Relying on the mysterious man behind the curtain (Leetcode) is not a way to really learn what's going on. – PaulMcKenzie Jan 27 '21 at 14:02
  • @DrewDormann I agree with you, but I think from scalability, we can get proportion for the speed. –  Jan 27 '21 at 14:20
  • @AbanoubAsaad then I would consider [this question](https://stackoverflow.com/questions/487258/) to be a duplicate. I will let others decide if they agree. – Drew Dormann Jan 27 '21 at 14:33
  • @AbanoubAsaad -- *but I think from scalability, we can get proportion for the speed* -- No, not true. If you want a simple example, a bubble sort of 10 random elements will probably be faster than a quick-sort of 10 random elements. Does that mean that bubble sort is faster, in terms of overall scalability, than quicksort if the number of elements were increased to 1000 or a million? – PaulMcKenzie Jan 27 '21 at 16:46
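Marek R's `std::array` suggestion could look roughly like the sketch below. This is only an illustration of that comment, not the original poster's code: it assumes 8-bit `char`s (hence 256 slots) and adds a small `main` driver for testing; the algorithm itself is unchanged.

    #include <algorithm>
    #include <array>
    #include <iostream>
    #include <string>
    using namespace std;

    class Solution {
    public:
        int lengthOfLongestSubstring(string s) {
            int start = 0, mx = 0;
            // One slot per possible byte value; -1 means "not seen yet".
            array<int, 256> last;
            last.fill(-1);

            for (int i = 0; i < (int)s.size(); i++)
            {
                unsigned char c = s[i];      // cast avoids negative indices for non-ASCII bytes
                if (last[c] >= start)        // repeat inside the current window
                    start = last[c] + 1;
                else
                    mx = max(mx, i - start + 1);

                last[c] = i;
            }
            return mx;
        }
    };

    int main() {
        cout << Solution().lengthOfLongestSubstring("abcabcbb") << "\n"; // expected 3 ("abc")
    }

Here both map lookups become a plain array index, which removes hashing, tree traversal, and node allocations from the inner loop entirely.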

0 Answers