I am writing a simple function in Python that returns the start index of a substring. I used two pointers in my original approach, but it took too long and failed a test case (time limit exceeded).
def strStr(haystack, needle):
    if needle == "":
        return 0
    for count in range(len(haystack)):
        # Stop once the remaining haystack is shorter than the needle.
        if len(needle) > len(haystack) - count:
            break
        if haystack[count] == needle[0]:
            for count2 in range(len(needle)):
                if needle[count2] != haystack[count + count2]:
                    break
            else:
                # Inner loop finished without a mismatch: full match found.
                return count
    return -1
I then found another solution online. This passes the test just fine:
def strStr(haystack, needle):
    if needle == "":
        return 0
    for i, c in enumerate(haystack):
        if len(haystack) < i + len(needle):
            break
        elif haystack[i:i+len(needle)] == needle:
            return i
    return -1
During my search, I saw that another language (I think Java) passed the test just fine using the two-pointer approach. Why does the first example take longer than the second in Python?
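To make the comparison concrete, here is a minimal timing sketch I put together (the function names and the adversarial input are my own, not from the test suite). It compares a character-by-character loop, like my first attempt, against the slice-based version, where the per-character comparison happens inside CPython's C implementation of string equality:

    import timeit

    def strstr_loops(haystack, needle):
        # Character-by-character comparison (the two-pointer style).
        if needle == "":
            return 0
        for i in range(len(haystack) - len(needle) + 1):
            for j in range(len(needle)):
                if haystack[i + j] != needle[j]:
                    break
            else:
                return i
        return -1

    def strstr_slices(haystack, needle):
        # Slice comparison: one Python-level operation per position.
        if needle == "":
            return 0
        for i in range(len(haystack) - len(needle) + 1):
            if haystack[i:i + len(needle)] == needle:
                return i
        return -1

    # Adversarial input: every position is a near-match that only
    # fails on the needle's final character.
    haystack = "a" * 10_000
    needle = "a" * 100 + "b"

    print(timeit.timeit(lambda: strstr_loops(haystack, needle), number=10))
    print(timeit.timeit(lambda: strstr_slices(haystack, needle), number=10))

On my understanding, both versions do the same number of character comparisons on this input, but the loop version pays Python interpreter overhead for each one, while the slice version pays it only once per starting position.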