I'm searching for the best algorithm to solve this problem: given a list (or a dict, or a set) of short sentences, find all occurrences of those sentences in a larger text. The list contains about 600k sentences, each about 3 words long on average. The text is, on average, 25 words long. I have already normalized the text (removed punctuation, lowercased it, and so on).
Here is what I have tried out (Python):
to_find_sentences = [
'bla bla',
'have a tea',
'hy i m luca',
'i love android',
'i love ios',
.....
]
text = 'i love android and i think i will have a tea with john'
def find_sentence(to_find_sentences, text):
    words = text.split()
    res = []
    w = len(words)
    for i in range(w):
        for j in range(i + 1, w + 1):
            tmp = ' '.join(words[i:j])
            if tmp in to_find_sentences:
                res.append(tmp)
    return res

print(find_sentence(to_find_sentences, text))
Out:
['i love android', 'have a tea']
In my case I've used a set for to_find_sentences to speed up the in membership test.
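One possible refinement of the window search above, sketched here under the assumption that no phrase exceeds some known length (max_words is a hypothetical parameter, set to 4 below): since the phrases average 3 words, the inner loop can stop after max_words words instead of joining every suffix of the text, and set membership keeps each lookup O(1).

```python
# Sketch: same sliding-window search, but the inner loop is bounded by
# the longest phrase length (max_words, an assumed parameter) rather
# than running to the end of the text.
to_find_sentences = {
    'bla bla',
    'have a tea',
    'hy i m luca',
    'i love android',
    'i love ios',
}

def find_sentences(phrases, text, max_words=4):
    words = text.split()
    found = []
    n = len(words)
    for i in range(n):
        # Join at most max_words words starting at position i.
        for j in range(i + 1, min(i + max_words, n) + 1):
            candidate = ' '.join(words[i:j])
            if candidate in phrases:
                found.append(candidate)
    return found

print(find_sentences(to_find_sentences,
                     'i love android and i think i will have a tea with john'))
# Prints ['i love android', 'have a tea']
```

With 25-word texts this turns the quadratic number of joins into roughly 25 × max_words, at the cost of having to know (or compute) the longest phrase length up front.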