I need to split a text into sentences without using regex or any imported module, only the built-in split(). The function should take a text as input and return a list containing the sentences in the text, delimited by '?', '!' and '.'. For example:
>>> t = "Are you out of your mind? I can't believe it! I'm so disappointed."
>>> get_sentences(t)
['Are you out of your mind', "I can't believe it", "I'm so disappointed"]
Here is my work so far:
def get_sentences(text):
    l1 = text.split('.')
    for l2 in l1:
        l2 = l2.split('!')
        for l3 in l2:
            l3 = l3.split('?')
    return l1
Any help, please?
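From what I understand, the problem is that split() returns a new list, so reassigning the loop variables l2 and l3 never changes l1; I would need to collect the pieces and flatten them after each split. Here is a minimal sketch of that idea (the strip() cleanup to drop whitespace and empty strings is my own assumption, since the requirement only mentions split()):

def get_sentences(text):
    sentences = [text]
    for sep in '.!?':
        # Re-split every piece on the current delimiter and flatten,
        # so the list stays one level deep after each pass.
        pieces = []
        for chunk in sentences:
            pieces.extend(chunk.split(sep))
        sentences = pieces
    # strip() tidies leading spaces; the filter drops the empty string
    # that a trailing delimiter leaves behind.
    return [s.strip() for s in sentences if s.strip()]

Running it on the example gives:

>>> get_sentences("Are you out of your mind? I can't believe it! I'm so disappointed.")
['Are you out of your mind', "I can't believe it", "I'm so disappointed"]

Is this the right way to do it, or is there a cleaner approach with just split()?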