I am trying to tokenise a list of strings into a single flat list of words. For example, given:
a = ['NEWS FLASH: popcorn-flavored Tic-Tacs', 'The way']
I would like the output to be:
a = ['NEWS', 'FLASH:', 'popcorn-flavored', 'Tic-Tacs', 'The', 'way']
I tried this code:
from nltk.tokenize import word_tokenize
tokenized = [word_tokenize(i) for i in a]
but it returns a list of lists, one inner list per sentence, rather than one flat list:
[['NEWS', 'FLASH', ':', 'popcorn-flavored', 'Tic-Tacs'], ['The', 'way']]
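For reference, here is a minimal sketch of what I mean by the desired flat output. Note that this sketch uses plain str.split() rather than word_tokenize, since split() keeps 'FLASH:' as one token, and a nested comprehension to flatten the per-sentence lists; I am not sure this is the idiomatic NLTK way:

a = ['NEWS FLASH: popcorn-flavored Tic-Tacs', 'The way']

# str.split() keeps punctuation attached ('FLASH:' stays one token),
# and the nested comprehension flattens the per-sentence lists.
tokens = [word for sentence in a for word in sentence.split()]
print(tokens)
# ['NEWS', 'FLASH:', 'popcorn-flavored', 'Tic-Tacs', 'The', 'way']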