I am working with NLTK and I would like to find all sentences that include a given set of key words. For example, what I have currently is

```python
[x for x in tokenized_sent if 'key_word1' in x and 'key_word2' in x and 'key_word3' in x]
```

I would like to set it up so that a user can enter any number of words, which would then take the place of these key words joined by `and`.
I have tried something like defining `user_input_list = ['key_word1', 'key_word2']` and then writing

```python
[x for x in tokenized_sent if user_input_list[0] in x and user_input_list[1] in x]
```

which works, but there has to be a better way, especially one that can handle any given number of words to look for. Thanks.
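For reference, here is a minimal runnable sketch of the setup I have in mind (the text, sentences, and key words are made up purely for illustration); the last line is the hard-coded check I am trying to generalize:

```python
from nltk.tokenize import sent_tokenize, word_tokenize
# nltk.download('punkt') may be needed once before the tokenizers work

# Made-up sample text, purely for illustration
text = ("The cat sat on the mat. The dog chased the cat around the mat. "
        "The dog slept all afternoon.")

# One list of word tokens per sentence
tokenized_sent = [word_tokenize(s) for s in sent_tokenize(text)]

# Key words the user would supply (hard-coded here for the example)
user_input_list = ['cat', 'mat']

# What I am doing now: one hard-coded `in` check per key word
matches = [x for x in tokenized_sent
           if user_input_list[0] in x and user_input_list[1] in x]
print(matches)  # only the sentences containing both 'cat' and 'mat'
```

In other words, I want the number of `in` checks to grow with however many words the user supplies, without my writing them out by hand.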