I have found some non-English words in my dictionary (from CountVectorizer) that I would like to remove:
verified={'日本': '19 日本',
'له': 'إستعداد له',
'لسنا': 'القادم لسنا',
'غيتس': 'بيل غيتس',
'على': 'على إستعداد',
'بيل': 'بيل غيتس',
'الوباء': 'الوباء القادم',
'إستعداد': 'إستعداد له',
'és': 'koronavírus és',
'állnak': 'kik állnak',
'zu': 'könig zu',
'zero': 'agenda zero'}
My attempt was to use nltk, specifically the words corpus:
import nltk
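# assumes the 'words' corpus has already been fetched via nltk.download('words')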
words = set(nltk.corpus.words.words())
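# collect the keys that are not English words (skipping entries whose value is '[]')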
not_en_list = [x for x, v in verified.items() if v!='[]' if x not in words]
But when I ran it, nothing changed: the non-English words are still there. Please note that the example above is only a sample of my data; I have thousands of English words and just a few non-English ones that I would like to delete, without copying and pasting a list of them by hand.
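For reference, what I am hoping to end up with is something like the filtering below (just a sketch of the behaviour I expect, not working code; the names english_words and verified_en are only illustrative, and it assumes the words corpus has been downloaded):
import nltk

# nltk.download('words')  # one-time download of the English word list
english_words = set(nltk.corpus.words.words())

# keep only the entries whose key looks like an English word
verified_en = {k: v for k, v in verified.items() if k.lower() in english_words}
On the small sample above this should leave only the 'zero' entry, which is the behaviour I am after for the full vocabulary.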