
I'd like to take a list of comments from a data frame, first parse each comment into a list of sentences, then on a second pass parse each sentence into words. I need this as input to a word2vec model in gensim.

I have already used sent_tokenize from nltk to tokenize once, but if I then try to word_tokenize, I get an error, because each cell is no longer a string and the function expects a string or bytes-like object.

import nltk

print(df)

ID Comment
0   Today is a good day.
1   Today I went by the river. The river also flow...
2   The water by the river is blue, it also feels ...
3   Today is the last day of spring; what to do to...

df['sentences']=df['Comment'].dropna().apply(nltk.sent_tokenize)

df['word']=df['sentences'].dropna().apply(nltk.word_tokenize)

After trying to pass the sentences into word_tokenize: TypeError: expected string or bytes-like object
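One way to sketch the intended two-pass tokenization is to map word_tokenize over each sentence in the list rather than passing the list itself. The splitters below are simple regex stand-ins for nltk.sent_tokenize / nltk.word_tokenize (an assumption, so the sketch runs without NLTK's punkt data); the real tokenizers can be dropped in the same way.

```python
import re
import pandas as pd

# Stand-ins for nltk.sent_tokenize / nltk.word_tokenize (assumption:
# naive regex splitters so this runs without downloading NLTK data).
def sent_tokenize(text):
    # Split on sentence-ending punctuation followed by whitespace.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def word_tokenize(sentence):
    # Words and standalone punctuation marks.
    return re.findall(r"\w+|[^\w\s]", sentence)

df = pd.DataFrame({'Comment': ['Today is a good day.',
                               'Today I went by the river. The river also flows.']})

# First pass: comment -> list of sentences.
df['sentences'] = df['Comment'].dropna().apply(sent_tokenize)

# Second pass: tokenize each sentence individually; word_tokenize
# expects a string, so map it over the list instead of passing the list.
df['words'] = df['sentences'].apply(lambda sents: [word_tokenize(s) for s in sents])

# Flatten across comments into gensim's expected Word2Vec input:
# a list of token lists, one per sentence.
corpus = [tokens for sent_lists in df['words'] for tokens in sent_lists]
print(corpus)
```

The flattened `corpus` can then be fed to `gensim.models.Word2Vec(sentences=corpus, ...)`, which expects an iterable of token lists.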

This post on [NLTK-based text processing with pandas](https://stackoverflow.com/questions/48049087/nltk-based-text-processing-with-pandas/48049425?r=SearchResults&s=2|47.7221#48049425) is mostly what you're looking for. – cs95 May 31 '19 at 04:45

1 Answer

I guess the problem is that you have no null values, so you can try:

df['word']=df['sentences'].apply(nltk.word_tokenize)
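Note that if `df['sentences']` already holds lists (the output of sent_tokenize), nltk.word_tokenize will still raise the same TypeError, since it expects a string. A minimal sketch of tokenizing sentence-by-sentence instead, using a plain whitespace splitter as a stand-in for nltk.word_tokenize (an assumption, to avoid needing NLTK data here):

```python
import pandas as pd

# Stand-in for nltk.word_tokenize (assumption: whitespace split).
def word_tokenize(sentence):
    return sentence.split()

df = pd.DataFrame({'sentences': [['Today is a good day.'],
                                 ['Today I went by the river.',
                                  'The river also flows.']]})

# Each cell is a *list* of sentences, so tokenize each sentence
# individually rather than handing the whole list to the tokenizer.
df['word'] = df['sentences'].apply(lambda sents: [word_tokenize(s) for s in sents])
print(df['word'])
```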
AnkushRasgon