I have a dataframe like this:
Filtered_data
['defence possessed russia china','factors driving china modernise']
['force bolster pentagon','strike capabilities pentagon congress detailing china']
['missiles warheads', 'deterrent face continued advances']
......
I just want to split each list's elements into sub-elements (tokenized words). So the output I'm looking for is:
Filtered_data
[defence, possessed, russia, factors, driving, china, modernise]
[force, bolster, strike, capabilities, pentagon, congress, detailing, china]
[missiles, warheads, deterrent, face, continued, advances]
Here is the code I have tried:
for _, row in df['Filtered_data'].iteritems():  # iteritems() yields (index, value) pairs
    for sentence in row:                        # each row is a list of strings
        for word in sentence.split():
            print(word)
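For reference, here is a minimal sketch of one way to get flat token lists back into the column itself, assuming Filtered_data holds actual Python lists of strings (the sample frame below is reconstructed from the question's data):

import pandas as pd

# Sample frame reconstructed from the question; in practice df already exists
df = pd.DataFrame({
    'Filtered_data': [
        ['defence possessed russia china', 'factors driving china modernise'],
        ['force bolster pentagon', 'strike capabilities pentagon congress detailing china'],
        ['missiles warheads', 'deterrent face continued advances'],
    ]
})

# Split every sentence in each row's list and flatten into one token list per row
df['Filtered_data'] = df['Filtered_data'].apply(
    lambda sentences: [word for sentence in sentences for word in sentence.split()]
)
print(df['Filtered_data'])

If, as the sample output hints, repeated tokens should appear only once per row, wrapping the token list in list(dict.fromkeys(tokens)) would dedupe it while preserving order.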