
After scraping them online, I got the children's books in text format from Gutenberg.com. Now I would like to analyze the words, but I failed at the tokenization step because the content turned out to be a list of lists.

The content is something like below:

raw[0]

['ALICE’S ADVENTURES IN WONDERLAND', 'Lewis Carroll', 'THE MILLENNIUM FULCRUM EDITION 3.0', 'CHAPTER I. Down the Rabbit-Hole', 'Alice was beginning to get very tired of sitting by her sister on the', 'bank, and of having nothing to do: once or twice she had peeped into the', 'book her sister was reading, but it had no pictures or conversations in', 'it, ‘and what is the use of a book,’ thought Alice ‘without pictures or', 'conversations?’', 'So she was considering in her own mind (as well as she could, for the', 'hot day made her feel very sleepy and stupid), whether the pleasure', 'of making a daisy-chain would be worth the trouble of getting up and', 'picking the daisies, when suddenly a White Rabbit with pink eyes ran', 'close by her.', '‘There might be some sense in your knocking,’ the Footman went on', ...]

import nltk
import pickle

# read the data as a binary data stream
with open('tokens.data', 'rb') as filehandle:
    raw = pickle.load(filehandle)

raw[0]

len(raw)    ->   407    That means we have 407 children's books.
type(raw)   ->   list   Each inner list stands for one book.

from nltk.tokenize import sent_tokenize, word_tokenize

tokenized_sents = [word_tokenize(i) for i in raw[0]]
for i in tokenized_sents:
    print(i)


['ALICE', '’', 'S', 'ADVENTURES', 'IN', 'WONDERLAND']
['Lewis', 'Carroll']
['THE', 'MILLENNIUM', 'FULCRUM', 'EDITION', '3.0']
......
['remembering', 'her', 'own', 'child-life', ',', 'and', 'the', 'happy', 
'summer', 'days', '.']
['THE', 'END']

The thing is that I can only do raw[0], raw[1], and so on, one book at a time. How can I apply a lambda (or some other construct) to process all of the books?

Sandy
  • https://stackoverflow.com/questions/21361073/tokenize-words-in-a-list-of-sentences-python – ice1x Apr 16 '19 at 16:58
  • What did you put in `tokens.data`? Without that information, only a mind reader can help you. – alexis Apr 16 '19 at 17:02
  • From the Gutenberg site you should have gotten plain text, not a pickle. I recommend you go to http://nltk.org/book and start reading. The NLTK also makes available a number of Gutenberg books as a ready-to-use corpus, so you can start easy. – alexis Apr 16 '19 at 17:09
  • If you want to flatten your list of lists into a single list, you may want to see https://stackoverflow.com/questions/952914/how-to-make-a-flat-list-out-of-list-of-lists – GreenMatt Apr 16 '19 at 17:14
  • Inside tokens.data are 407 children's books in list format. So that is the reason why we have to use pickle. – Sandy Apr 16 '19 at 17:34
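
A minimal sketch of the flattening approach from GreenMatt's link above, assuming raw is the list of lists shown in the question:

# Flatten the per-book line lists into one flat list of lines
# (raw is assumed to be a list of lists of strings, as shown above).
flat_lines = [line for book in raw for line in book]
len(flat_lines)   # total number of lines across all 407 books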

1 Answer


If you want to tokenize the entire content, then you can try something like this:

from nltk.tokenize import word_tokenize

# join each book's lines into one string, then join all books together
content = ' '.join(map(lambda l: ' '.join(l), raw))
tokens = word_tokenize(content)

The first line merges all the lists into one text, and the second tokenizes it.
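
If you would rather keep the books separate instead of merging everything into one string, the same idea works per book with a list comprehension. A small sketch, assuming raw is the list of lists from the question:

from nltk.tokenize import word_tokenize   # requires the punkt models: nltk.download('punkt')

# one flat token list per book: join each book's lines, then tokenize
tokens_per_book = [word_tokenize(' '.join(book)) for book in raw]

tokens_per_book[0][:6]
# ['ALICE', '’', 'S', 'ADVENTURES', 'IN', 'WONDERLAND']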

ololobus
  • Just edited it. Could you give some hints? Since right now I can only do raw[0], raw[1], how do I apply a lambda? – Sandy Apr 16 '19 at 17:11
  • Sure, `map` applies `join` to each of `raw[0], raw[1], etc.` via the `lambda` and returns a list of strings. Then we finally join those strings into one. – ololobus Apr 17 '19 at 09:34
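
For illustration, here is that comment traced through on a tiny made-up two-book example (the strings are hypothetical, not from tokens.data):

raw = [['First line.', 'Second line.'], ['Another book.']]
per_book = list(map(lambda l: ' '.join(l), raw))
# per_book == ['First line. Second line.', 'Another book.']
content = ' '.join(per_book)
# content == 'First line. Second line. Another book.'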