
First of all, I should say that I am new to Python. At the moment, I am "translating" a lot of R code into Python and learning along the way. This question relates to this one: replicating R in Python (there they actually suggest wrapping everything up with rpy2, which I would like to avoid for learning purposes).

In my case, rather than exactly replicating R in Python, I would like to learn a Pythonic way of doing what I describe here:

I have a long vector (40000 elements) in which each element is a piece of text, for example:

> descr
[1] "dress Silver Grey Printed Jersey Dress 100% cotton"
[2] "dress Printed Silk Dress 100% Silk Effortless style."                                                                                                                                                                                    
[3] "dress Rust Belted Kimono Dress 100% Silk relaxed silhouette, mini length" 

I then preprocess it as, for example:

# customized function to remove repeated patterns in strings, used later within tm_map
rmRepeatPatterns <- function(str) gsub('\\b(\\S+?)\\1\\S*\\b', '', str, perl = TRUE)

# process the corpus
pCorp <- Corpus(VectorSource(descr))
pCorp <- tm_map(pCorp, content_transformer(tolower))
pCorp <- tm_map(pCorp, content_transformer(rmRepeatPatterns))
pCorp <- tm_map(pCorp, removeWords, stopwords("english"))
pCorp <- tm_map(pCorp, removePunctuation)
pCorp <- tm_map(pCorp, removeNumbers)
pCorp <- tm_map(pCorp, stripWhitespace)
pCorp <- tm_map(pCorp, PlainTextDocument)

# create a term document matrix (control functions can also be passed here) and a table: word - freq
Tdm1 <- TermDocumentMatrix(pCorp)
freq1 <- rowSums(as.matrix(Tdm1))
dt <- data.table(terms=names(freq1), freq=freq1)

# and perhaps even calculate a distance matrix (transpose because Dist operates on a row basis)
D <- Dist(t(as.matrix(Tdm1)))

Overall, I would like to know an adequate way of doing this in Python, mainly the text processing.

For example, I could remove stopwords and numbers as described here: get rid of StopWords and Numbers (although that seems like a lot of work for such a simple task). But all the options I see imply processing the text itself rather than mapping over the whole corpus. In other words, they imply "looping" through the descr vector.

Anyway, any help would be really appreciated. Also, I have a bunch of customised functions like rmRepeatPatterns, so learning how to map these would be extremely useful.

Thanks in advance for your time.


1 Answer


Looks like "doing this" involves making some regexp substitutions to a list of strings. Python offers a lot more power than R in this domain. Here's how I'd apply your rmRepeatPatterns substitution, using a list comprehension:

import re

pCorp = [ re.sub(r'\b(\S+?)\1\S*\b', '', line) for line in pCorp ]

If you wish to wrap this in a function:

def rmRepeatPatterns(line):
    return re.sub(r'\b(\S+?)\1\S*\b', '', line)

pCorp = [ rmRepeatPatterns(line) for line in pCorp ]

Python also has a map function that you could use with yours (note that in Python 3, map returns a lazy iterator, so wrap it in list() if you need an actual list):

pCorp = list(map(rmRepeatPatterns, pCorp))

But list comprehensions are more powerful, expressive and flexible; as you can see, you can apply simple substitutions without burying them in a function.
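Since you mention having a bunch of customised functions, note that you can chain several of them in a single comprehension. Here is a sketch: the two helpers below are my rough Python ports of tm's removeNumbers and stripWhitespace, not part of any library.

def removeNumbers(line):
    # drop all digit runs, like tm's removeNumbers
    return re.sub(r'\d+', '', line)

def stripWhitespace(line):
    # collapse whitespace runs to a single space, like tm's stripWhitespace
    return re.sub(r'\s+', ' ', line).strip()

pCorp = [ stripWhitespace(removeNumbers(rmRepeatPatterns(line))) for line in pCorp ]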

Additional notes:

  1. If your datasets are large, you can also learn about using generators instead of list comprehensions; essentially they let you generate your elements on demand, instead of creating a lot of intermediate lists (see the sketch after this list).

  2. Python has some functional tools like map, but if you'll be doing a lot of matrix manipulations you should read about numpy, which offers a more R-like experience.
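To illustrate point 1, a minimal sketch: the list comprehension above becomes a generator expression just by swapping the square brackets for parentheses, and lines are then cleaned only as you iterate over them.

# Nothing is computed when this line runs; work happens on demand.
cleaned = (rmRepeatPatterns(line) for line in pCorp)

for line in cleaned:  # each line is substituted only at this point
    print(line)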

Edit: Having looked again at your sample R script, here's how I'd do the rest of the clean-up, i.e. take your list of lines, convert to lower case, drop punctuation and digits (specifically: everything that's not an English letter), and remove stopwords.

# Lower-case, split into words, discard everything that's not a letter.
# Splitting can leave empty strings at the line edges, so they are
# filtered out together with the stopwords below.
tok_lines = [ re.split(r"[^a-z]+", line.lower()) for line in pCorp ]
# tok_lines is now a list of lists of words

import nltk  # the stopwords corpus must be installed: nltk.download("stopwords")
stopwordlist = nltk.corpus.stopwords.words("english") # or any other list
stopwords = set(w.lower() for w in stopwordlist)
cleantoks = [ [ t for t in line if t and t not in stopwords ]
                                        for line in tok_lines ]
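From here, a term-frequency table like your freq1/data.table step is a one-liner with collections.Counter (a sketch; most_common simply sorts terms by count):

from collections import Counter

# count every token across the whole corpus: term -> frequency
freq = Counter(t for line in cleantoks for t in line)
print(freq.most_common(10))  # the ten most frequent terms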

I wouldn't advise using either of the proposed solutions in the question you link to. Looking up things in a set is a lot faster than looking them up in a large list, and I would use a comprehension instead of filter().
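As for the TermDocumentMatrix and Dist steps at the end of your script, I'm not aware of a single drop-in equivalent, but scikit-learn and scipy between them cover it. A sketch, assuming both are installed; note that CountVectorizer builds a document-term matrix, i.e. the transpose of R's term-document matrix, and that older scikit-learn releases call get_feature_names_out simply get_feature_names:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from scipy.spatial.distance import pdist, squareform

# join the token lists back into strings for the vectorizer
docs = [ " ".join(line) for line in cleantoks ]

vec = CountVectorizer()
dtm = vec.fit_transform(docs)                # documents x terms, sparse

terms = vec.get_feature_names_out()          # the vocabulary
freq = np.asarray(dtm.sum(axis=0)).ravel()   # like rowSums(as.matrix(Tdm1))

# euclidean distances between documents, like Dist(t(as.matrix(Tdm1)))
D = squareform(pdist(dtm.toarray(), metric="euclidean"))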

Many thanks for your answer. I also came across the gensim package, which seems very very useful. It is taking some time to adapt my way of coding from R to Python :) – Javier May 11 '15 at 10:57