
I have a dataset of 200+ PDFs that I converted into a corpus. I'm using the tm package for R for text pre-processing and mining. So far, I've successfully created the DTM (document-term matrix) and can find the x most frequently occurring terms. The goal of my research, however, is to check whether certain terms are used in the corpus. I'm not so much looking for the most frequent terms; rather, I have my own list of terms whose occurrence I want to check, and if they occur, how many times.

So far, I've tried this:

extract_terms <- content_transformer(function(x, pattern)
  regmatches(x, gregexpr(pattern, x, perl = TRUE, ignore.case = TRUE)))
keep <- "word_1|word_2"
tm_map(my_corpus, extract_terms, keep)[[1]]

and these:

library(stringr)
str_detect(my_corpus, "word_1|word_2")
str_locate_all(my_corpus, "word_1|word_2")
str_extract(my_corpus, "funds")

This last one seems to come closest, giving the output: `[1] "funds" NA NA`

None of these gives me what I need.

Bammers

1 Answer


You can use the `dictionary` option when you create your DocumentTermMatrix. The example code below shows how it works. Once the data is in DocumentTermMatrix or data.frame form, you can use aggregation functions if you don't need the word counts per document.

library(tm)

data("crude")
crude <- as.VCorpus(crude)
crude <- tm_map(crude, content_transformer(tolower))

my_words <- c("oil", "corporation")

dtm <- DocumentTermMatrix(crude, control = list(dictionary = my_words))

# create data.frame from documenttermmatrix
df1 <- data.frame(docs = dtm$dimnames$Docs, as.matrix(dtm), row.names = NULL)
head(df1)
   docs corporation oil
1   127           0   5
2   144           0  11
3   191           0   2
4   194           0   1
5   211           0   1
6   236           0   7
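If you don't need the per-document counts, the aggregation mentioned above can be sketched like this (same `crude` example, summing each dictionary term over the whole corpus):

```r
library(tm)

data("crude")
crude <- as.VCorpus(crude)
crude <- tm_map(crude, content_transformer(tolower))

my_words <- c("oil", "corporation")
dtm <- DocumentTermMatrix(crude, control = list(dictionary = my_words))

# Corpus-wide totals: sum each dictionary term over all documents
total_counts <- colSums(as.matrix(dtm))
total_counts
```

`total_counts` is a named numeric vector with one entry per dictionary term, which is usually all you need when the question is simply "does this term occur, and how often?".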
phiver
  • Hi, thank you very much. This works like a charm! I actually created a parallel dtm for the purpose of my research. Allow me to ask a follow-up question: my output doesn't use the names of the documents, something I noticed earlier. It uses 'character(0)' to name my documents. They are all named like this in my folder: LU876673445 – Bammers Jul 31 '18 at 08:03
  • If you read in your PDFs like this `VCorpus(DirSource("directory where pdfs are", pattern = ".pdf"), list(reader = readPDF))`, the document names are available in the corpus and document-term matrices. Otherwise you need to add them via `dtm$dimnames$Docs <- list_of_my_document_names` – phiver Jul 31 '18 at 11:49
  • Yes, I've read them in exactly like that. Can you elaborate on how to 'add them'? Where in this code do I have to put that? And do I need to type that list manually? `tech_terms <- c("volatiliteit", 'alfa')`; next I did `dtm_tech_terms <- DocumentTermMatrix(my_corpus, control = list(dictionary = tech_terms))`, and then `df_tech_terms <- data.frame(Docs = dtm$dimnames$Docs, as.matrix(dtm_tech_terms), row.names = NULL)` – Bammers Jul 31 '18 at 13:34
  • You can do this after creating `dtm_tech_terms`. You can use `my_files <- list.files("directory where pdfs are")` to create a vector with the names of all your PDF files. You might want to remove .pdf from the end of them with `gsub`. – phiver Jul 31 '18 at 13:56
  • https://stackoverflow.com/questions/51650023/how-to-search-for-specific-n-grams-in-a-corpus-using-r – Bammers Aug 02 '18 at 09:33
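The renaming step described in the comments above can be sketched as follows. The file names here are hypothetical stand-ins for what `list.files("directory where pdfs are")` would return on the asker's machine:

```r
# Hypothetical file names, as list.files() would return them
my_files <- c("LU876673445.pdf", "LU876673446.pdf")

# Strip the .pdf extension to get clean document names
doc_names <- gsub("\\.pdf$", "", my_files)
doc_names
# [1] "LU876673445" "LU876673446"

# Then, after building the dtm, overwrite its document names:
# dtm_tech_terms$dimnames$Docs <- doc_names
```

The anchored pattern `"\\.pdf$"` only removes the extension at the end of the name, so a literal ".pdf" elsewhere in a file name would be left intact.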