I am trying to do bigram tokenization on a CSV file, but it is taking a very long time. I have compared my code against existing examples on SO and couldn't find any fault in it. My code is shown below:
library(tm)
library(RWeka)
library(tmcn.word2vec)
library(openNLP)
library(NLP)
data <- read.csv("Train.csv", header = TRUE)
corpus <- Corpus(VectorSource(data$EventDescription))   # one document per row of the text column
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, PlainTextDocument)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))   # bigrams only
dtm <- DocumentTermMatrix(corpus, control = list(tokenize = BigramTokenizer))
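For reference, I have also seen a Java-free bigram tokenizer based on NLP::ngrams suggested as an alternative to RWeka. The sketch below is not my current code (it assumes the same corpus object built above), and I would still like to understand why the RWeka version is so slow:

# split each document into words, form consecutive word pairs, and paste them back together
BigramTokenizer2 <- function(x)
  unlist(lapply(ngrams(words(x), 2), paste, collapse = " "), use.names = FALSE)
dtm2 <- DocumentTermMatrix(corpus, control = list(tokenize = BigramTokenizer2))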
Can anyone help me solve this problem? Thanks in advance.