I am trying to make a word cloud in R, but my dataset is too big (it contains 500,000 tweets). I always get the following error when running the line
m <- as.matrix(tdm)
"Error: cannot allocate vector of size 539.7 Gb"
Is there a more memory-efficient way to create a word cloud in R?
Here is my code so far:
library(tm)
library(wordcloud)

corpus <- Corpus(VectorSource(response$Tweet))
## clean the data
corpus <- tm_map(corpus,content_transformer(tolower))
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, removePunctuation)
tdm <- TermDocumentMatrix(corpus)
m <- as.matrix(tdm)
v <- sort(rowSums(m),decreasing = TRUE)
d <- data.frame(word = names(v), freq=v)
wordcloud(d$word, d$freq, random.order = FALSE, min.freq = 12,
          rot.per = 0.35, max.words = 150,
          colors = brewer.pal(8, "Dark2"))
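The allocation fails because as.matrix(tdm) converts the sparse TermDocumentMatrix into a dense terms-by-documents matrix, which for 500,000 documents would need roughly 540 GB. Since the word cloud only needs per-term totals, one way around this is to sum the rows of the sparse matrix directly. A sketch under the assumption that the tm, slam, and wordcloud packages are available (tm already depends on slam, whose row_sums() operates on the sparse representation without densifying it):

```r
library(tm)
library(slam)
library(wordcloud)

corpus <- Corpus(VectorSource(response$Tweet))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, removePunctuation)

tdm <- TermDocumentMatrix(corpus)

# row_sums() works on the sparse simple_triplet_matrix directly,
# so the full dense matrix is never allocated
v <- sort(slam::row_sums(tdm), decreasing = TRUE)
d <- data.frame(word = names(v), freq = v)

wordcloud(d$word, d$freq, random.order = FALSE, min.freq = 12,
          rot.per = 0.35, max.words = 150,
          colors = brewer.pal(8, "Dark2"))
```

The only change from the original code is replacing the as.matrix()/rowSums() pair with slam::row_sums(tdm); everything else, including the wordcloud() call, stays the same.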