
I'm using LDA for topic modeling:

dtm <- DocumentTermMatrix(docs)

However, there are rows of dtm in which all elements are zero. So I followed the instructions here to remove them:

ui = unique(dtm$i)   # row indices that have at least one non-zero entry
dtm.new = dtm[ui,]   # keep only those documents

And then LDA works and I have the topics and everything. My next attempt is to use LDAvis, as recommended here. Source code:

topicmodels_json_ldavis <- function(fitted, corpus, doc_term){
  # Required packages
  library(topicmodels)
  library(dplyr)
  library(stringi)
  library(tm)
  library(LDAvis)

  # Find required quantities
  phi <- posterior(fitted)$terms %>% as.matrix
  theta <- posterior(fitted)$topics %>% as.matrix
  vocab <- colnames(phi)
  doc_length <- vector()
  for (i in 1:length(corpus)) {
    temp <- paste(corpus[[i]]$content, collapse = ' ')
    doc_length <- c(doc_length, stri_count(temp, regex = '\\S+'))
  }
  temp_frequency <- inspect(doc_term)
  freq_matrix <- data.frame(ST = colnames(temp_frequency),
                            Freq = colSums(temp_frequency))
  rm(temp_frequency)

  # Convert to json
  json_lda <- LDAvis::createJSON(phi = phi, theta = theta,
                                 vocab = vocab,
                                 doc.length = doc_length,
                                 term.frequency = freq_matrix$Freq)

  return(json_lda)
}

When I call the topicmodels_json_ldavis function, I receive this error:

Length of doc.length not equal to the number of rows in theta; 
both should be equal to the number of documents in the data.

I checked the lengths of theta and doc.length, and they are indeed different. I assume this is because I pass the original corpus (docs), which produces a dtm with (at least) one all-zero row. In order for the corpus to match the document-term matrix, I decided to build a new corpus from dtm.new, as suggested here. Source code:

# rebuild one pseudo-document per DTM row by repeating each term by its count
dtm2list <- apply(dtm, 1, function(x) {
  paste(rep(names(x), x), collapse=" ")
})

myCorp <- VCorpus(VectorSource(dtm2list))

I even fit a new model, ldaOut22, on dtm.new and passed the following arguments to topicmodels_json_ldavis: ldaOut22, myCorp, dtm.new.

I still receive the error message that theta and doc.length must have the same length.


1 Answer


I had the exact same problem: I was able to remove the rows with all-zero vectors for the LDA analysis, but then ran into the row count of the sparse matrix no longer matching the number of documents for LDAvis. I solved it, unfortunately only in Python, but you may use the following approach as a starting point.
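For context, a minimal setup sketch of the objects used below: data['tokens'] is assumed to be a pandas Series of token strings, and cvz the count matrix from a CountVectorizer; the toy data here is only an assumed stand-in.

import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-in data: one document per row; the empty one becomes an all-zero row.
data = pd.DataFrame({'tokens': ['alpha beta beta', 'beta gamma', '', 'gamma delta']})

# Build the document-term count matrix (cvz) from the token strings.
cv = CountVectorizer()
cvz = cv.fit_transform(data['tokens'])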

Let's see what I've got first:

print(f'The tf matrix:\n {cvz.toarray()[:100]}\n')
sparseCountMatrix = np.array(cvz.toarray())
print(f'Number of non-zero vectors: {len(sparseCountMatrix[sparseCountMatrix > 0])} Number of zero vectors: {len(sparseCountMatrix[sparseCountMatrix == 0])}\n')
print(f'Have a look at the non-zero vectors:\n{sparseCountMatrix[sparseCountMatrix > 0][:200]}\n')
print(f'This is our sparse matrix with {sparseCountMatrix.shape[0]} (# of documents) by {sparseCountMatrix.shape[1]} (# of terms in the corpus):\n{sparseCountMatrix.shape}')

Output:

The tf matrix:
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]

Number of non-zero vectors: 4721 Number of zero vectors: 232354

Have a look at the non-zero vectors:
[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]

This is our sparse matrix with 545 (# of documents) by 435 (# of terms in the corpus):
(545, 435)

How many rows contain all zero vectors?

len(list(np.array(sparseCountMatrix[(sparseCountMatrix==0).all(1)])))

Output: 12

How many rows contain at least one non-zero vector?

len(list(np.array(sparseCountMatrix[~(sparseCountMatrix==0).all(1)])))

Output: 533

Remove the 12 rows which contain all zero vectors for LDA Analysis:

cleanedSparseCountMatrix = np.array(sparseCountMatrix[~(sparseCountMatrix==0).all(1)])

Also remove these documents from the original Pandas Series (tokens), so that the document count matches the sparse matrix row count, which is important for visualizing the LDA results with pyLDAvis.

First, to get the index positions of the rows with all zero vectors, use np.where:

indexesToDrop = np.where((sparseCountMatrix==0).all(1))
print(f"Indexes with all zero vectors: {indexesToDrop}\n")

Output:

Indexes with all zero vectors: (array([ 47,  77,  88,  95, 106, 109, 127, 244, 363, 364, 367, 369],
    dtype=int64),)

Second, use these indexes to drop the corresponding rows from the original Pandas Series with Series.drop:

data_tokens_cleaned = data['tokens'].drop(data['tokens'].index[indexesToDrop])

New length of the cleaned tokens (it should match the sparse matrix row count!):

len(data_tokens_cleaned)

Output:

533

This is our cleaned sparse matrix, ready for LDA analysis:

print(cleanedSparseCountMatrix.shape)

Output: (533, 435)
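
To tie this back to the LDAvis question, here is a rough sketch (not part of the workflow above) of how the cleaned matrix could feed pyLDAvis. It assumes an sklearn LatentDirichletAllocation model and the cv CountVectorizer from the setup sketch, and it mirrors the quantities phi, theta, doc.length, vocab and term.frequency from the R function:

from sklearn.decomposition import LatentDirichletAllocation
import pyLDAvis

# Fit LDA on the cleaned matrix (the number of topics is arbitrary here).
lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(cleanedSparseCountMatrix)            # document-topic distributions
phi = lda.components_ / lda.components_.sum(axis=1)[:, None]   # topic-term distributions

doc_lengths = cleanedSparseCountMatrix.sum(axis=1)             # tokens per kept document
term_frequency = cleanedSparseCountMatrix.sum(axis=0)          # corpus-wide term counts
vocab = cv.get_feature_names_out()                             # cv.get_feature_names() on older sklearn

# Because the zero rows were dropped from both the matrix and the tokens,
# len(doc_lengths) equals theta.shape[0], which is exactly what LDAvis checks.
vis = pyLDAvis.prepare(phi, theta, doc_lengths, vocab, term_frequency)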
