I recently used a Bag-of-Words approach to build a document-term matrix, keeping terms below a 96% sparsity threshold. Then I trained a Decision Tree model on the bag-of-words input to predict whether a sentence is important or not. The model performed really well on the test dataset, but when I use it on an out-of-sample dataset, it is not able to predict; instead it throws an error.
Here's the model that I made in R:
library('caTools')
library('tm')
library('rpart')
library('rpart.plot')
library('ROCR')
data = read.csv('comments.csv', stringsAsFactors = FALSE)
corpus = Corpus(VectorSource(data$Word))
# Pre-process data; base functions like tolower need content_transformer
# so the result stays a valid tm corpus
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, stemDocument)
# Create document-term matrix
dtm = DocumentTermMatrix(corpus)
# Remove sparse terms (drop terms absent from more than 96% of documents)
#dtm = removeSparseTerms(dtm, 0.96)
# Create data frame
labeledTerms = as.data.frame(as.matrix(dtm))
# Add in the outcome variable
labeledTerms$IsImp = data$IsImp
# Split into train and test data using caTools
set.seed(144)
spl = sample.split(labeledTerms$IsImp, 0.60)
train = subset(labeledTerms, spl == TRUE)
test = subset(labeledTerms, spl == FALSE)
# Build CART model
CART = rpart(IsImp ~ ., data = train, method = "class")
This works totally fine on the testing dataset, with around 83% accuracy. However, when I use this CART model to predict on an out-of-sample dataset, it gives me an error.
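For reference, the test-set accuracy was computed along these lines (predCART and confMat are just local names I use here):

# Predict class labels for the held-out test set
predCART = predict(CART, newdata = test, type = "class")
# Confusion matrix: actual vs predicted
confMat = table(test$IsImp, predCART)
# Overall accuracy = correct predictions / all predictions
sum(diag(confMat)) / sum(confMat)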
head(train)
terms  A B C D E F ... (n terms)
Freqs  0 1 2 1 3 0 ... (n terms)
head(test)
terms  A B C D E F ... (n terms)
Freqs  0 0 1 1 1 0 ... (n terms)
data_random = read.csv('comments_random.csv', stringsAsFactors = FALSE)
head(data_random)
terms  A B D E F H ... (n terms)
Freqs  0 0 1 1 1 0 ... (n terms)
The error I get is that it "can't find C" in data_random: the term C is part of the training vocabulary, so the CART model expects that column, but C never occurs in the out-of-sample data and its column is missing. I don't know what I should do to make this work. Is Laplace smoothing a way to handle this?
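One thing I was considering (not sure if this is the right fix) is forcing the out-of-sample matrix onto the training vocabulary via the dictionary option of DocumentTermMatrix, so that terms missing from data_random (like C) become all-zero columns instead of being absent. This assumes comments_random.csv has the same Word column as comments.csv:

# Build the out-of-sample corpus with the same pre-processing as training
corpus_random = Corpus(VectorSource(data_random$Word))
corpus_random <- tm_map(corpus_random, content_transformer(tolower))
corpus_random <- tm_map(corpus_random, stemDocument)
# Restrict the new matrix to the training vocabulary; terms that never
# occur in data_random show up as all-zero columns
dtm_random = DocumentTermMatrix(corpus_random,
                                control = list(dictionary = Terms(dtm)))
newTerms = as.data.frame(as.matrix(dtm_random))
predict(CART, newdata = newTerms, type = "class")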