After running h2o.deeplearning for a binary classification problem, I run h2o.predict and obtain the following results:

  predict        No       Yes
1      No 0.9784425 0.0215575
2     Yes 0.4667428 0.5332572
3     Yes 0.3955087 0.6044913
4     Yes 0.7962034 0.2037966
5     Yes 0.7413591 0.2586409
6     Yes 0.6800801 0.3199199

I was hoping to get a confusion matrix with only two rows, but this output is quite different. How do I interpret these results? Is there any way to get something like a confusion matrix with actual and predicted values and an error percentage?

Sujay DSa

1 Answer

You can either extract that information from the model fit (for example, if you passed a validation_frame during training), or you can use h2o.performance() to obtain an H2OBinomialModel performance object and pull the confusion matrix from it with h2o.confusionMatrix().

Example:

# train with a validation frame so validation metrics are stored on the model
fit <- h2o.deeplearning(x, y, training_frame = train, validation_frame = valid, ...)
# confusion matrix computed on the validation frame
h2o.confusionMatrix(fit, valid = TRUE)
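
A note on the predictions in your output: the predict column is produced by thresholding the Yes probability at the model's max-F1 threshold, not at 0.5, which is why some rows are labeled Yes with a Yes probability below 0.5. If you want the confusion matrix at an explicit cutoff instead, recent h2o versions accept a thresholds argument when computing it from a performance object (a minimal sketch; verify the argument against your h2o version):

perf_valid <- h2o.performance(fit, valid = TRUE)   # performance on the validation frame
h2o.confusionMatrix(perf_valid, thresholds = 0.5)  # confusion matrix at a 0.5 cutoff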

Or

# train without a validation frame
fit <- h2o.deeplearning(x, y, train, ...)
# score the model on a held-out test frame
perf <- h2o.performance(fit, test)
# extract the confusion matrix from the performance object
h2o.confusionMatrix(perf)
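
As for the error percentage: the object returned by h2o.confusionMatrix() can be indexed like a data.frame and, in recent h2o versions, includes an Error column for each actual class plus a Totals row. A minimal sketch, assuming perf from the snippet above (column and row names assumed from typical h2o output):

cm <- h2o.confusionMatrix(perf)
cm                                      # counts by actual (rows) vs. predicted (columns)
overall_error <- cm[nrow(cm), "Error"]  # last ("Totals") row holds the overall error rate
100 * overall_error                     # as a percentage
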
Erin LeDell