
I've been running AutoML on some Shopify data stored in BigQuery. After training, the evaluation page shows great performance, but when I run ML.EVALUATE on either the training or the test set, the model predicts only one class and the metrics are off as well: precision ~80%, recall 100%, and roc_auc = 0.
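
For reference, this is roughly the ML.EVALUATE call I'm running; the project, dataset, model, and table names below are placeholders rather than my real ones:

```sql
-- Evaluate the trained AutoML model against the held-out test split.
-- `my_project.my_dataset.automl_model` and `my_project.my_dataset.shopify_test`
-- are placeholders for the actual model and table.
SELECT *
FROM ML.EVALUATE(
  MODEL `my_project.my_dataset.automl_model`,
  (SELECT * FROM `my_project.my_dataset.shopify_test`)
);
```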

Another thing is that the confusion matrix shown on the evaluation page appears completely balanced (see the link below), even though the training dataset is imbalanced, with roughly 80% of the rows concentrated in one class. I couldn't find anything in the documentation about whether this is expected.
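
This is the kind of ML.CONFUSION_MATRIX query I'm using to compare the predicted class counts with the actual ~80/20 split (again with placeholder names):

```sql
-- Per-class confusion matrix for the same model, run over the training split,
-- to compare predicted class counts against the imbalanced label distribution.
SELECT *
FROM ML.CONFUSION_MATRIX(
  MODEL `my_project.my_dataset.automl_model`,
  (SELECT * FROM `my_project.my_dataset.shopify_train`)
);
```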

I wanted to check whether anyone has encountered similar behavior, and whether there are any workarounds.

evaluation page

