
After running AutoML (classification with 3 classes), I can see a list of models as follows:

model_id                                               mean_per_class_error
StackedEnsemble_BestOfFamily_0_AutoML_20180420_174925  0.262355
StackedEnsemble_AllModels_0_AutoML_20180420_174925     0.262355
XRT_0_AutoML_20180420_174925                           0.266606
DRF_0_AutoML_20180420_174925                           0.278428
GLM_grid_0_AutoML_20180420_174925_model_0              0.442917

However, mean_per_class_error is not a good metric for my case, where the classes are imbalanced (one class has a very small population). How can I fetch the details of the non-leader models and calculate other metrics? Thanks.

Python version: 3.6.0

h2o version: 3.18.0.5


2 Answers


Actually, I just figured this out myself (assuming aml is the H2O AutoML object after training):

for m in aml.leaderboard.as_data_frame()['model_id']:
    print(m)
    print(h2o.get_model(m))
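Once you have a model's confusion matrix, per-class metrics such as recall are easy to compute yourself, which is often more informative than mean_per_class_error for a small minority class. A minimal sketch in plain Python, using a made-up 3x3 confusion matrix (rows = actual class, columns = predicted class) rather than real H2O output:

```python
# Made-up confusion matrix for illustration only; in practice you would
# pull one from the model's performance object on your test frame.
cm = [
    [50,  5,  5],   # actual class 0
    [ 4, 40,  6],   # actual class 1
    [ 2,  3,  5],   # actual class 2 (the small minority class)
]

def per_class_recall(cm):
    # recall for class i = correctly predicted i / all actual instances of i
    return [row[i] / sum(row) for i, row in enumerate(cm)]

recalls = per_class_recall(cm)
print(recalls)  # the minority class (index 2) has recall 5/10 = 0.5
```

A per-class view like this makes it obvious when a model that looks fine on an averaged metric is in fact performing poorly on the rare class.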


You can also grab a specific model you're interested in with the following line:

model6 = h2o.get_model(aml.leaderboard.as_data_frame()['model_id'][6])

where 6 is the index number of the model in the leaderboard.
