I'm trying to find optimal hyperparameters with Ray Tune:
from ray import tune
from ray.tune.schedulers import ASHAScheduler
from ray.train import RunConfig, CheckpointConfig  # ray >= 2.7; earlier versions use ray.air

tuner = tune.Tuner(
    train,
    param_space=hyperparams1,
    tune_config=tune.TuneConfig(
        num_samples=200,
        metric="score",
        mode="max",
        scheduler=ASHAScheduler(grace_period=6),
    ),
    run_config=RunConfig(
        stop={"score": 290},
        checkpoint_config=CheckpointConfig(checkpoint_score_attribute="score"),
    ),
)
Sometimes my model overfits and its results become worse over time, i.e., I get something like 100, 200, 220, 140, 90, 80. Ray shows me the current "best result" in its status, but it selects the best value only from the last iteration (i.e., the best value for the sequence above would be 80).
I'm sure that results with higher values are better, so it would be nice to select the best result based on the whole history, not on the last value.
Is there a way to force it to use the whole training history when selecting the best result? Or should I interrupt training manually when I see that the model is no longer improving? Or is it already saving all the results, so all I need to do is filter them after it finishes?
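To make the third option concrete, here's the kind of post-hoc filtering I have in mind, sketched with pandas over a mocked-up metrics history (the column names trial_id / training_iteration / score are my assumptions about what Ray reports per iteration, not a verified schema; in reality the data would come from each trial's saved progress):

```python
import pandas as pd

# Mocked-up per-iteration results for two trials; in reality this would
# be loaded from Ray's saved trial progress rather than built by hand.
history = pd.DataFrame({
    "trial_id": ["a"] * 6 + ["b"] * 6,
    "training_iteration": list(range(1, 7)) * 2,
    "score": [100, 200, 220, 140, 90, 80,   # trial "a" overfits after iteration 3
              50, 120, 150, 180, 190, 185],
})

# Best score each trial ever reached, not just its last reported value.
best_per_trial = history.loc[history.groupby("trial_id")["score"].idxmax()]
print(best_per_trial)

# Overall best result across the whole history of every trial.
overall_best = history.loc[history["score"].idxmax()]
print(overall_best["trial_id"], overall_best["score"])  # trial "a", 220
```

With selection over the whole history, trial "a" is reported at its peak (220) instead of its degraded final value (80), which is what I'd like Ray to do for me.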
I've seen this Checkpoint best model for a trial in ray tune and have added a CheckpointConfig to my code, but it seems like it doesn't help: I still see the last result.