I know it's possible to send hyper-parameters as a dictionary to Trains, but can it also automagically log hyper-parameters that are logged using the TF2 HParams module?

Edit: in the HParams tutorial this is done via hp.hparams(hparams).
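For context, the dictionary-based approach I mentioned looks roughly like this. This is a hedged sketch: the `project_name`/`task_name` values and the hyper-parameter names are illustrative, and the import is guarded so the snippet degrades gracefully where `trains` is not installed.

```python
# Illustrative hyper-parameters (names/values are assumptions)
hparams = {"num_units": 64, "dropout": 0.2, "optimizer": "adam"}

try:
    from trains import Task

    # Task.init registers the experiment; task.connect logs the dict
    # as the experiment's hyper-parameters in the Trains web UI.
    task = Task.init(project_name="examples", task_name="hparams demo")
    task.connect(hparams)
except ImportError:
    task = None  # trains not installed; hparams remains a plain dict
```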
Disclaimer: I'm part of the allegro.ai Trains team
From the screen-grab, it seems you have multiple runs with different hyper-parameters, displayed in a parallel-coordinates graph. This is the equivalent of running the same base experiment multiple times with different hyper-parameters and comparing the results in the Trains web UI, so far so good :)
Based on the HParams interface, one would have to use TensorFlow to sample from HP, usually within the training code. How would you extend this approach to multiple experiments? (It's not just about automagically logging the hparams; you also need to create multiple experiments, one per parameter set.)
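The one-experiment-per-parameter-set point can be sketched framework-free: enumerate the grid, then launch one experiment per resulting dict. The grid keys and values below are illustrative assumptions, not taken from the thread.

```python
from itertools import product

# Illustrative hyper-parameter grid (keys/values are assumptions)
grid = {
    "learning_rate": [1e-2, 1e-3],
    "dropout": [0.1, 0.2],
}

def parameter_sets(grid):
    """Yield one dict per grid combination, i.e. one per experiment."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

sets = list(parameter_sets(grid))
# 2 learning rates x 2 dropout values -> 4 experiments to create
```

Each dict in `sets` would then be connected to its own Trains task, which is the part the HParams module does not do for you.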
Wouldn't it make more sense to use an external optimizer to do the optimization? That way you can scale to multiple machines and use more sophisticated optimization strategies (such as Optuna); you can find a few examples in the trains examples/optimization.
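To make the "external optimizer" idea concrete, here is a minimal random-search loop in plain Python. It is only a sketch of the pattern: the `objective` is a stand-in for "train a model and return a validation loss" (its formula and the `learning_rate` range are assumptions), whereas real optimizers such as Optuna or the trains optimization examples implement far richer strategies.

```python
import random

def objective(params):
    # Stand-in for training + evaluation; by construction the
    # best learning rate here is 0.01 (illustrative only).
    return (params["learning_rate"] - 0.01) ** 2

def random_search(n_trials, seed=0):
    """Try n_trials random parameter sets and keep the best one."""
    rng = random.Random(seed)
    best_score, best_params = float("inf"), None
    for _ in range(n_trials):
        # Sample a learning rate log-uniformly in [1e-4, 1e-1]
        params = {"learning_rate": 10 ** rng.uniform(-4, -1)}
        score = objective(params)
        if score < best_score:
            best_score, best_params = score, params
    return best_score, best_params

score, params = random_search(50)
```

Because the loop is external to any single training run, each trial can be dispatched to a different machine, which is exactly what a single HParams-instrumented script cannot do on its own.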