8

When I run a Keras Tuner search, the code runs for some epochs and then prints: 'INFO:tensorflow:Oracle triggered exit'.

What does this mean? I am still able to extract the best hyperparameters. Is it due to early stopping? I have tried both RandomSearch and Hyperband.

endorphinus

10 Answers

7

You can solve this with:

from kerastuner.tuners import RandomSearch

tuner = RandomSearch(
    tune_rnn_model,              # your model-building function
    objective='val_accuracy',
    seed=SEED,
    overwrite=True,              # discard any prior results in `directory`
    max_trials=MAX_TRIALS,
    directory='project')

To begin a new search and ignore any prior results, set overwrite=True. Alternatively, you can delete the directory folder:

!rm -r <directory folder>
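
After re-creating the tuner, the search runs as usual. A minimal sketch, where x_train, y_train, x_val, y_val, and EPOCHS are placeholders:

tuner.search(x_train, y_train,
             epochs=EPOCHS,
             validation_data=(x_val, y_val))
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]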
nathan liang
Rezuana Haque
4

The reason is probably that the directory already exists.

Try the following steps:

  1. Change the directory name.
  2. Restart the kernel.
  3. Re-run all the code.
  • Welcome to SO! Unfortunately your answer doesn't add anything to the most voted one. Please edit it providing additional info and/or a code sample. – nico9T Apr 14 '21 at 20:12
1

Try adding the directory argument where you define your tuner, or, if you have already added the directory argument, try changing its value. Note the last line in the RandomSearch example below:

tuner = RandomSearch(
    tune_rnn_model,
    objective='val_accuracy',
    seed=SEED,
    max_trials=MAX_TRIALS,
    directory='change-this-value',   # use a directory name not used before
)
Milad Ce
1

I solved this issue by setting these two conditions on my tuner (see the sketch after this list):

  • overwrite=False
  • a max_trials value in the Oracle greater than the trial count reached when the "Oracle triggered exit" message occurred (I'm using the kerastuner.oracles.BayesianOptimization Oracle)
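
A minimal sketch of that setup, assuming the older kerastuner package named in this answer; build_model, the directory name, and the max_trials value are placeholders:

import kerastuner as kt

oracle = kt.oracles.BayesianOptimization(
    objective='val_accuracy',
    max_trials=50,           # larger than the trial count reached before the exit message
)

tuner = kt.Tuner(
    oracle=oracle,
    hypermodel=build_model,  # your model-building function
    overwrite=False,         # keep prior results so the search resumes
    directory='project',
)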
1

I found the same issue, and there is a very easy fix: remove the JSON state files (oracle.json and the other .json files) from the directory generated by Keras Tuner, then run the search again.
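
A small sketch of that cleanup, assuming 'project' is whatever you passed as the tuner's directory argument:

import glob
import os

for state_file in glob.glob('project/**/*.json', recursive=True):
    os.remove(state_file)   # removes oracle.json and the per-trial .json files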

0

I believe this is occurring because you are working on a small dataset, which results in a large number of collisions while performing random search.

Try reducing max_trials in your RandomSearch; that may fix the issue.

vbhargav875
  • max_trials is set to 1, and the dataset has 28 features, with 20,000 training instances and 6,000 validation instances. When I run the Hyperband search it runs for some time before showing the message, but when I run RandomSearch I get the message instantly – endorphinus Jun 08 '20 at 09:24
0

I had the same issue with the Hyperband search.

For me, the issue was solved by removing the EarlyStopping callback from the tuner search.
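
A minimal sketch of the search call without the callback; x_train, y_train, and the epoch count are placeholders:

tuner.search(x_train, y_train,
             epochs=10,
             validation_split=0.2,
             callbacks=[])   # previously contained an EarlyStopping callback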

Carlos
0

For me, the issue was resolved by moving hp = HyperParameters() out of the build_model function; that is, initialize the hp variable outside the model-building function.
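
One way to read this fix is that build_model should use the hp instance the tuner passes in, rather than constructing its own HyperParameters(). A minimal sketch, where the layers and loss are placeholders:

from tensorflow import keras

def build_model(hp):   # use the `hp` the tuner passes in;
                       # do not create HyperParameters() here
    model = keras.Sequential([
        keras.layers.Dense(hp.Int('units', 32, 256, step=32), activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model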

0

I had this issue because I named two hyperparameters with the same names.

E.g., in the build_model(hp) function I had:

def build_model(hp):
   ...
   a = hp.Choice('embedding_dim', [32, 64])    # same name...
   b = hp.Choice('embedding_dim', [128, 256])  # ...used twice
   ...
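
Giving each hyperparameter its own name fixes it (the second name below, dense_units, is just an illustrative placeholder):

def build_model(hp):
   ...
   a = hp.Choice('embedding_dim', [32, 64])
   b = hp.Choice('dense_units', [128, 256])
   ...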

A final note: be careful to have at least as many hyperparameter combinations as trials. In my example build_model function there are 4 possible combinations of hyperparameters (2*2), so max_trials should be at most 4.

I hope this helps someone.

Loris Pilotto
0

I had the same question and didn't find what I was looking for.

If the tuner finishes at a trial number lower than your max_trials parameter, the most probable reason is that it has already tried every possible combination in the hyperparameter space you defined.

Example: I had 2 hyperparameters for the tuner to try; the first could take 8 values, the second 18. Multiplying these gives 144 combinations, and that is exactly the trial number at which the tuner stopped.
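
A quick sanity check in the same spirit; the two option counts below stand in for whatever hyperparameters you defined:

first_param_options = 8     # e.g. 8 possible layer sizes
second_param_options = 18   # e.g. 18 possible learning rates
print(first_param_options * second_param_options)   # 144 -- set max_trials to at most this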

Vojtech Stas