
I have a custom loss f = x + y with constraints: while optimising f, x should stay within the range (0.10, 0.2) and y within the range (0.6, 0.1). Here y is the mean squared difference between the actual and predicted labels, and x represents the different types of jobs. The model is not trained on x; however, it needs to be optimised so that the predictions contain different types of jobs.

I came across Scipy.optimize: how to restrict argument values, which shows how scipy.optimize can be used with bounds on a function's parameters. However, my main problem is that I have a custom loss function total_loss(y_pred, y_true), and it already works as a Keras loss with "SGD" as the optimizer. Now, to incorporate the bounds on the parameters, I would like to use scipy.optimize.minimize with Keras. Any direction on how to use scipy.optimize with model.compile in Keras?

Yasmin
  • Please be more specific: what are `x` and `y`? Are they coefficients? How are they incorporated into the loss function? As for "I would like to use scipy.optimize.minimize with Keras": no, you can't do that. You need to use the built-in optimizers of Keras or define a new optimizer based on (Keras/TF) tensor operations. – today Aug 05 '19 at 13:07
  • I have added more details to the question. – Yasmin Aug 05 '19 at 13:17

1 Answer


You can use a custom training loop. The training loop can compute the gradients of your Keras loss function, and you can then choose how to update the weights under a given constraint; see the sketch below.
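
A minimal sketch of that idea, assuming TensorFlow 2.x; the model, the data shapes, and the (-1.0, 1.0) bounds are placeholders, not part of the original answer. After each SGD step, the weights are projected back into a box constraint:

```python
import tensorflow as tf

# Hypothetical model; replace with your own architecture.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

def total_loss(y_true, y_pred):
    # Stand-in for the custom loss from the question.
    return tf.reduce_mean(tf.square(y_true - y_pred))

@tf.function
def train_step(x_batch, y_batch):
    with tf.GradientTape() as tape:
        y_pred = model(x_batch, training=True)
        loss = total_loss(y_batch, y_pred)
    # Collect the gradients of the Keras loss...
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # ...then enforce a box constraint by projecting each weight
    # back into range (placeholder bounds).
    for v in model.trainable_variables:
        v.assign(tf.clip_by_value(v, -1.0, 1.0))
    return loss
```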

I've implemented something similar; this code calls SciPy minimize to optimize a model's variables.
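
In outline it looks something like the following sketch (not the original code): the model weights are flattened into one vector so scipy.optimize.minimize can drive them, and L-BFGS-B is used because it accepts simple box bounds. The model, the toy data, and the (-1.0, 1.0) bounds are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from scipy.optimize import minimize

# Hypothetical model; replace with your own.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

def get_weights_flat():
    # Flatten all trainable variables into one 1-D vector for SciPy.
    return np.concatenate([v.numpy().ravel() for v in model.trainable_variables])

def set_weights_flat(flat):
    # Write a flat vector back into the model's variables.
    i = 0
    for v in model.trainable_variables:
        n = int(np.prod(v.shape))
        v.assign(flat[i:i + n].reshape(tuple(v.shape)).astype(np.float32))
        i += n

def loss_and_grad(flat, x_train, y_train):
    set_weights_flat(flat)
    with tf.GradientTape() as tape:
        y_pred = model(x_train, training=True)
        loss = tf.reduce_mean(tf.square(y_train - y_pred))
    grads = tape.gradient(loss, model.trainable_variables)
    grad_flat = np.concatenate([g.numpy().ravel() for g in grads])
    # L-BFGS-B expects float64 objective and gradient.
    return float(loss.numpy()), grad_flat.astype(np.float64)

# Toy data, purely for illustration.
x_train = np.random.rand(32, 4).astype(np.float32)
y_train = np.random.rand(32, 1).astype(np.float32)

x0 = get_weights_flat().astype(np.float64)
bounds = [(-1.0, 1.0)] * x0.size  # box constraint on every weight
res = minimize(loss_and_grad, x0, args=(x_train, y_train),
               method="L-BFGS-B", jac=True, bounds=bounds)
set_weights_flat(res.x)
```

Note that `jac=True` tells SciPy the objective returns a (loss, gradient) pair, so TensorFlow's gradients are reused instead of numerical differentiation.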

Pedro Marques