I am writing my own loss function (to be used with eager execution in Keras), and I would like to add to it a term similar to an l1 regularisation penalty.
This is the loss function I am using now:
    def loss(model, x, y, x_dev, y_dev, variables):
        y_ = model(x)
        y_dev_ = model(x_dev)
        y_temp = 1.5
        return loss_mae(y_true=y, y_pred=y_) + y_temp * K.mean(tf.convert_to_tensor(variables))
with

    import tensorflow as tf
    import keras.backend as K

    def loss_mae(y_true, y_pred):
        return K.mean(K.abs(y_pred - y_true))
My idea is to add a constant (y_temp) to my loss function and multiply it by the trainable variables, to get something similar to an l1 regularisation term.
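In pseudo-code, the term I am aiming for is roughly the following (just a sketch of the intent; l1_penalty is a name I made up, not something already in my code):

    import tensorflow as tf

    def l1_penalty(variables, weight=1.5):
        # sum |w| over every trainable variable, then scale by a constant,
        # so it can be added on top of the MAE data loss
        return weight * tf.add_n([tf.reduce_sum(tf.abs(v)) for v in variables])

i.e. something like loss_mae(y_true=y, y_pred=y_) + l1_penalty(model.trainable_variables).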
I tried passing model.trainable_variables to the loss() function, but that does not work and I get:

    TypeError: can't multiply sequence by non-int of type 'numpy.float32'
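From what I can tell, the error itself is just what happens when a plain Python sequence is multiplied by a float, e.g.:

    >>> 1.5 * [1, 2, 3]
    TypeError: can't multiply sequence by non-int of type 'float'

which suggests model.trainable_variables is reaching the multiplication as an ordinary Python list of variables rather than as a single tensor.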
Does anyone have any suggestions?