
I would like to define a loss function like the following:

def custom_loss_function(y_true, y_pred):
    # compute the loss from y_true, y_pred and self.list_of_values
    ...

where self.list_of_values is modified outside this function at every iteration, and will therefore have different values each time custom_loss_function is "called". I know from this post that the loss function is called only once, after which "a session iteratively evaluates the loss".

My question is whether it is possible to use global/external variables (with dynamic values) inside a loss function that is then passed to compile like this:

model.compile(loss=custom_loss_function, optimizer=Adam(lr=LEARNING_RATE), metrics=['accuracy'])
  • I found a workaround for this problem in [this post](https://stackoverflow.com/questions/50124158/keras-loss-function-with-additional-dynamic-parameter) – CSR95 May 30 '20 at 09:44
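The workaround linked in that comment boils down to wrapping the loss in a closure over a mutable object, so the same function object reads fresh values on each evaluation. A minimal plain-numpy sketch of the idea (hypothetical names; in real Keras code the captured object must be a backend variable, e.g. one created with K.variable and updated with K.set_value, because a plain Python float would get baked into the graph at compile time):

```python
import numpy as np

def make_loss(weight_var):
    # weight_var is captured by reference; mutating it outside changes
    # what the loss computes on the next call
    def custom_loss(y_true, y_pred):
        return weight_var[0] * np.mean((y_true - y_pred) ** 2)
    return custom_loss

weights = np.array([1.0])
loss_fn = make_loss(weights)

a, b = np.array([1.0, 2.0]), np.array([1.5, 2.5])
print(loss_fn(a, b))   # 0.25
weights[0] = 2.0       # external update is picked up automatically
print(loss_fn(a, b))   # 0.5
```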

1 Answer

Posting the solution from the comments section here as an answer, for the benefit of the community.

The variable list_of_values can be treated as an additional input to the model:

list_of_values = Input(shape=(1,), name='list_of_values')

and the custom loss function can be defined as shown below:

def sample_loss( y_true, y_pred, list_of_values ) :
    return list_of_values * categorical_crossentropy( y_true, y_pred ) 
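Concretely, this scales each sample's crossentropy by the corresponding entry of list_of_values. What that computes can be sketched in plain numpy with hypothetical values:

```python
import numpy as np

def categorical_crossentropy_np(y_true, y_pred, eps=1e-7):
    # per-sample crossentropy: -sum(y_true * log(y_pred)) over classes
    return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=-1)

y_true = np.array([[0.0, 1.0, 0.0]])   # one-hot label
y_pred = np.array([[0.1, 0.8, 0.1]])   # softmax output
w = np.array([2.0])                    # stands in for list_of_values
loss = w * categorical_crossentropy_np(y_true, y_pred)
# loss[0] == 2 * -log(0.8), roughly 0.446
```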

The same variable can then be passed as an input to the model:

model = Model( inputs=[x, y_true, list_of_values], outputs=y_pred, name='train_only' )

Complete code for an example is shown below:

from keras.layers import Input, Dense, Conv2D, MaxPool2D, Flatten
from keras.models import Model
from keras.losses import categorical_crossentropy

def sample_loss( y_true, y_pred, list_of_values ) :
    return list_of_values * categorical_crossentropy( y_true, y_pred ) 

x = Input(shape=(32,32,3), name='image_in')
y_true = Input( shape=(10,), name='y_true' )
list_of_values = Input(shape=(1,), name='list_of_values')
f = Conv2D(16,(3,3),padding='same')(x)
f = MaxPool2D((2,2),padding='same')(f)
f = Conv2D(32,(3,3),padding='same')(f)
f = MaxPool2D((2,2),padding='same')(f)
f = Conv2D(64,(3,3),padding='same')(f)
f = MaxPool2D((2,2),padding='same')(f)
f = Flatten()(f)
y_pred = Dense(10, activation='softmax', name='y_pred' )(f)
model = Model( inputs=[x, y_true, list_of_values], outputs=y_pred, name='train_only' )
model.add_loss( sample_loss( y_true, y_pred, list_of_values ) )
model.compile( loss=None, optimizer='sgd' )
model.summary()
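Because list_of_values is now an ordinary model input, its values can simply be recomputed before every training step, with no recompilation needed. A hedged sketch of such a loop (the train_on_batch call is commented out since it requires the compiled model from above; shapes and weighting scheme are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
for step in range(3):
    x_batch = rng.random((8, 32, 32, 3)).astype('float32')
    y_batch = np.eye(10)[rng.integers(0, 10, 8)].astype('float32')
    # recompute the dynamic per-sample values on every iteration
    w_batch = np.full((8, 1), 1.0 / (step + 1), dtype='float32')
    # model.train_on_batch([x_batch, y_batch, w_batch], None)
```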

For more information, please refer to this Stack Overflow answer.

Hope this helps. Happy Learning!