I am currently trying to extend the code suggested in this question: How to get layer weight while training?
Using the code below:
class CustomCallback(keras.callbacks.Callback):
    def on_train_batch_begin(self, batch, logs=None):
        # Print the weights of layer 2 before each batch is processed.
        print(self.model.layers[2].get_weights())
history = model.fit(train_x, train_y,
                    epochs=200,
                    validation_data=(test_x, test_y),
                    verbose=1,
                    callbacks=[CustomCallback()])
I can print the weights of a specific layer of the neural network while training (at the beginning of each batch's gradient-descent step).
My question is: can I give the loss function access to this information? In other words, I would like to take the layer weights obtained by the callback at each training step and feed them to a custom loss function, where they would define one of the objective terms to be optimized.
For instance, consider the following custom loss:
def Custom_Loss(y_true, y_pred):
    return f(y_true, y_pred)
I would like to modify the above defined loss to be as follows:
def Custom_Loss(y_true, y_pred):
    return f(y_true, y_pred) + g(layer_2_weight, y_pred)
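To make the goal concrete, here is a minimal self-contained sketch of what I imagine the end result looking like (the model, data, and the choices of f and g are placeholders I made up, not my real setup): a loss built as a closure over the layer, so it can reference that layer's kernel tensor when it is called.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def make_custom_loss(layer):
    # The returned loss is a closure over `layer`, so it can read the
    # layer's kernel tensor whenever the loss is evaluated.
    def custom_loss(y_true, y_pred):
        f = tf.reduce_mean(tf.square(y_true - y_pred))        # stand-in for f(y_true, y_pred)
        g = 0.01 * tf.reduce_mean(tf.square(layer.kernel))    # stand-in for g(layer_2_weight, y_pred)
        return f + g
    return custom_loss

# Tiny placeholder model just to make the sketch runnable.
model = keras.Sequential([
    keras.layers.Dense(4, activation="relu", input_shape=(3,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss=make_custom_loss(model.layers[2]))
```

Whether a closure like this actually sees the *current* weights at every step during `fit` is exactly the part I am unsure about.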
One rather hacky idea I came up with (though I have no idea how to implement it) is to define a "dummy" variable that is reassigned inside the callback and referenced in the loss function definition, but I am not sure whether this would work or whether it is the right approach to follow.
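Here is a runnable sketch of that dummy-variable idea as I picture it (again, the model, data, and the penalty term are placeholders): a non-trainable `tf.Variable` is created outside the loss, the callback copies the layer's current kernel into it before each batch, and the loss reads the variable.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Tiny placeholder model just to make the sketch runnable.
model = keras.Sequential([
    keras.layers.Dense(4, activation="relu", input_shape=(3,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1),
])

# "Dummy" variable mirroring layer 2's kernel; trainable=False so the
# optimizer never updates it.
layer_2_weight = tf.Variable(model.layers[2].get_weights()[0],
                             trainable=False, dtype=tf.float32)

class WeightSyncCallback(keras.callbacks.Callback):
    def on_train_batch_begin(self, batch, logs=None):
        # Copy the layer's current kernel into the dummy variable.
        layer_2_weight.assign(self.model.layers[2].get_weights()[0])

def custom_loss(y_true, y_pred):
    f = tf.reduce_mean(tf.square(y_true - y_pred))       # stand-in for f
    g = 0.01 * tf.reduce_mean(tf.square(layer_2_weight)) # stand-in for g
    return f + g

model.compile(optimizer="adam", loss=custom_loss)
```

The intended usage would be `model.fit(..., callbacks=[WeightSyncCallback()])`, but I do not know whether the variable's value is actually refreshed inside the compiled training step.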
If the task described above is achievable, could someone provide some example code or a link to a relevant Stack Overflow post? (I tried to find one but could not. If one exists, please let me know.)