
I have a Keras model, shown below. How do I calculate the gradient of the MSE loss with respect to the last 3 output neurons, given an input of shape (1, 4, 100, 100, 3) and a target value Y?

from tensorflow.keras.layers import Input, Bidirectional, ConvLSTM2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# from the input shape (1, 4, 100, 100, 3)
img_rows, img_cols, channels = 100, 100, 3

def buildmodel(input_shape, action_size, learning_rate):
    state_input = Input(shape=(None, img_rows, img_cols, channels))
    x = Bidirectional(ConvLSTM2D(filters=16, kernel_size=(3, 3)))(state_input)
    x = MaxPooling2D((2, 2))(x)
    x = Flatten()(x)
    x = Dense(3)(x)  # the 3 output neurons

    model = Model(inputs=state_input, outputs=x)
    adam = Adam(lr=learning_rate)
    model.compile(loss='mse', optimizer=adam)
    return model
Saikat
  • Duplicate of this: https://stackoverflow.com/questions/39561560 – Paloha Dec 28 '21 at 19:18
  • Thanks for pointing this out. But gradients = k.gradients(outputTensor, listOfVariableTensors) in the linked answer gives the gradient with respect to all weights in the network. I want the gradient of the loss with respect to the output. So I changed it to gradients = k.gradients(loss, model.trainable_weights[-1]), with loss = K.mean(K.square(model.output - y_true)). But this is not working. What's wrong with my code? – Saikat Dec 29 '21 at 14:14
  • Ok, so you might be interested in reading the "Gradients" section of this guide: https://keras.io/getting_started/intro_to_keras_for_researchers/. I am on my phone, so unable to give you a running example, but the examples with gradient tape are IMO pretty self-explanatory. – Paloha Dec 29 '21 at 14:52
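A minimal sketch of the tf.GradientTape approach the last comment points to, assuming TensorFlow 2.x. It uses a small stand-in model with 3 output neurons so it runs quickly; the same pattern applies unchanged to the ConvLSTM2D model above (the input shape and layer here are illustrative assumptions, not the original network):

```python
import numpy as np
import tensorflow as tf

# Small stand-in model with 3 output neurons, in place of the ConvLSTM2D model.
inp = tf.keras.Input(shape=(4,))
out = tf.keras.layers.Dense(3)(inp)
model = tf.keras.Model(inputs=inp, outputs=out)

x = np.random.rand(1, 4).astype("float32")
y_true = np.array([[0.0, 1.0, 0.0]], dtype="float32")

with tf.GradientTape() as tape:
    y_pred = model(x)                                  # forward pass, recorded on the tape
    loss = tf.reduce_mean(tf.square(y_pred - y_true))  # MSE, matching loss='mse'

# Gradient of the loss w.r.t. the 3 output neurons:
# d(mean((p - y)^2))/dp = 2 * (p - y) / 3, so grad_wrt_output has shape (1, 3).
grad_wrt_output = tape.gradient(loss, y_pred)

# To get the gradient w.r.t. the last layer's weights instead, you would call
# tape.gradient(loss, model.trainable_weights[-1]) -- use GradientTape(persistent=True)
# if you need both gradients from one forward pass.
```

Note this is the gradient of the loss with respect to the outputs, which is what K.gradients(loss, ...) would compute in graph mode; in TF 2.x eager execution, GradientTape is the supported way to do it.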

0 Answers