
I need help calculating the derivatives of a model's output with respect to its inputs in Keras.

I want to add a regularization functional to the loss function. The regularizer contains the derivative of the classifier function, so I tried to take the derivative of the model output with respect to the input. The model is an MLP with one hidden layer, and the dataset is MNIST. When I compile the model and take the derivative, I get [None] as the result instead of the derivative tensor.

I have seen a similar post, but it didn't get an answer either: Taking derivative of Keras model wrt to inputs is returning all zeros

Here is my code. Please help me solve the problem.

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K

num_hiddenNodes = 1024
num_classes = 10

(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 28 * 28)
X_train = X_train.astype('float32')
X_train /= 255
y_train = keras.utils.to_categorical(y_train, num_classes)

model = Sequential()
model.add(Dense(num_hiddenNodes, activation='softplus', input_shape=(784,)))
model.add(Dense(num_classes, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
logits = model.output
# logits = model.layers[-1].output
print(logits)
X = K.identity(X_train)
# X = tf.placeholder(dtype=tf.float32, shape=(None, 784))
print(X)
print(K.gradients(logits, X))

Here is the output of the code. Both arguments are Tensors, but K.gradients returns [None].

Tensor("dense_2/Softmax:0", shape=(?, 10), dtype=float32)
Tensor("Identity:0", shape=(60000, 784), dtype=float32)
[None]

1 Answer


You are computing the gradients with respect to X_train, which is not an input variable of the computation graph. Instead you need to use the model's symbolic input tensor, so try something like:

grads = K.gradients(model.output, model.input)
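To actually evaluate these gradients on data, you can wrap them in a backend function. A minimal sketch, assuming the TF 1.x / graph-mode Keras setup used in the question (the batch size of 32 is arbitrary):

grads = K.gradients(model.output, model.input)      # list with one tensor of shape (?, 784)
grad_fn = K.function([model.input], grads)          # compiled backend function: inputs -> gradients
grad_values = grad_fn([X_train[:32]])[0]            # NumPy array of shape (32, 784)
print(grad_values.shape)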
  • Thanks for the answer. A question first, should K.gradient(model.input, model.output) be K.gradient(model.output, model.input) in your answer? Then, I have tried to use model.input as a parameter. Now it works as it returns []. – Romann Mar 16 '18 at 19:48
  • I wonder what the difference is between model.input, K.identity(X_train), and tf.placeholder(dtype=tf.float32, shape=(None, 784)). They are all tensors with the same shape: Tensor("dense_1_input:0", shape=(?, 784), dtype=float32), Tensor("Identity:0", shape=(60000, 784), dtype=float32), Tensor("Placeholder:0", shape=(?, 784), dtype=float32). But only the first one can be used to get the gradient. – Romann Mar 16 '18 at 19:49
  • @user7367951 Yes, the parameters were swapped. K.identity is not connected to the computation graph, so you can't actually use it for anything. – Dr. Snoopy Mar 16 '18 at 20:18
  • It's `K.gradients` not `K.gradient` – guillefix Dec 21 '18 at 19:57
  • `tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.` – quant Jan 04 '20 at 12:21
  • @quant It is very rude to just throw error messages at people; ask your own question if you have issues. – Dr. Snoopy Jan 04 '20 at 12:24
  • @MatiasValdenegro I didn't mean to be rude, I'm just pointing out that this is what happens if you try to run the code in your answer. – quant Jan 04 '20 at 12:29
  • @quant No, there is no information at all (details, versions, etc.), so your comment is quite useless. – Dr. Snoopy Jan 04 '20 at 12:31
  • tensorflow 2.0.0 – quant Jan 04 '20 at 12:33
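
For anyone hitting the eager-execution error quoted in the comments: under TensorFlow 2.x, K.gradients no longer works in eager mode and tf.GradientTape is the replacement. A minimal sketch, assuming the same model is built with tf.keras (the batch slice is arbitrary):

import tensorflow as tf

x = tf.convert_to_tensor(X_train[:32])    # small batch of inputs, shape (32, 784)
with tf.GradientTape() as tape:
    tape.watch(x)                         # x is not a Variable, so watch it explicitly
    y = model(x)                          # forward pass, shape (32, 10)
grads = tape.gradient(y, x)               # gradient of the summed outputs w.r.t. x, shape (32, 784)
print(grads.shape)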