I just want to define a custom loss function and test it.
As an example I used the Euclidean distance:
from keras import backend as K

def euc_dist_keras(y_true, y_pred):
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True))
Since the net has to output a list of pairs (x, y), I want to test this outside the NN.
So I used:
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
With just:
edk = euc_dist_keras(y_true, y_pred)
I obtained the error: TypeError: unsupported operand type(s) for -: 'list' and 'list'
So I converted the lists to NumPy arrays:

import numpy as np

y_true_array = np.array(y_true)
y_pred_array = np.array(y_pred)
edk = euc_dist_keras(y_true_array, y_pred_array)
But I obtained:
Tensor("Sqrt:0", shape=(2, 1), dtype=float64)
instead of the expected value, 1.
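(For reference, this is the plain NumPy check I compare against; for this data it gives 1.0:)

dists = np.sqrt(np.sum(np.square(y_true_array - y_pred_array), axis=-1))
print(dists)         # [1. 1.]  per-sample Euclidean distances
print(dists.mean())  # 1.0      mean over the two samples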
How do I obtain the desired value? And will the same euc_dist_keras, used in:
model.compile(loss=euc_dist_keras, optimizer=opt)
work in exactly the same way as when I test it here?
Thanks!
Edit: I added:

with tf.Session() as sess:
    print(edk.eval())

and obtained: [[1.] [1.]]
I expected: 1.
Am I making a mistake in the def? Or is the mean over all samples only taken when the loss is used in model.compile?
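For example, if I take the mean myself inside the loss (just a sketch of what I have in mind; euc_dist_keras_mean is a name I made up), I think I would get the scalar I expect, but I don't know whether Keras already averages the per-sample losses on its own:

def euc_dist_keras_mean(y_true, y_pred):
    # same distance as above, but averaged over the batch into a single scalar
    return K.mean(K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1)))

with tf.Session() as sess:
    # I would expect this to print 1.0 for the arrays above
    print(euc_dist_keras_mean(y_true_array, y_pred_array).eval())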