
I have implemented a custom metric based on SIM, and when I test the code it works. I have implemented it with both Keras tensors and NumPy arrays, and both give the same results. However, when I fit the model, the values reported during training are a lot higher than the values I get when I load the weights produced by the training and apply the same function.

My function is:

from keras import backend as K

def SIM(y_true, y_pred):
    # Normalise both maps so each sums to 1 (epsilon guards against
    # division by zero). Note that K.sum with no axis reduces over the
    # whole tensor, i.e. over the entire batch, not per sample.
    n_y_true = y_true / (K.sum(y_true) + K.epsilon())
    n_y_pred = y_pred / (K.sum(y_pred) + K.epsilon())

    # K.sum already yields a scalar here, so the outer K.mean is a no-op.
    return K.mean(K.sum(K.minimum(n_y_true, n_y_pred)))
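Because the sums reduce over the whole tensor, all samples in a batch are normalised jointly. A hedged NumPy sketch of the same computation, next to a hypothetical per-sample variant that normalises each sample over its own elements before averaging (the axis handling is an assumption about the input layout, not the asker's code):

```python
import numpy as np

def sim_batch(y_true, y_pred, eps=1e-7):
    # Mirrors the Keras metric above: the sums reduce over the
    # whole batch, so all samples are normalised jointly.
    n_t = y_true / (y_true.sum() + eps)
    n_p = y_pred / (y_pred.sum() + eps)
    return np.minimum(n_t, n_p).sum()

def sim_per_sample(y_true, y_pred, eps=1e-7):
    # Hypothetical per-sample variant: normalise each sample over its
    # own elements (all axes except the batch axis), score each sample,
    # then average -- the way SIM is usually meant per image.
    axes = tuple(range(1, y_true.ndim))
    n_t = y_true / (y_true.sum(axis=axes, keepdims=True) + eps)
    n_p = y_pred / (y_pred.sum(axis=axes, keepdims=True) + eps)
    return np.minimum(n_t, n_p).sum(axis=axes).mean()
```

For a batch of one sample the two agree; once samples with different totals share a batch, they diverge.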

When I compile the Keras model I add this function to the metrics, and during training it reports, for example, SIM: 0.7092. When I load the weights and try it, the SIM score is around 0.3. The correct weights are loaded (when restarting training with these weights the same values pop up). Does anybody know if I am doing anything wrong?

Why are the metrics reported during training so much higher than the values I get from running the function over a batch?
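Two effects could contribute here, as a sketch (the numbers below are hypothetical toy values, not the asker's data): Keras displays a running average of per-batch metric values during training, and because this metric normalises over the whole batch, its value depends on which samples share a batch:

```python
import numpy as np

def sim(y_true, y_pred, eps=1e-7):
    # Same computation as the Keras metric, on flat NumPy arrays.
    n_t = y_true / (y_true.sum() + eps)
    n_p = y_pred / (y_pred.sum() + eps)
    return np.minimum(n_t, n_p).sum()

# Two toy "images" with very different totals.
a_true, a_pred = np.array([1.0, 0.0]), np.array([1.0, 0.0])    # perfect match
b_true, b_pred = np.array([0.0, 10.0]), np.array([10.0, 0.0])  # no overlap

per_image = np.mean([sim(a_true, a_pred), sim(b_true, b_pred)])
batched = sim(np.concatenate([a_true, b_true]),
              np.concatenate([a_pred, b_pred]))

print(per_image)  # ~0.5
print(batched)    # ~0.09 -- joint normalisation changes the score
```

So the score computed over a single large batch after loading the weights need not match the average of the per-batch scores seen during training.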

azteks
  • Are there any layers that only work while training the model (Batchnorm / Dropout)? Do you take batch size into account? Your predict function might do stuff differently. You should check the actual output of your prediction function and your training function on the same input. – Thomas Pinetz Jan 09 '18 at 12:24
  • There are no layers that work differently during training and testing. I tested the function by comparing two images and by giving it a batch of ground truths and inputs; both seem to work correctly. How do I get the output of my training function outside of training? evaluate and predict both work in test mode, right? – azteks Jan 09 '18 at 15:16

0 Answers