I am trying to implement a cross-entropy loss between two images for a fully convolutional network. Both my training targets and input images are in the range 0-1. For now I am trying to implement this for only one class of images: to illustrate, say I have different orange pictures, but only orange pictures. I've built my model and implemented a cross-entropy loss function.
def loss_func_entropy(logits, y):
    logits = tf.reshape(logits, [BATCH_SIZE * 480 * 640, 1])
    y = tf.reshape(y, [BATCH_SIZE * 480 * 640, 1])
    print(logits.get_shape(), y.get_shape())
    return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y, dim=0))
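To check what dim=0 actually does here, I reproduced the softmax in plain NumPy (the three logit values are just hypothetical stand-ins for the BATCH_SIZE*480*640 pixels): with dim=0 the softmax is taken across all pixels of the whole batch at once, so every pixel competes with every other pixel instead of being scored independently.

```python
import numpy as np

def softmax(x, axis):
    # standard stable softmax: shift by the max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Stand-in for the reshaped [BATCH_SIZE*480*640, 1] logits tensor.
logits = np.array([[2.0], [0.5], [-1.0]])

# axis=0 corresponds to dim=0 in the TF call: one single probability
# distribution over ALL pixels, not a per-pixel probability.
p = softmax(logits, axis=0)
```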
Obviously I am not doing this right, because my loss function keeps increasing. One thing to note is that logits and y are both 2D; I reshape each into a single column vector and then try to take the cross entropy.
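For comparison, here is what I believe the per-pixel loss should compute, written out in NumPy using the numerically stable formula documented for tf.nn.sigmoid_cross_entropy_with_logits (this is only a sketch to check the math, not my actual model code; the function name is mine):

```python
import numpy as np

def pixelwise_bce(logits, targets):
    """Mean per-pixel sigmoid cross entropy.

    Uses the stable form max(x, 0) - x*z + log(1 + exp(-|x|)),
    the same formula tf.nn.sigmoid_cross_entropy_with_logits documents.
    Targets may be soft values in [0, 1], matching my 0-1 images.
    """
    x = np.asarray(logits, dtype=np.float64)
    z = np.asarray(targets, dtype=np.float64)
    return np.mean(np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x))))
```

With logits of 0 and targets of 0.5 this gives log 2 ≈ 0.693, the maximum-uncertainty value, and each pixel is scored independently instead of being normalized against the rest of the batch.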