I wanted to add a custom loss for DeepLab v3 that works with soft saliency labels rather than one-hot-encoded labels. So instead of the DeepLab loss implementation that you see below:
```python
label = tf.to_int32(label > 0.2)
one_hot_labels = slim.one_hot_encoding(label, num_classes, on_value=1.0, off_value=0.0)
tf.losses.softmax_cross_entropy(one_hot_labels, logits)
```
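For reference, the stock loss above can be sketched in plain NumPy (illustrative names; a stand-in for the `slim`/`tf.losses` calls, assuming `label` is a flat saliency vector and `logits` is `(pixels, classes)`):

```python
import numpy as np

def one_hot_xent(label, logits, threshold=0.2):
    """NumPy sketch of DeepLab's thresholded one-hot cross-entropy."""
    # Binarize the saliency map, as tf.to_int32(label > 0.2) does.
    hard = (label > threshold).astype(np.int64)          # shape (pixels,)
    num_classes = logits.shape[-1]
    one_hot = np.eye(num_classes)[hard]                  # shape (pixels, classes)
    # Softmax cross-entropy against the one-hot targets,
    # computed via a numerically stable log-softmax.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return float(np.mean(-(one_hot * log_softmax).sum(axis=-1)))
```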
I used this implementation:
```python
softmax = tf.log(tf.nn.softmax(logits))
cross_entropy = -tf.reduce_sum(label * softmax, reduction_indices=[1])
tf.losses.add_loss(tf.reduce_mean(cross_entropy))
```
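A NumPy sketch of this soft-label cross-entropy (illustrative names, not DeepLab code). Two details worth noting: `tf.log(tf.nn.softmax(...))` is a numerically unstable way to compute a log-softmax, and reducing over axis 1 only hits the class dimension if the tensors have been flattened to 2-D `(pixels, classes)` first; for 4-D NHWC tensors, axis 1 is height:

```python
import numpy as np

def soft_label_xent(label, logits):
    """Soft-label cross-entropy sketch. `label` holds per-class target
    probabilities rather than one-hot indices; both inputs end in a
    num_classes axis, and the reduction runs over that LAST axis."""
    # Stable log-softmax instead of log(softmax(x)).
    z = logits - logits.max(axis=-1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    cross_entropy = -(label * log_softmax).sum(axis=-1)
    return float(np.mean(cross_entropy))
```

When the predicted distribution matches the soft targets exactly, this loss bottoms out at the entropy of the targets rather than at zero, which is expected behaviour for soft labels.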
I trained for ~1000 epochs on 5 images and got these results:
- Input simplified image with padding - https://i.stack.imgur.com/07GsL.png
- Ground truth labels - https://i.stack.imgur.com/ttEZi.png
- Custom loss result - https://i.stack.imgur.com/cNooX.png
- Cross-entropy using one hot encoding result - https://i.stack.imgur.com/LEhl3.png
I also tried several learning rates, but none of them changed the result of the custom loss.