6

I'm trying to do multi-class semantic segmentation with a U-Net architecture. Similar to the U-Net paper, I'd like to make a loss function that overweights borders (page 5).

As such I'd like to make a custom loss map for each image where the borders between objects are overweighted. I am using categorical cross-entropy, flattening the image before the loss function as here. I would be fine making the pixel loss mask, but I am wondering how, if it is possible, to multiply the loss by that pixel mask.
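
For concreteness, a minimal sketch of the kind of setup I mean (the sizes, `n_classes`, and the single conv standing in for the U-Net body are just placeholders):

    from keras.layers import Input, Conv2D, Reshape, Activation
    from keras.models import Model

    H, W, n_classes = 128, 128, 4   # placeholder sizes

    inp = Input((H, W, 3))
    x = Conv2D(n_classes, (1, 1))(inp)       # stand-in for the real U-Net body
    x = Reshape((H * W, n_classes))(x)       # flatten so the loss is applied per pixel
    out = Activation('softmax')(x)

    model = Model(inp, out)
    model.compile(optimizer='adam', loss='categorical_crossentropy')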

TSW
  • Could you provide more details about how you store these additional weights and the setup of your problem? – Marcin Możejko Mar 04 '17 at 17:44
  • I could store the weights in whatever way is necessary. But I was thinking of storing them in an array the same size as the output image, where each pixel in the array is the loss factor for that ground-truth pixel. That way, I could deal with class imbalance and the border issue as one weight per pixel. The setup is a standard semantic segmentation problem, similar to U-Net except there are multiple classes. – TSW Mar 04 '17 at 22:14
  • I had the same problem and solved it by adding the loss as a layer in the model: https://stackoverflow.com/questions/48555820/keras-binary-segmentation-add-weight-to-loss-function/48577360#48577360 – Sasha Korekov Feb 02 '18 at 07:06
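
For reference, a rough sketch of the "loss as a layer" idea from the last comment (names are illustrative, not taken from the linked answer; it assumes a Keras 2.x where `K.categorical_crossentropy` takes `(target, output)` and `Model.add_loss` accepts a symbolic tensor):

    import keras.backend as K
    from keras.layers import Input, Conv2D, Reshape, Activation
    from keras.models import Model

    H, W, n_classes = 128, 128, 4

    image_in  = Input((H, W, 3))
    target_in = Input((H * W, n_classes))   # flattened one-hot ground truth
    weight_in = Input((H * W,))             # flattened per-pixel loss weights

    x = Conv2D(n_classes, (1, 1))(image_in)                      # stand-in for the U-Net body
    probs = Activation('softmax')(Reshape((H * W, n_classes))(x))

    # per-pixel cross-entropy, scaled by the weight map, added as a graph loss
    pixel_ce = K.categorical_crossentropy(target_in, probs)      # shape (batch, H*W)
    model = Model([image_in, target_in, weight_in], probs)
    model.add_loss(K.mean(pixel_ce * weight_in))
    model.compile(optimizer='adam')                               # no outer loss needed

    # training then feeds targets and weights as extra inputs:
    # model.fit([images, one_hot_labels, weight_maps], epochs=...)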

3 Answers

0

If you know how to do this on a 2D map, you could always use multiple outputs and apply the custom pixel mask in addition to the cross-entropy loss. An example implementation of multiple losses for a U-shaped network can be found here: https://github.com/EdwardTyantov/ultrasound-nerve-segmentation
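
Not code from that repository, but a minimal sketch of the mechanism it uses: give the model several named outputs, attach a loss to each, and combine them with `loss_weights`. A border-weighted pixel loss would simply replace one of the entries in the `loss` dict (all names below are illustrative):

    from keras.layers import Input, Conv2D, Reshape, Activation, GlobalAveragePooling2D, Dense
    from keras.models import Model

    H, W, n_classes = 128, 128, 4

    inp  = Input((H, W, 3))
    feat = Conv2D(n_classes, (1, 1))(inp)                          # stand-in for the U-Net body
    seg  = Activation('softmax', name='seg')(Reshape((H * W, n_classes))(feat))
    aux  = Dense(1, activation='sigmoid', name='aux')(GlobalAveragePooling2D()(feat))

    model = Model(inp, [seg, aux])
    model.compile(optimizer='adam',
                  loss={'seg': 'categorical_crossentropy', 'aux': 'binary_crossentropy'},
                  loss_weights={'seg': 1.0, 'aux': 0.5})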

Thomas Pinetz
0

Here is an implementation of weight maps in TensorFlow: http://tf-unet.readthedocs.io/en/latest/_modules/tf_unet/unet.html. You should be able to adapt it for Keras in a custom loss function. The relevant code:

    def _get_cost(self, logits, cost_name, cost_kwargs):
        """
        Optional arguments are:
        class_weights: weights for the different classes in case of multi-class imbalance
        regularizer: power of the L2 regularizers added to the loss function
        """
        flat_logits = tf.reshape(logits, [-1, self.n_class])
        flat_labels = tf.reshape(self.y, [-1, self.n_class])
        if cost_name == "cross_entropy":
            class_weights = cost_kwargs.pop("class_weights", None)

            if class_weights is not None:
                class_weights = tf.constant(np.array(class_weights, dtype=np.float32))

                # per-pixel weight: the class weight picked out by the one-hot label
                weight_map = tf.multiply(flat_labels, class_weights)
                weight_map = tf.reduce_sum(weight_map, axis=1)

                # unweighted per-pixel cross-entropy, then scaled by the weight map
                loss_map = tf.nn.softmax_cross_entropy_with_logits(logits=flat_logits,
                                                                   labels=flat_labels)
                weighted_loss = tf.multiply(loss_map, weight_map)

                loss = tf.reduce_mean(weighted_loss)

            else:
                loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=flat_logits,
                                                                              labels=flat_labels))
        # ... (other cost options and the optional regularizer omitted)
        return loss
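
One possible Keras adaptation of the same idea (a sketch, not the tf_unet code): a Keras custom loss receives `y_true`/`y_pred` instead of labels and logits, so the weight map is built from the one-hot `y_true` and applied to the per-pixel cross-entropy. It assumes a Keras 2.x where `K.categorical_crossentropy` takes `(target, output)`:

    import numpy as np
    import keras.backend as K

    def make_class_weighted_cce(class_weights):
        """Return a loss that scales each pixel by the weight of its true class."""
        class_weights = K.constant(np.asarray(class_weights, dtype=np.float32))

        def weighted_cce(y_true, y_pred):
            # y_true: one-hot labels, y_pred: softmax probs, both (batch, n_pixels, n_class)
            weight_map = K.sum(y_true * class_weights, axis=-1)      # (batch, n_pixels)
            loss_map = K.categorical_crossentropy(y_true, y_pred)    # (batch, n_pixels)
            return K.mean(loss_map * weight_map)

        return weighted_cce

    # model.compile(optimizer='adam', loss=make_class_weighted_cce([1.0, 2.0, 5.0]))
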
0

Assuming multi-class prediction: in your data generator, concatenate your weight map as an extra channel in the third dimension of the target for each element in a batch. Then, in your loss function, split the weight map back off for each batch element, compute the per-pixel log loss on the prediction, and multiply it by the weight map (just use *).
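
A rough sketch of that recipe (illustrative names; it assumes the flattened targets from the question and a recent Keras 2.x, which skips the target-shape check for user-defined losses, so the extra weight channel in `y_true` passes through):

    import keras.backend as K

    n_classes = 4   # placeholder

    # generator side: append the flattened weight map as one extra channel of y_true
    # y = np.concatenate([one_hot_labels, weight_map[..., None]], axis=-1)   # (H*W, n_classes + 1)

    def weighted_pixel_ce(y_true, y_pred):
        labels  = y_true[..., :n_classes]                        # (batch, H*W, n_classes)
        weights = y_true[..., n_classes]                         # (batch, H*W)
        pixel_ce = K.categorical_crossentropy(labels, y_pred)    # per-pixel log loss
        return K.mean(pixel_ce * weights, axis=-1)

    # model.compile(optimizer='adam', loss=weighted_pixel_ce)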

Eagle