
I'm solving a binary segmentation problem with Keras (TensorFlow backend). How can I give more weight to the center of each masked area?

I've tried a Dice coefficient with cv2.erode() applied to the masks first, but it doesn't work:

def dice_coef_eroded(y_true, y_pred):
    kernel = np.ones((3, 3), np.uint8)  # cv2.erode expects an array kernel, not a tuple
    # .eval() is where it fails: symbolic tensors can't be evaluated inside a loss
    y_true = cv2.erode(y_true.eval(), kernel, iterations=1)
    y_pred = cv2.erode(y_pred.eval(), kernel, iterations=1)
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + 1) / (K.sum(y_true_f) + K.sum(y_pred_f) + 1)

Keras 2.1.3, TensorFlow 1.4

Sasha Korekov
  • Can you clarify what you mean by adding more weight to the center area of the mask? I don't see how the Dice coefficient and erosion help with that, since you are just computing a similarity score there. – janu777 Feb 01 '18 at 05:37
  • I've tried to draw desired result: https://imgur.com/a/tJEFw – Sasha Korekov Feb 01 '18 at 06:16
  • https://stackoverflow.com/questions/42591191/keras-semantic-segmentation-weighted-loss-pixel-map – janu777 Feb 01 '18 at 06:33
  • This is a very similar question. I am not sure about the solution. – janu777 Feb 01 '18 at 06:34
  • It seems that the solution above is about class balancing, not about adding weights to specific areas/pixels. – Sasha Korekov Feb 01 '18 at 14:21

2 Answers


All right, the solution I found is the following:

1) In your iterator, create a method that builds the weight matrix (with shape = mask shape); each batch must yield [image, mask, weights]

2) Create a Lambda layer containing loss function

3) Create an Identity loss function

Example:

def weighted_binary_loss(X):
    import keras.backend as K
    from keras.layers import multiply  # Keras 2 replaced merge(mode='mul') with multiply()
    y_pred, weights, y_true = X
    loss = K.binary_crossentropy(y_true, y_pred)  # per-pixel loss; backend signature is (target, output)
    loss = multiply([loss, weights])              # scale each pixel's loss by its weight
    return loss

def identity_loss(y_true, y_pred):
    return y_pred

def get_unet_w_lambda_loss(input_shape=(1024, 1024, 3), mask_shape=(1024, 1024, 1)):
    images = Input(input_shape)
    mask_weights = Input(mask_shape)
    true_masks = Input(mask_shape)
    ...
    y_pred = Conv2D(1, (1, 1), activation='sigmoid')(up1)  # output of the original U-Net
    loss = Lambda(weighted_binary_loss, output_shape=mask_shape)([y_pred, mask_weights, true_masks])
    model = Model(inputs=[images, mask_weights, true_masks], outputs=loss)
    return model
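For step 1, one way to build the weight matrix (this sketch is mine, not part of the original answer, and assumes SciPy is available; `make_center_weights` and `w_max` are illustrative names) is a distance transform, so pixels deep inside a mask region get larger weights than pixels near the border:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def make_center_weights(mask, w_max=3.0):
    """Weight map that grows toward the center of each mask region.

    mask: binary array of shape (H, W) or (H, W, 1).
    Returns a float32 array of the same shape: weight 1 on the background
    and at region borders, rising linearly to w_max at region centers.
    """
    m = np.squeeze(mask).astype(bool)
    dist = distance_transform_edt(m)      # distance to the nearest background pixel
    if dist.max() > 0:
        dist = dist / dist.max()          # normalize to [0, 1]
    weights = 1.0 + (w_max - 1.0) * dist  # 1 at borders, w_max at centers
    return weights.reshape(mask.shape).astype(np.float32)
```

Your iterator would then yield this map alongside each image/mask pair, and `weighted_binary_loss` multiplies it into the per-pixel cross-entropy.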
Sasha Korekov

I'm implementing this solution, but I wonder what ground truth we should feed the network. That is, the model's output is now the loss itself, and we want that loss to be 0, so should we train the network as follows?

model = get_unet_w_lambda_loss()
model.fit([inputs, weights, masks], zero_images)
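Since identity_loss returns y_pred and ignores y_true entirely, the fit target only needs the right shape, so an array of zeros works (this is my reading of the answer above, not something its author stated). A small NumPy sketch of why the target values never matter:

```python
import numpy as np

def identity_loss(y_true, y_pred):
    # The model's output already IS the weighted loss map,
    # so we return it unchanged; y_true is never used.
    return y_pred

# The per-pixel loss map produced by the Lambda layer:
loss_map = np.array([[0.2, 0.8],
                     [0.1, 0.4]], dtype=np.float32)

# Fit targets only need a matching shape -- zeros are the usual choice:
dummy_targets = np.zeros_like(loss_map)
print(np.array_equal(identity_loss(dummy_targets, loss_map), loss_map))
```

In the actual training code you would then compile with `loss=identity_loss` and pass the zero array as the target, which reduces training to minimizing the mean of the Lambda layer's output.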
eLearner
  • You could actually use a simpler solution without Lambda layers: just create a custom loss like `dice_coef_weighted_one_class` from here: https://github.com/kohrah/DSBowl2018/blob/master/src/zoo_losses_K.py – Sasha Korekov Jun 16 '20 at 14:33
  • Creating custom loss layers that have an input different from `y_pred, y_true` didn't work for me when using tensorflow 2 – eLearner Jun 18 '20 at 13:28