I am attempting object segmentation using a custom loss function as defined below:
from keras import backend as K

def chamfer_loss_value(y_true, y_pred):
    # flatten each sample in the batch to a 1-D vector
    y_true_f = K.batch_flatten(y_true)
    y_pred_f = K.batch_flatten(y_pred)
    # get chamfer distance sum
    # error here
    y_pred_mask_f = K.cast(K.greater_equal(y_pred_f, 0.5), dtype='float32')
    finalChamferDistanceSum = K.sum(y_pred_mask_f * y_true_f, axis=1, keepdims=True)
    return K.mean(finalChamferDistanceSum)

def chamfer_loss(y_true, y_pred):
    return chamfer_loss_value(y_true, y_pred)
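For completeness, the loss is wired in at compile time like this (model stands in for my U-net, whose definition I have omitted; the optimizer choice is incidental):

model.compile(optimizer='adam', loss=chamfer_loss)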
y_pred_f is the output of my U-net. y_true_f is the result of a Euclidean distance transform on the ground-truth label mask x, as shown below:
distTrans = ndimage.distance_transform_edt(1 - x)
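To make concrete what y_true contains, here is a toy example (the 3x3 mask is made up purely for illustration):

import numpy as np
from scipy import ndimage

# toy ground-truth mask: one foreground pixel in the center
x = np.array([[0, 0, 0],
              [0, 1, 0],
              [0, 0, 0]], dtype=np.float32)

# each pixel gets its Euclidean distance to the nearest
# foreground pixel: 0 on the object, growing away from it
distTrans = ndimage.distance_transform_edt(1 - x)
# [[1.414  1.     1.414]
#  [1.     0.     1.   ]
#  [1.414  1.     1.414]]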
To compute the Chamfer distance, you multiply the predicted image (ideally, a mask of 1s and 0s) with the ground-truth distance transform and simply sum over all pixels. To do this, I needed to get a mask y_pred_mask_f by thresholding y_pred_f, then multiply it with y_true_f and sum over all pixels.
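In plain NumPy, outside the Keras graph, the computation I am after looks like this (continuing the toy example above; values are illustrative only):

import numpy as np

# thresholded prediction with two foreground pixels
y_pred_mask = np.array([[0, 1, 0],
                        [0, 1, 0],
                        [0, 0, 0]], dtype=np.float32)

# ground-truth distance transform from the toy example above
distTrans = np.array([[1.414, 1.0, 1.414],
                      [1.0,   0.0, 1.0],
                      [1.414, 1.0, 1.414]], dtype=np.float32)

# each predicted foreground pixel is penalized by its distance
# to the nearest true foreground pixel: 1.0 + 0.0 = 1.0
chamfer = np.sum(y_pred_mask * distTrans)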
y_pred_f provides a continuous range of values in [0, 1], and I get the error None type not supported at the evaluation of y_pred_mask_f. I know the loss function has to be differentiable, and greater_equal and cast are not. But is there a way to circumvent this in Keras? Perhaps using some workaround in TensorFlow?