
I am trying to implement a custom loss function that takes in "weight maps". A weight map marks an area of the image in which I want a higher weight applied (an example of the weight map was shown as an image).

I have tried to implement this by creating dummy variables as extra inputs during model creation. Since the weights are matched to individual samples, I cannot simply pass them in as a variable outside of the model.

When creating the model I create two inputs, the image inputs and the weights, all inside a separate function that builds the model:

    Inputs  = Input(input_size)   # image input
    Weights = Input(input_size)   # per-sample weight map input
    true    = Input(input_size)   # labels are fed in as an input as well
    # .... other model creation code ...
    conv10 = Conv2D(n_classes, 1, padding='same')(conv9)

    # Define the model
    model = tf.keras.Model(inputs=[Inputs, Weights, true], outputs=conv10)
    return model, Inputs, true, Weights, conv10

I then get the variables back by calling the UNet-building function:

    unet, inputs, true, weights, out = UNetCompiled(input_size=(128,128,1), n_filters=32, n_classes=2)

And I have defined the loss as below:

    def halo_binary_crossentropy(y_true, y_pred, weight):
        Y = tf.cast(y_true, tf.float32)
        W = tf.cast(weight, tf.float32)
        # Blur the labels into a "halo", clip, and normalize to [0, 1]
        gaus = tfa.image.gaussian_filter2d(Y, (30, 30), (20, 20), padding="CONSTANT", constant_values=0)
        fin = tf.clip_by_value(gaus, clip_value_min=tf.reduce_mean(gaus), clip_value_max=tf.reduce_max(gaus))
        weight_map = (fin - tf.reduce_min(fin)) / (tf.reduce_max(fin) - tf.reduce_min(fin))
        # Add the externally supplied weight map on top of the halo
        weight_map = weight_map + W
        a = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred) * weight_map[:, :, :, 0])
        b = tf.reduce_sum(weight_map) + tf.keras.backend.epsilon()
        return a / b

Then I compile the model, calling the loss with the "dummy" variables to initialize it, and try to run the model:

    unet.compile(optimizer=tf.keras.optimizers.Adam(),
                 loss=halo_binary_crossentropy(true, out, weights),
                 metrics=['accuracy'])
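
(Note: `compile()` normally expects `loss` to be a callable taking `(y_true, y_pred)`, whereas here I call the loss up front on the dummy tensors. The usual closure pattern for a loss with an extra argument would be the sketch below, untested, and `weights` is still a symbolic model input:)

    # Untested sketch: the closure gives compile() a (y_true, y_pred) callable,
    # but 'weights' is still the symbolic Input tensor from the model graph.
    def make_halo_loss(weight_tensor):
        def loss(y_true, y_pred):
            return halo_binary_crossentropy(y_true, y_pred, weight_tensor)
        return loss

    unet.compile(optimizer=tf.keras.optimizers.Adam(),
                 loss=make_halo_loss(weights),
                 metrics=['accuracy'])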


    checkpoint = ModelCheckpoint(model_filepath, monitor='loss',
                                 save_best_only=True, verbose=1, mode='min')
    callbacks_list = [checkpoint]
    print("data prepared, ready to train!")

    ############################################################################
    # Fit the model
    history = unet.fit(x=[x_train, w_train, y_train], y=None, batch_size=BATCH_SIZE,
                       epochs=EPOCHS, verbose=1, callbacks=callbacks_list,
                       validation_split=0.1, shuffle=True)

I get this error:

Traceback (most recent call last):

  File "E:\Synth_training_base_new\train_newSynData_newUnetWeights.py", line 199, in <module>
    loss = halo_binary_crossentropy(true, out, weights),

  File "E:\Synth_training_base_new\train_newSynData_newUnetWeights.py", line 188, in halo_binary_crossentropy
    gaus = tfa.image.gaussian_filter2d(Y,(30,30),(20,20),padding="CONSTANT", constant_values=0)

  File "C:\Users\samuel.chambers\Anaconda3\envs\New_GPU\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None

  File "C:\Users\samuel.chambers\Anaconda3\envs\New_GPU\lib\site-packages\keras\engine\keras_tensor.py", line 255, in __array__
    raise TypeError(

TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(None, 128, 128, 1), dtype=tf.float32, name=None), name='Placeholder:0', description="created by layer 'tf.cast_6'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Kerasinputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.

Does anyone know of a way to implement weight maps into a loss function, or is there something obviously wrong in what I'm doing? I don't know how else to pass in the weights while keeping them tied to the inputs and labels of the model. I was trying to adapt the solution from a previous post: https://stackoverflow.com/a/62402699/16816707
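
The last line of the error suggests wrapping the offending operation in a custom Keras layer's `call`. As far as I understand it, that would look roughly like the sketch below for my Gaussian-halo step (untested):

    # Untested sketch of the workaround the error message suggests: move the
    # non-dispatchable tfa op into a custom layer's call().
    import tensorflow as tf
    import tensorflow_addons as tfa

    class HaloWeightMap(tf.keras.layers.Layer):
        """Blurs the labels into a 'halo' and normalizes it to a [0, 1] weight map."""
        def call(self, y_true):
            y = tf.cast(y_true, tf.float32)
            gaus = tfa.image.gaussian_filter2d(y, (30, 30), (20, 20),
                                               padding="CONSTANT", constant_values=0)
            fin = tf.clip_by_value(gaus,
                                   clip_value_min=tf.reduce_mean(gaus),
                                   clip_value_max=tf.reduce_max(gaus))
            return (fin - tf.reduce_min(fin)) / (tf.reduce_max(fin) - tf.reduce_min(fin))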

---

**Edit:**

I might have found a solution, in case anyone looks at this. Instead of passing the weights in tied to the data (x_train), passing them in tied to the labels works better, since they then get packaged into the loss function automatically. This also means I will have to change the accuracy metric to get correct calculations.

    # Pack the weights together with the labels so they arrive in y_true
    y_test = [labels, weights]   # [len_data, 2, ...]
    x_test = data                # [len_data, 1, ...]

    def loss(y_true, y_pred):
        weights = y_true[:, 1]   # slice the weight maps back out
        labels  = y_true[:, 0]   # ... and the labels
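
A fuller (untested) sketch of this idea, stacking the labels and the weight maps along a new axis so the slicing above works, using the `x_train`/`w_train`/`y_train` arrays from the fit call:

    # Untested sketch: pack labels and weight maps into one y array, then unpack
    # them inside a standard (y_true, y_pred) loss.
    import numpy as np
    import tensorflow as tf

    y_train_packed = np.stack([y_train, w_train], axis=1)   # (N, 2, H, W, 1)

    def weighted_bce(y_true, y_pred):
        labels = y_true[:, 0]                                # (N, H, W, 1)
        w      = y_true[:, 1]                                # (N, H, W, 1)
        bce = tf.keras.losses.binary_crossentropy(labels, y_pred)   # (N, H, W)
        return tf.reduce_sum(bce * w[..., 0]) / (tf.reduce_sum(w) + tf.keras.backend.epsilon())

    # unet.compile(optimizer='adam', loss=weighted_bce)
    # unet.fit(x_train, y_train_packed, ...)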

I will update with a better and more fulfilled response if this works
