
Let X be the features and Y be the response. For simplicity, let the dimensions of X and Y be 1000 x 12 and 1000 x 1, meaning there are 1000 observations, 12 features, and one response. Each observation is associated with a weight. I use 800 observations for training and 200 for testing. I want to fit a relation between X and Y, but with the loss function adjusted by the weights. Mathematically, the loss function is sum_i w_i (Y_i - est_Y_i)^2, where w_i is the weight of observation i, Y_i is the observed response of observation i, and est_Y_i is the estimated response of observation i.
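For concreteness, here is a tiny NumPy illustration of this loss on three made-up observations (the numbers are hypothetical):

import numpy as np

w = np.array([0.5, 1.0, 2.0])        # weights w_i
y = np.array([1.0, 2.0, 3.0])        # observed responses Y_i
y_hat = np.array([1.5, 1.5, 2.0])    # estimated responses est_Y_i

loss = np.sum(w * (y - y_hat) ** 2)  # 0.5*0.25 + 1.0*0.25 + 2.0*1.0 = 2.375
print(loss)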

I set epochs = 300 and use 64 observations per batch to update the parameters. Below is my code:

import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

def custom_loss(weights):
    # `weights` is the full array of training weights, captured by the
    # closure, so every batch sees all of them rather than its own slice
    def loss(y_true, y_pred):
        delta = y_pred - y_true
        return K.mean(weights * K.square(delta))
    return loss

def MLP(predictors, response, weights):
    tf.random.set_seed(1)
    input_dim = predictors.shape[1]
    model = Sequential()
    model.add(Dense(10, input_shape=(input_dim,)))
    model.add(Dense(10, activation='selu', kernel_regularizer=regularizers.l2(0.2)))
    model.add(Dense(10, activation='selu', kernel_regularizer=regularizers.l2(0.2)))
    model.add(Dense(10, activation='selu', kernel_regularizer=regularizers.l2(0.2)))
    model.add(Dense(10, activation='selu', kernel_regularizer=regularizers.l2(0.2)))
    model.add(Dense(1, activation='linear'))
    callbacks = [EarlyStopping(monitor='val_loss', patience=10)]
    opt = Adam(learning_rate=0.001)

    model.compile(loss=custom_loss(weights), optimizer=opt)
    model.summary()

    model.fit(predictors, response, epochs=300, batch_size=64,
              validation_split=0.2, shuffle=False, callbacks=callbacks)
    return model
model = MLP(features, response, weights)

I obtain an error message: InvalidArgumentError: Incompatible shapes: [800,1] vs. [64,1]. I guess the problem is that I pass all 800 training weights to custom_loss, while each batch contains only 64 observations, and I cannot see how to get the weights of just the observations used in the current batch. Any suggestions? I appreciate any answers to my problem.

will_cheuk
  • https://stackoverflow.com/a/62402699/10375049 – Marco Cerliani Dec 21 '21 at 08:33
  • I am not sure if the given post can solve my problem. Here I build an MLP model, but the post does not. Besides, the loss function in the post takes three inputs; as far as I know, the loss function accepts only two inputs when the model is a neural network. – will_cheuk Dec 21 '21 at 09:13
  • If you want to build a custom loss with sample weights, you have to pass the weights as an input so that they get split into batches... The network structure doesn't matter; the answer is very general. Pay attention: the key point is that you should use model.add_loss – Marco Cerliani Dec 21 '21 at 09:19
  • I understand your point. The point that confuses me a lot is the add_loss(true, out, weights) in the post. I guess add_loss() should be placed before model.compile(). The true is the response, but what is the out? The out should come from model.fit, which is placed afterwards. I am not sure if the answer in the post is helpful. – will_cheuk Dec 21 '21 at 09:25
  • it's add_loss(my_loss(true, out, weights)) which returns the loss value to be minimized during fitting – Marco Cerliani Dec 21 '21 at 09:28
  • Yes, you are right, I am sorry for my typo. But the same question remains: in my case I should amend my current custom_loss to custom_loss(true, out, weights) (I skip the contents) and add the line model.add_loss(custom_loss(response, out, weights)). The out should be the estimate produced during training, and I still do not know how to amend this part. Your out seems to be a global variable, but in my situation out is not; it should be obtained during each training step. I am sorry for any inconvenience caused, as I am very new to Python and training. – will_cheuk Dec 21 '21 at 09:40
  • Frankly, your post gives me insight, but I am not sure how I can make use of it and extend it to my problem. I wonder if I could have full code for the problem. I am really sorry about it. – will_cheuk Dec 21 '21 at 09:52 (a sketch of the add_loss pattern follows this thread)
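For reference, below is a minimal sketch of the add_loss pattern that the comments (and the linked answer) describe, assuming TF 2.x / tf.keras. The data here is a synthetic stand-in for the real 800 training observations and their weights, and the architecture is shortened, so treat it as an illustration rather than a drop-in replacement for the model above. The idea is that Y and the weights become extra model inputs, so Keras slices them into the same 64-row batches as X.

import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# synthetic stand-ins for the real data (800 x 12 features, responses, weights)
rng = np.random.default_rng(1)
X_train = rng.normal(size=(800, 12)).astype('float32')
y_train = rng.normal(size=(800, 1)).astype('float32')
w_train = rng.uniform(0.5, 2.0, size=(800, 1)).astype('float32')
X_test = rng.normal(size=(200, 12)).astype('float32')

def weighted_loss(y_true, y_pred, w):
    # weighted squared error per batch: mean_i w_i * (Y_i - est_Y_i)^2
    return K.mean(w * K.square(y_true - y_pred))

x_in = Input(shape=(12,), name='features')   # X
y_in = Input(shape=(1,), name='response')    # Y, fed as an input rather than a target
w_in = Input(shape=(1,), name='weights')     # per-observation weights

hidden = Dense(10, activation='selu')(x_in)
out = Dense(1, activation='linear')(hidden)

# The loss is added as a tensor, so each batch automatically uses the
# 64 weights belonging to the 64 observations in that batch.
train_model = Model([x_in, y_in, w_in], out)
train_model.add_loss(weighted_loss(y_in, out, w_in))
train_model.compile(optimizer='adam')        # no loss= argument; add_loss supplies it

train_model.fit([X_train, y_train, w_train], None, epochs=300, batch_size=64)

# For prediction, reuse the trained layers without the extra inputs.
predict_model = Model(x_in, out)
est_y = predict_model.predict(X_test)

Depending on the TF version, model.fit may warn about the missing target when trained this way. Note also that the built-in sample_weight argument of model.fit is another standard way to weight each observation's contribution to the loss.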

0 Answers