Let X be the features and Y be the response. For simplicity, let the dimensions of X and Y be 1000x12 and 1000x1, respectively, which means there are 1000 observations and 12 features with one response. Each observation i is associated with a weight w_i. We use 800 observations for training and 200 for testing. I want to fit a relation between X and Y, but the loss function is adjusted by the weights. Mathematically, the loss function is

sum_i w_i * (Y_i - est_Y_i)^2,

where w_i is the weight of observation i, Y_i is the observed response of observation i, and est_Y_i is the estimated response of observation i.
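For concreteness, here is the weighted loss written out in plain NumPy (the arrays and their values are just illustrative):

```python
import numpy as np

# Illustrative data: 5 observations.
w = np.array([1.0, 2.0, 0.5, 1.5, 1.0])       # per-observation weights w_i
y = np.array([3.0, -1.0, 2.0, 0.0, 4.0])      # observed responses Y_i
y_hat = np.array([2.5, -0.5, 2.0, 1.0, 3.0])  # estimated responses est_Y_i

# Weighted sum of squared errors: sum_i w_i * (Y_i - est_Y_i)^2
loss = np.sum(w * (y - y_hat) ** 2)
print(loss)  # 3.25
```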
We set epochs = 300, and each parameter update should use a batch of 64 observations. Below is my code:
    import tensorflow as tf
    from tensorflow.keras import regularizers
    from tensorflow.keras import backend as K
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.callbacks import EarlyStopping
    from tensorflow.keras.optimizers import Adam

    def custom_loss(weights):
        # Weighted mean squared error; the weights are captured in a closure.
        def loss(y_true, y_pred):
            delta = y_pred - y_true
            return K.mean(weights * K.square(delta))
        return loss

    def MLP(predictors, response, weights):
        tf.random.set_seed(1)
        input_dim = predictors.shape[1]
        model = Sequential()
        model.add(Dense(10, input_shape=(input_dim,)))
        model.add(Dense(10, activation='selu', kernel_regularizer=regularizers.l2(0.2)))
        model.add(Dense(10, activation='selu', kernel_regularizer=regularizers.l2(0.2)))
        model.add(Dense(10, activation='selu', kernel_regularizer=regularizers.l2(0.2)))
        model.add(Dense(10, activation='selu', kernel_regularizer=regularizers.l2(0.2)))
        model.add(Dense(1, activation='linear'))
        callbacks = [EarlyStopping(monitor='val_loss', patience=10)]
        opt = Adam(lr=0.001)
        model.compile(loss=custom_loss(weights), optimizer=opt)
        model.summary()
        model.fit(predictors, response, epochs=300, batch_size=64,
                  validation_split=0.2, shuffle=False, callbacks=callbacks)
        return model

    model = MLP(features, response, weights)
I get an error message: InvalidArgumentError: Incompatible shapes: [800,1] vs. [64,1]. I guess the problem is that I pass all 800 weights to custom_loss, while each batch contains only 64 observations, and inside the loss function I cannot tell which observations (and hence which weights) belong to the current batch. Any suggestions? I appreciate any answers.
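To illustrate the suspected mismatch outside of Keras, multiplying an (800, 1) weight array with a (64, 1) batch residual fails to broadcast (NumPy raises ValueError where TensorFlow raises InvalidArgumentError; the shapes are the ones from the error message):

```python
import numpy as np

weights = np.ones((800, 1))  # all training weights passed into custom_loss
delta = np.ones((64, 1))     # y_pred - y_true for a single batch of 64

try:
    _ = weights * np.square(delta)  # (800,1) vs (64,1): incompatible shapes
except ValueError as e:
    print(e)  # "operands could not be broadcast together ..."
```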