I would like to define a custom cost function

```python
def custom_objective(y_true, y_pred):
    ...
    return L
```

that will depend not only on `y_true` and `y_pred`, but also on some feature of the corresponding `x` that produced `y_pred`. The only way I can think of doing this is to "hide" the relevant features in `y_true`, so that `y_true = [usual_y_true, relevant_x_features]`, or something like that.
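To make the "hiding" idea concrete, here is a minimal sketch of such a loss, assuming the targets are packed column-wise (the widths `N_LABELS` and the weighting-by-feature behavior are illustrative assumptions, not part of the original question):

```python
import tensorflow as tf

N_LABELS = 1  # width of the "usual" y_true (assumption)

def custom_objective(y_true, y_pred):
    # y_true = [usual_y_true, relevant_x_features], packed column-wise.
    real_y = y_true[:, :N_LABELS]
    feats = y_true[:, N_LABELS:]
    # y_pred is padded to the same width, so only its first
    # N_LABELS columns are meaningful.
    real_pred = y_pred[:, :N_LABELS]
    # Illustrative use of the hidden features: weight the squared
    # error by the first feature column.
    sq_err = tf.square(real_y - real_pred)
    weight = feats[:, :1]
    return tf.reduce_mean(weight * sq_err)
```

The slicing is the only essential part; what the loss then does with `feats` is up to the problem.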
There are two main problems I am having with implementing this:
1) Changing the shape of `y_true` means I need to pad `y_pred` with some garbage so that their shapes are the same. I can do this by modifying the last layer of my model.
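For reference, the padding can be done with a `Lambda` layer appended after the usual output; the toy architecture and the constant `N_FEATS` below are assumptions for illustration:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

N_FEATS = 2  # must match the number of features packed into y_true (assumption)

def pad_with_garbage(t):
    # Append N_FEATS zero columns so y_pred.shape matches the widened y_true.
    zeros = tf.zeros_like(t[:, :1])
    return tf.concat([t] + [zeros] * N_FEATS, axis=1)

inp = keras.Input(shape=(8,))             # toy input shape, for illustration
h = layers.Dense(16, activation="relu")(inp)
out = layers.Dense(1)(h)                  # the "usual" prediction
padded = layers.Lambda(pad_with_garbage)(out)
model = keras.Model(inp, padded)
```

The zero columns are ignored by the custom loss, which only reads the first columns of `y_pred`.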
2) I used data augmentation like so:

```python
datagen = ImageDataGenerator(preprocessing_function=my_augmenter)
```

where `my_augmenter()` is the function that should also give me the relevant `x` features to use in `custom_objective()` above. However, training with

```python
model.fit_generator(datagen.flow(x_train, y_train, batch_size=1), ...)
```

doesn't seem to give me access to the features calculated with `my_augmenter`.
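One workaround along these lines is to wrap `datagen.flow` in a generator that recomputes the relevant features from each *augmented* batch and packs them next to the labels. This is a sketch under assumptions: `feature_extractor` is a hypothetical stand-in for whatever `my_augmenter` computes, and the labels are assumed to be 2-D row vectors:

```python
import numpy as np

def feature_extractor(x_batch):
    # Hypothetical stand-in for the feature computation done in
    # my_augmenter; here: mean intensity of each augmented sample.
    return x_batch.reshape(len(x_batch), -1).mean(axis=1, keepdims=True)

def flow_with_features(datagen, x, y, batch_size=1):
    # Wrap datagen.flow: the batches it yields are already augmented,
    # so the features can be recomputed here and hidden in y.
    for x_batch, y_batch in datagen.flow(x, y, batch_size=batch_size):
        feats = feature_extractor(x_batch)
        yield x_batch, np.concatenate([y_batch, feats], axis=1)
```

The wrapped generator can then be passed to `fit_generator` in place of `datagen.flow(...)`.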
I suppose I could hide the features in the augmented `x_train`, copy them out right away in my model setup, and then feed them directly into `y_true`, or something like that, but surely there must be a better way to do this?
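One reading of this idea, sketched under assumptions (flattened inputs of hypothetical width `IMG_DIM`, `N_FEATS` feature columns appended to each row): split the widened input with `Lambda` layers and route the feature columns into the model output, so the loss can read them from `y_pred` instead of `y_true`:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

IMG_DIM = 8   # width of the real input (assumption)
N_FEATS = 2   # feature columns appended to each x row (assumption)

inp = keras.Input(shape=(IMG_DIM + N_FEATS,))
# Split the real input from the features hidden in x_train.
x_part = layers.Lambda(lambda t: t[:, :IMG_DIM])(inp)
feat_part = layers.Lambda(lambda t: t[:, IMG_DIM:])(inp)
h = layers.Dense(16, activation="relu")(x_part)
pred = layers.Dense(1)(h)
# Pass the features through to the output so the loss can see them.
out = layers.Concatenate(axis=1)([pred, feat_part])
model = keras.Model(inp, out)
```

The custom loss would then slice `y_pred` rather than `y_true`, and no padding of the targets is needed.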