
What I want to do seems like it should be very simple, but I'm finding it quite complex: I want to feed the input data into the loss function of my model. I have tried the three approaches listed here:

Custom loss function in Keras based on the input data

The only one that seems to "work" (i.e., runs) is this one:

from keras import backend as K

def customloss(x):
    def loss(y_true, y_pred):
        # use x here as you wish
        err = K.mean(K.square(y_pred - y_true), axis=-1)
        return err

    return loss

This is then passed to the model via

model.compile('sgd', customloss(x))

The way I'm handling x and y is that I generate them on the fly as numpy arrays and feed them into the model. I believe that because I compile my model with a snapshot of x, any later changes to x are not propagated to the loss function. I'm also not sure that, within a given batch, the x seen by the loss stays aligned with the corresponding y.

The alternative solution in the link above, overloading y to also contain x, seemed like a good idea. In my case it's a regression problem: x has shape (batch, 1200, 2) and y has shape (batch, 3). I tried making y of shape (batch, 1200, 3) and storing both x and the actual y in it, but I then got an error when I compile my model because the target shape no longer matches the output layer (it expects (batch, 3) but got (batch, 1200, 3)).

Does anyone have any other ideas for how to feed the input values into the loss function?
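For concreteness, here is a minimal sketch of the direction I'm currently considering: passing the model's symbolic input tensor into the closure instead of a numpy array, so that the loss always sees the x belonging to the current batch. The layer shapes match my sizes above, and the input-dependent scaling inside the loss is just a placeholder to show x being used (I gather this relies on the loss being built into the same graph, so it may not hold across all Keras/TF versions):

from keras.layers import Input, Dense, Flatten
from keras.models import Model
from keras import backend as K

def customloss(x):
    # x is the model's symbolic input tensor, so it is re-evaluated
    # for every batch and stays aligned with the matching y
    def loss(y_true, y_pred):
        err = K.mean(K.square(y_pred - y_true), axis=-1)
        # placeholder use of x: scale the error by the mean input magnitude
        return err * K.mean(K.abs(x), axis=[1, 2])
    return loss

inp = Input(shape=(1200, 2))    # x: (batch, 1200, 2)
out = Dense(3)(Flatten()(inp))  # y: (batch, 3)
model = Model(inputs=inp, outputs=out)
model.compile('sgd', loss=customloss(inp))

An alternative I've seen (e.g. in the Keras VAE example) is to feed the true targets in as a second Input, build the loss as a tensor with model.add_loss, and compile with loss=None, which sidesteps the y_true plumbing entirely. Again only a sketch, with the same placeholder use of the input:

from keras.layers import Input, Dense, Flatten
from keras.models import Model
from keras import backend as K

x_in = Input(shape=(1200, 2))
y_in = Input(shape=(3,))  # true targets fed as an extra input
y_pred = Dense(3)(Flatten()(x_in))

model = Model(inputs=[x_in, y_in], outputs=y_pred)
err = K.mean(K.square(y_pred - y_in), axis=-1)
# the loss tensor can reference x_in directly here
model.add_loss(K.mean(err * K.mean(K.abs(x_in), axis=[1, 2])))
model.compile('sgd', loss=None)
# training then takes both arrays as inputs and no separate y:
# model.fit([x_batch, y_batch], None, batch_size=32, epochs=10)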

user2551700

0 Answers