
I have a loss function implemented for a Siamese network. In Keras, a custom loss function may only take two input arguments: (y_true, y_pred). But in my case I have y_pred1, y_pred2, y_true1 (class label), y_true2 (class label), and y_true3 (similarity label).

So my solution is to concatenate what I have, like this:

def my_loss(y_true, y_pred):
    # y_true packs the two class labels and the similarity label
    y_true1 = y_true[:, 0]
    y_true2 = y_true[:, 1]
    label = y_true[:, 2]

    y_pred1 = y_pred[:, 0]
    y_pred2 = y_pred[:, 1]

The second problem is that I have one parameter (alpha), which is a function of the current epoch number, and I should also pass it to the loss function.

In general, if you have to pass some additional argument, you can use a wrapper function, as in the solution suggested here.

But that will not help in my case, because my alpha should change depending on the current epoch number. It is basically a sigmoid function of the current epoch.

The only way I can track the epoch number is inside my own generator, because my dataset is built as TFRecords, so I am using my own generator to feed the data to the model.

Does anyone have an idea what I should do? How can I track the current epoch number and use it?

W. Sam

1 Answer


Important! Which of these is your case?

  • Case 1: A model with 3 outputs
  • Case 2: A model with one output that is a concatenation of three outputs

Examples Without alpha

Case 1

You need three independent loss functions; each function will see only its own y_true and y_pred.

def loss1(yTrue,yPred):
    ...
def loss2(yTrue,yPred):
    ...
def loss3(yTrue,yPred):
    ...

model.compile(loss=[loss1,loss2,loss3],...)
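
For instance, a minimal sketch of Case 1; the architecture, layer sizes, and loss choices here are assumptions for illustration only:

from keras.models import Model
from keras.layers import Input, Dense, concatenate

# hypothetical Siamese-style model: two class outputs, one similarity output
inp1 = Input(shape=(128,))
inp2 = Input(shape=(128,))
shared = Dense(64, activation='relu')  # shared branch

feat1 = shared(inp1)
feat2 = shared(inp2)

class1 = Dense(10, activation='softmax', name='class1')(feat1)
class2 = Dense(10, activation='softmax', name='class2')(feat2)
similarity = Dense(1, activation='sigmoid',
                   name='similarity')(concatenate([feat1, feat2]))

model = Model([inp1, inp2], [class1, class2, similarity])

# losses are matched to outputs by position
model.compile(optimizer='adam',
              loss=['categorical_crossentropy',
                    'categorical_crossentropy',
                    'binary_crossentropy'])

model.fit would then receive y as a list of three arrays, one per output.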

Case 2

In this case, you can do it the way you proposed:

def my_loss(y_true, y_pred):
    y_true1 = y_true[:, 0]
    y_true2 = y_true[:, 1]

    y_pred1 = y_pred[:, 0]
    y_pred2 = y_pred[:, 1]
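
To make this concrete, here is one way the combined loss could look. The column layout follows the question, but the per-term losses (binary crossentropy per branch plus a squared similarity term) are assumptions:

import keras.backend as K

def my_loss(y_true, y_pred):
    # assumed column layout: two class labels and a similarity label
    y_true1 = y_true[:, 0]
    y_true2 = y_true[:, 1]
    label = y_true[:, 2]

    y_pred1 = y_pred[:, 0]
    y_pred2 = y_pred[:, 1]

    # hypothetical combination of three terms
    loss1 = K.binary_crossentropy(y_true1, y_pred1)
    loss2 = K.binary_crossentropy(y_true2, y_pred2)
    loss3 = K.square(label - K.abs(y_pred1 - y_pred2))

    return K.mean(loss1 + loss2 + loss3)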

Using alpha

Alpha must be a "tensor" (a backend variable), not an ordinary Python variable:

import keras.backend as K

alpha = K.variable(someInitialNpArray, dtype=...)

The value of alpha must be "changed", not reassigned:

K.set_value(alpha, newValues)
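
For instance, a tiny round trip with the backend API (the values are arbitrary):

import keras.backend as K

alpha = K.variable(0.5, dtype='float32')  # create the variable once

K.set_value(alpha, 0.9)    # update it in place
print(K.get_value(alpha))  # -> 0.9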

Now, create a LambdaCallback for on_epoch_end in order to change the value of alpha:

from keras.callbacks import LambdaCallback

def changeAlpha(epoch, logs):
    # maybe use epoch + 1, because epoch starts at 0
    K.set_value(alpha, valuesBasedOn(epoch))

alphaChanger = LambdaCallback(on_epoch_end=changeAlpha)  # or on_epoch_begin
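
Since the question says alpha is a sigmoid of the current epoch, valuesBasedOn could be sketched like this; the shift and scale are assumptions:

import numpy as np

def valuesBasedOn(epoch):
    # hypothetical schedule: sigmoid of the epoch number,
    # centered around epoch 5 so alpha ramps up early in training
    return 1.0 / (1.0 + np.exp(-(epoch - 5.0)))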

Loss:

def loss(true, pred):
    # ... your loss computation ...

    # you can use alpha here as a tensor
Training:

model.fit(..... callbacks = [alphaChanger])
model.fit_generator(......, callbacks = [alphaChanger])
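
Putting the pieces together, a minimal end-to-end sketch; the loss term is a placeholder, valuesBasedOn is the hypothetical schedule above, and model / myGenerator are assumed to exist already:

import keras.backend as K
from keras.callbacks import LambdaCallback

alpha = K.variable(0.0, dtype='float32')  # updated by the callback each epoch

def changeAlpha(epoch, logs):
    K.set_value(alpha, valuesBasedOn(epoch + 1))  # epoch starts at 0

alphaChanger = LambdaCallback(on_epoch_begin=changeAlpha)

def loss(true, pred):
    # alpha is a backend variable, so its current value
    # enters the graph at every training step
    return alpha * K.mean(K.square(true - pred))  # placeholder loss term

model.compile(optimizer='adam', loss=loss)
model.fit_generator(myGenerator, steps_per_epoch=100,
                    epochs=20, callbacks=[alphaChanger])
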
Daniel Möller
  • Thank you for your reply. My model has two separate outputs, but the loss function depends on both, so I don't want to use two losses; I will stack the two outputs along dimension 0 and then use one loss function, following the second case you mentioned. Many thanks as well for your suggestion regarding the alpha parameter. I was so confused, being new to Keras, that I was about to go back to pure TensorFlow code because of this problem. I will try to implement it. – W. Sam Jun 21 '18 at 20:47
  • @Daniel Möller Another related question, if you may: if I want to get the feature values after each epoch in order to use them in t-SNE for visualization, do I need to create a LambdaCallback for this purpose? – W. Sam Jun 22 '18 at 01:41
  • I don't know what t-SNE is, but probably yes. You must call model.predict in order to get the results. – Daniel Möller Jun 22 '18 at 02:04
  • You seem very qualified for custom loss functions; could you look at my question here? It is a similar issue with some additional complications. I would appreciate it if you could take a look: https://stackoverflow.com/questions/52269603/multiple-losses-for-imbalanced-dataset-with-keras – W. Sam Sep 11 '18 at 17:31