
I saw the question "What's the triplet loss back propagation gradient formula?". According to it, the gradient with respect to the anchor is n - p, with respect to the positive is p - a, and with respect to the negative is a - n.
But lines 80 to 92 of tripletLossLayer are different from this: there, the anchor's gradient is p - n and the positive's is p - a. Which one is correct?
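
For reference, my understanding of the first formula, assuming the standard squared-distance triplet loss with margin $\alpha$ (the factor of 2 may be absorbed into a loss weight), is:

$$L = \max\left(0,\ \|a-p\|^2 - \|a-n\|^2 + \alpha\right)$$

When the hinge is active,

$$\frac{\partial L}{\partial a} = 2(a-p) - 2(a-n) = 2(n-p),\qquad \frac{\partial L}{\partial p} = -2(a-p) = 2(p-a),\qquad \frac{\partial L}{\partial n} = 2(a-n).$$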


1 Answer


Lines 80-92 in triplet_loss_layer.cpp are part of the Forward_cpu function, i.e. the actual loss computation, NOT the gradient computation.

The gradient is computed in Backward_cpu, where you can see that each bottom blob is assigned its diff according to the derivation presented here.
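
For illustration, here is a minimal sketch of what such a backward pass computes for a single triplet of D-dimensional embeddings, assuming the squared-distance triplet loss above. This is not the actual Caffe layer code; the function name and signature are made up for this answer:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: gradients of L = max(0, ||a-p||^2 - ||a-n||^2 + margin)
// with respect to the anchor (a), positive (p), and negative (n) embeddings.
void triplet_backward(const std::vector<float>& a,
                      const std::vector<float>& p,
                      const std::vector<float>& n,
                      float margin,
                      std::vector<float>& a_diff,
                      std::vector<float>& p_diff,
                      std::vector<float>& n_diff) {
  const std::size_t D = a.size();
  a_diff.assign(D, 0.f);
  p_diff.assign(D, 0.f);
  n_diff.assign(D, 0.f);

  // Forward quantities: the squared distances, needed to test the hinge.
  float d_ap = 0.f, d_an = 0.f;
  for (std::size_t i = 0; i < D; ++i) {
    d_ap += (a[i] - p[i]) * (a[i] - p[i]);
    d_an += (a[i] - n[i]) * (a[i] - n[i]);
  }

  // If the hinge is inactive, the loss is zero and so are all gradients.
  if (d_ap - d_an + margin <= 0.f) return;

  for (std::size_t i = 0; i < D; ++i) {
    a_diff[i] = 2.f * (n[i] - p[i]);  // dL/da = 2(a-p) - 2(a-n) = 2(n-p)
    p_diff[i] = 2.f * (p[i] - a[i]);  // dL/dp = -2(a-p)         = 2(p-a)
    n_diff[i] = 2.f * (a[i] - n[i]);  // dL/dn =  2(a-n)
  }
}
```

If you are unsure which implementation is right, a numerical gradient check (perturb each input coordinate slightly and compare the finite difference of the loss against the computed diff) will settle it quickly.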

Shai
Thanks for your help. I am training the triplet loss layer for face verification. Papers using GoogLeNet or VGGNet report that performance is boosted by this layer, but mine is not. I would appreciate your help. – guochan zhang Mar 21 '16 at 08:08