
I have a kind of Euclidean loss function:

\sum_{i,j} \left( c_i \max\{0,\, y_{ji} - k_{ji}\} + p_i \max\{0,\, k_{ji} - y_{ji}\} \right)

where y_{ji} is the output of Caffe, k_{ji} is the real (target) output value, i is the index of the items, and j is the index of the samples.
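
For concreteness, here is a minimal NumPy sketch of this loss; the array shapes and names are my own assumptions, not part of the original setup:

```python
import numpy as np

def weighted_asymmetric_loss(y, k, c, p):
    """Loss above, assuming y and k are (n_samples, n_items) arrays
    and c, p are (n_items,) per-item penalty vectors."""
    over = np.maximum(0.0, y - k)    # penalized by c_i when y_{ji} > k_{ji}
    under = np.maximum(0.0, k - y)   # penalized by p_i when y_{ji} < k_{ji}
    return np.sum(c * over + p * under)
```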

The issue is how to supply the values of the parameters c_i and p_i to the loss layer.

When c_i = c_q for all i \neq q (and similarly for p_i), i.e. all items share the same two weights, I simply pass their values as parameters of the loss layer (I added two new parameters to caffe.proto). However, the problem is that I now have around 300 items, so it is not reasonable to pass them all as loss layer parameters. I then tried to provide their values through the loss layer itself, i.e. I tried to add another bottom to the loss layer, but that gave an error. I am stuck here!
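
For reference, one possible way to feed the ~300 coefficients in as an extra input blob from pycaffe; this is only a sketch, and the prototxt name, the blob name `"weights"`, and the 2 x 300 layout are hypothetical:

```python
import numpy as np
import caffe

# Hypothetical coefficients; replace with the real c_i / p_i values.
c_values = np.ones(300, dtype=np.float32)
p_values = np.ones(300, dtype=np.float32)

# 'train.prototxt' is a placeholder; it is assumed to declare an Input
# blob "weights" of shape 2 x 300 that the loss layer takes as a bottom.
net = caffe.Net('train.prototxt', caffe.TRAIN)
net.blobs['weights'].data[...] = np.stack([c_values, p_values])
```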

Please guide me on how I can solve this issue.

Thanks in advance, Afshin

  • look at the `"InfogainLoss"` layer, where you have an additional matrix of parameters. See [this thread](http://stackoverflow.com/q/27632440/1714410) for an example. – Shai Aug 25 '16 at 07:31
  • @Shai Thanks for the comment. I looked at the layer, but it can only be used with outputs in the range [0,1], which is not my case here; I have a continuous output. One approach could be scaling the output, but that way I would lose accuracy. – Afshin Oroojlooy Aug 25 '16 at 14:44
  • I was not expecting you to use `"InfogainLoss"`, but rather to see how getting the weights is implemented there: either as an additional `"bottom"`, or as a binaryproto file with the weights written in it. – Shai Aug 25 '16 at 14:46
  • @Shai I see. I found this [link](https://github.com/gustavla/caffe-weighted-samples/blob/master/src/caffe/layers/softmax_loss_layer.cpp); I think it can help. But compared to TensorFlow, it requires a lot of work to make a simple change. :| – Afshin Oroojlooy Aug 25 '16 at 15:28
  • you can always write your loss layer as a `"Python"` layer... – Shai Aug 25 '16 at 15:29
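
Following Shai's last suggestion, a minimal sketch of such a `"Python"` loss layer, with the weights passed as a third bottom, might look like the following. The bottom ordering and the (2, n_items) weight layout are assumptions, not tested code:

```python
import numpy as np
import caffe

class WeightedAsymmetricLossLayer(caffe.Layer):
    """Sketch: bottom[0] = predictions y, bottom[1] = targets k,
    bottom[2] = weights of shape (2, n_items): row 0 = c_i, row 1 = p_i."""

    def setup(self, bottom, top):
        if len(bottom) != 3:
            raise Exception("Need three bottoms: predictions, targets, weights.")

    def reshape(self, bottom, top):
        top[0].reshape(1)  # scalar loss

    def forward(self, bottom, top):
        y, k = bottom[0].data, bottom[1].data
        c, p = bottom[2].data[0], bottom[2].data[1]
        diff = y - k
        top[0].data[0] = np.sum(c * np.maximum(0.0, diff)
                                + p * np.maximum(0.0, -diff))

    def backward(self, top, propagate_down, bottom):
        if propagate_down[0]:
            y, k = bottom[0].data, bottom[1].data
            c, p = bottom[2].data[0], bottom[2].data[1]
            # d/dy of c*max(0, y-k) + p*max(0, k-y)
            grad = np.where(y > k, c, 0.0) - np.where(y < k, p, 0.0)
            bottom[0].diff[...] = grad * top[0].diff[0]  # scale by loss weight
```

The layer would then be referenced from the prototxt with `type: "Python"` and the module/class names given in `python_param`, which avoids touching caffe.proto at all.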

0 Answers