I have seen that one can define a custom loss layer, for example EuclideanLoss, in caffe like this:
import caffe
import numpy as np


class EuclideanLossLayer(caffe.Layer):
    """
    Compute the Euclidean Loss in the same manner as the C++
    EuclideanLossLayer to demonstrate the class interface for
    developing layers in Python.
    """

    def setup(self, bottom, top):
        # check input pair
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # check input dimensions match
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        # difference is shape of inputs
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        # loss output is scalar
        top[0].reshape(1)

    def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            if i == 0:
                sign = 1
            else:
                sign = -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num
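To make sure I understand the math before modifying it, I replicated the layer's forward/backward in plain numpy and checked the analytic gradient against finite differences (this is just my own sanity-check harness, not caffe code):

    import numpy as np

    def euclidean_forward(a, b):
        # same math as the layer's forward()
        diff = a - b
        loss = np.sum(diff ** 2) / a.shape[0] / 2.
        return loss, diff

    def euclidean_backward(diff, num):
        # analytic gradients w.r.t. the two bottoms
        return diff / num, -diff / num

    np.random.seed(0)
    a = np.random.randn(4, 3)
    b = np.random.randn(4, 3)
    loss, diff = euclidean_forward(a, b)
    grad_a, grad_b = euclidean_backward(diff, a.shape[0])

    # finite-difference check on one element of a
    eps = 1e-6
    a_pert = a.copy()
    a_pert[0, 0] += eps
    numeric = (euclidean_forward(a_pert, b)[0] - loss) / eps
    print(numeric, grad_a[0, 0])  # these agree to ~1e-6 for me

So far so good, the analytic gradient matches the numerical one.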
However, I have a few questions regarding that code:
If I want to customise this layer and change the computation of the loss in this line:
top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.
Let's say to:
channelAxis = 1  # channel axis, assuming caffe's N x C x H x W blob layout
self.diff[...] = np.sum(bottom[0].data, axis=channelAxis) - np.sum(bottom[1].data, axis=channelAxis)
top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.
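Putting it together, I think the whole modified layer would look something like this (my own sketch, and the class name is made up; note that after summing out the channel axis self.diff no longer has the same shape as the bottoms, so reshape has to allocate it accordingly):

    import caffe
    import numpy as np


    class ChannelSumLossLayer(caffe.Layer):
        """Sketch of a loss on the per-channel sums of the two bottoms."""

        def setup(self, bottom, top):
            if len(bottom) != 2:
                raise Exception("Need two inputs to compute distance.")

        def reshape(self, bottom, top):
            if bottom[0].count != bottom[1].count:
                raise Exception("Inputs must have the same dimension.")
            # after summing over axis 1 the difference has shape (N, H, W)
            shape = list(bottom[0].data.shape)
            del shape[1]
            self.diff = np.zeros(shape, dtype=np.float32)
            top[0].reshape(1)

        def forward(self, bottom, top):
            channelAxis = 1  # channel axis in caffe's N x C x H x W layout
            self.diff[...] = (np.sum(bottom[0].data, axis=channelAxis)
                              - np.sum(bottom[1].data, axis=channelAxis))
            top[0].data[...] = np.sum(self.diff ** 2) / bottom[0].num / 2.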
How do I have to change the backward function? For EuclideanLoss it is:
bottom[i].diff[...] = sign * self.diff / bottom[i].num
How would it have to look for the loss I described above?
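My best guess, since every channel contributes linearly to the per-location sum, is that the same (N, H, W) gradient just gets broadcast back across the channel axis, something like the following, but I am not sure it is correct:

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            sign = 1 if i == 0 else -1
            # broadcast the (N, H, W) diff back over the channel axis (my guess)
            bottom[i].diff[...] = sign * self.diff[:, np.newaxis, :, :] / bottom[i].num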
Also, what is the sign variable for?