I'm having some trouble implementing a soft cross entropy loss in PyTorch.
I need a weighted soft cross entropy loss for my model, meaning the target is a vector of probabilities as well, not a one-hot vector.
I tried using `KLDivLoss` as suggested in a few forum threads, but it does not accept a per-class weight vector, so I cannot use it.
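To make it concrete, here is a rough sketch of what I think I need to compute (the function name and the per-class weighting convention are my own guesses, not from any library):

```python
import torch

def weighted_soft_cross_entropy(logits, target_probs, class_weights):
    """Sketch of a weighted cross entropy with soft (probability) targets.

    logits:        (batch, num_classes) raw model outputs
    target_probs:  (batch, num_classes) soft targets, rows summing to 1
    class_weights: (num_classes,) per-class weights
    """
    log_probs = torch.log_softmax(logits, dim=1)
    # -sum_c w_c * p_c * log q_c per sample, averaged over the batch
    return -(class_weights * target_probs * log_probs).sum(dim=1).mean()
```

Is writing it as a plain function like this the right approach, or do I need to subclass `nn.Module` or define a custom autograd `Function`?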
More generally, I'm confused about how to create a custom loss function in PyTorch and how autograd follows it. In particular, what happens if we apply some transformation after the model that isn't a standard loss, like mapping the model's output to some other vector and computing the loss on the mapped vector?
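My current (unverified) understanding is that autograd simply records every differentiable tensor operation, so such a mapping should be handled automatically. A minimal example of what I mean, with made-up names:

```python
import torch

out = torch.randn(4, 3, requires_grad=True)  # stand-in for a model's output
M = torch.randn(3, 5)                        # some fixed mapping matrix

mapped = out @ M            # map the model output to another vector
loss = (mapped ** 2).mean() # compute a loss on the mapped vector
loss.backward()             # gradients flow back through the mapping

assert out.grad is not None
```

Is this correct, and does it break down if the mapping is not differentiable (e.g. an argmax or a lookup)?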