Context
I would like to implement a custom loss function. Given an input and a predicted output, there is a real-life loss that can be calculated from the predicted output and some known real-life facts that belong to that input. I would prefer to use this real-life loss value as the loss function instead of any distance measure between the predicted output and the expected output.
This real-life loss is between -10.0 and 50.0 for every predicted output, where higher is better; in other words, this value is the optimization goal of the learning.
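To make the setup concrete, here is a minimal sketch of how I imagine wiring this into Keras. The function real_life_score is just a toy placeholder standing in for my real calculation, and the tiny model only exists so the snippet runs:

```python
import tensorflow as tf
from tensorflow import keras

# Toy placeholder for the real-life score in [-10.0, 50.0], higher is better.
# The real version would use the known facts belonging to each input,
# which I would pack into y_true.
def real_life_score(y_true, y_pred):
    return 50.0 - 60.0 * tf.reduce_mean(tf.abs(y_true - y_pred), axis=-1)

def custom_loss(y_true, y_pred):
    score = real_life_score(y_true, y_pred)
    # Keras minimizes the loss, so the score (higher = better) has to be
    # turned into a "lower = better" value; that is what the Question
    # below asks about.
    return -score

inputs = keras.Input(shape=(4,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss=custom_loss)
```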
Question
What would Keras expect (or make optimal use of) as the loss function output? Should the loss function's output be normalized, say to [0.0, 1.0]? Or is it enough to just multiply [-10.0, 50.0] by -1 -> [-50.0, 10.0] and add 50.0 -> [0.0, 60.0]?
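For clarity, these are the two candidate transformations I have in mind, written out as plain tensor operations (the concrete constants are just the bounds of my score range):

```python
import tensorflow as tf

# Candidate A: rescale the score from [-10.0, 50.0] into a loss in [0.0, 1.0],
# where 0.0 corresponds to the best possible score (50.0).
def loss_normalized(score):
    return (50.0 - score) / 60.0

# Candidate B: only flip the sign and shift:
# multiply by -1 -> [-50.0, 10.0], then add 50.0 -> [0.0, 60.0].
def loss_shifted(score):
    return 50.0 - score

score = tf.constant([-10.0, 0.0, 50.0])
print(loss_normalized(score).numpy())  # approximately [1.0, 0.833, 0.0]
print(loss_shifted(score).numpy())     # [60.0, 50.0, 0.0]
```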
Note
I am a complete beginner in NNs, so if I am missing something fundamental here, please just point me in the right direction in as few words as possible.