I understand that binary cross-entropy is the same as categorical cross-entropy in the case of two classes.
Further, it is clear to me what softmax is.
Therefore, I see that categorical cross-entropy only penalizes the one component (probability) that should be 1.
But why can't or shouldn't I use binary cross-entropy on a one-hot vector?
Normal case for single-label, mutually exclusive multi-class classification:
################
pred = [0.1 0.3 0.2 0.4]
label (one hot) = [0 1 0 0]
costfunction: categorical crossentropy
= sum(label * -log(pred)) // only the component with label 1 contributes
= 0.523
Why not this instead?
################
pred = [0.1 0.3 0.2 0.4]
label (one hot) = [0 1 0 0]
costfunction: binary crossentropy
= sum(- label * log(pred) - (1 - label) * log(1 - pred))
= -log(0.3) - log(1-0.1) - log(1-0.2) - log(1-0.4)
= 0.887
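For reference, here is a minimal NumPy sketch that reproduces both numbers above. (The values 0.523 and 0.887 only come out with base-10 logarithms; most frameworks use the natural log, which gives 1.204 and 2.043 for the same comparison.)
################
import numpy as np

pred = np.array([0.1, 0.3, 0.2, 0.4])
label = np.array([0, 1, 0, 0])

# Categorical cross-entropy: only the component where label = 1 contributes.
cce = np.sum(label * -np.log10(pred))                                      # -> 0.523

# Binary cross-entropy applied element-wise to the one-hot vector:
# the zero-labelled components are penalized through the (1 - label) term.
bce = np.sum(-label * np.log10(pred) - (1 - label) * np.log10(1 - pred))   # -> 0.887

print(cce, bce)
################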
I see that in binary cross-entropy, zero is itself a target class, corresponding to the following one-hot encoding:
target class zero: 0 -> [1 0]
target class one:  1 -> [0 1]
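To make the two-class equivalence from the first sentence concrete, here is a small sketch with assumed example numbers (p = 0.3, natural log), comparing a single sigmoid output against the equivalent two-class one-hot setup:
################
import numpy as np

p = 0.3                      # assumed sigmoid output: probability of class one
target = 0                   # true class is zero

# Binary cross-entropy on the single sigmoid output:
bce = -(target * np.log(p) + (1 - target) * np.log(1 - p))   # = -log(0.7)

# Categorical cross-entropy on the equivalent two-class one-hot setup:
pred = np.array([1 - p, p])  # [0.7, 0.3]
label = np.array([1, 0])     # one-hot encoding of target class zero
cce = np.sum(label * -np.log(pred))                          # = -log(0.7)

print(bce, cce)              # both ~0.357
################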
In summary: why do we only compute the negative log-likelihood for the target (one-hot) class? Why don't we also penalize the other should-be-zero / not-that-class components?
If one applied binary cross-entropy to the one-hot vector, the probabilities assigned to the expected-zero labels would be penalized too.