There is a cross-entropy loss in one of the answers here (nolearn for multi-label classification):
import numpy as np
import theano.tensor as T

# custom loss: multi-label cross-entropy
def multilabel_objective(predictions, targets):
    epsilon = np.float32(1.0e-6)
    one = np.float32(1.0)
    # clip predictions away from 0 and 1 to avoid log(0)
    pred = T.clip(predictions, epsilon, one - epsilon)
    # binary cross-entropy per label, summed over the label axis
    return -T.sum(targets * T.log(pred) + (one - targets) * T.log(one - pred), axis=1)
Why is this specifically multi-label? It looks a lot like the log-loss for binary (single-label) classification, just summed over the labels. I found the same loss in the literature: http://arxiv.org/pdf/1312.5419v3.pdf.
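To make the comparison concrete, here is a small NumPy sketch of my own (not from the answer) that evaluates the same expression on a toy batch; as far as I can tell, it is exactly the binary log-loss computed independently for each label and then summed over axis=1:

import numpy as np

# toy batch: 2 samples, 3 independent labels each
targets = np.array([[1, 0, 1],
                    [0, 1, 0]], dtype=np.float32)
predictions = np.array([[0.9, 0.2, 0.7],
                        [0.1, 0.8, 0.3]], dtype=np.float32)

eps = np.float32(1e-6)
pred = np.clip(predictions, eps, 1 - eps)

# per-label binary log-loss, then summed over the label axis
per_label = -(targets * np.log(pred) + (1 - targets) * np.log(1 - pred))
loss = per_label.sum(axis=1)
print(loss)  # one scalar per sample

So the only difference from the single-label binary case seems to be the sum over labels, which treats each label as its own independent binary problem.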