(An update to this question has been added.)
I am a graduate student at Ghent University, Belgium; my research is about emotion recognition with deep convolutional neural networks. I'm using the Caffe framework to implement the CNNs.
Recently I've run into a problem concerning class imbalance. I'm using 9216 training samples; approximately 5% are labeled positive (1) and the remaining 95% are labeled negative (0).
I'm using the SigmoidCrossEntropyLoss layer to calculate the loss. During training, the loss decreases and the accuracy becomes extremely high after only a few epochs. This is due to the imbalance: the network simply learns to always predict negative (0), which already yields ~95% accuracy on this data. (Precision and recall are both zero, which supports this claim.)
To solve this problem, I would like to scale each sample's contribution to the loss depending on the prediction-truth combination (punishing false negatives severely). My mentor/coach has also advised me to apply a scale factor when backpropagating through stochastic gradient descent (SGD): the factor would depend on the imbalance within the batch, so that a batch containing only negative samples would not update the weights at all.
So far I have only added one custom layer to Caffe, to report additional metrics such as precision and recall. My experience with the Caffe code base is limited, but I have a lot of experience writing C++ code.
Could anyone help me or point me in the right direction on how to adjust the SigmoidCrossEntropyLoss and Sigmoid layers to accommodate the following changes (see the sketch after the list below):
- adjust the contribution of a sample to the total loss depending on the prediction-truth combination (true positive, false positive, true negative, false negative).
- scale the weight update performed by stochastic gradient descent depending on the imbalance in the batch (negatives vs. positives).
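To make the two points above concrete, here is a standalone sketch of the underlying math. This is plain C++, not Caffe layer code; the weight table kWeight and the choice of the positive fraction as the batch-level scale factor are assumptions I made up for illustration, not anything from Caffe.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Weight per (truth, prediction) combination: rows = label (0/1),
// columns = thresholded prediction (0/1). False negatives (label 1,
// predicted 0) get the largest weight. The values are placeholders.
const double kWeight[2][2] = {
    {1.0, 2.0},    // label 0: TN, FP
    {10.0, 1.0}};  // label 1: FN, TP

int main() {
  // Toy batch: raw scores (logits) and binary labels.
  std::vector<double> score = {-2.1, -0.3, 1.7, -1.2};
  std::vector<int> label = {0, 0, 1, 0};

  // Batch-level scale factor: fraction of positives in the batch, so an
  // all-negative batch contributes (almost) nothing. One possible choice.
  int num_pos = 0;
  for (int t : label) num_pos += t;
  const double batch_scale =
      static_cast<double>(num_pos) / static_cast<double>(label.size());

  double total_loss = 0.0;
  for (std::size_t i = 0; i < score.size(); ++i) {
    const double p = sigmoid(score[i]);
    const int t = label[i];
    const int pred = p >= 0.5 ? 1 : 0;
    const double w = kWeight[t][pred];
    // Weighted sigmoid cross-entropy. The unweighted gradient w.r.t. the raw
    // score is (p - t); since the thresholded weight is piecewise constant,
    // the weighted gradient is w * (p - t) away from the threshold.
    const double loss = -w * (t * std::log(p) + (1 - t) * std::log(1.0 - p));
    const double grad = batch_scale * w * (p - t);
    total_loss += loss;
    std::printf("sample %zu: p=%.3f loss=%.3f grad=%.3f\n", i, p, loss, grad);
  }
  std::printf("total loss: %.3f (batch scale %.2f)\n", total_loss, batch_scale);
  return 0;
}
```

I imagine the same scaling would have to be applied inside the layer's backward pass, but I'm not sure where exactly to hook it in.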
Thanks in advance!
Update
I have incorporated the InfogainLossLayer as suggested by Shai. I've also added another custom layer that builds the infogain matrix H based on the imbalance in the current batch.
Currently, the matrix is configured as follows:
H(i, j) = 0          if i != j
H(i, j) = 1 - f(i)   if i == j

where f(i) is the frequency of class i in the current batch.
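For reference, here is a minimal standalone C++ sketch of how such an H could be built from the labels of the current batch. This is plain C++ showing only the computation, not the actual custom layer, and build_infogain_matrix is a hypothetical name.

```cpp
#include <cstdio>
#include <vector>

// Fill a num_classes x num_classes matrix (row-major) following
// H(i, j) = (i == j) ? 1 - f(i) : 0, with f(i) the batch frequency of class i.
std::vector<double> build_infogain_matrix(const std::vector<int>& labels,
                                          int num_classes) {
  std::vector<double> freq(num_classes, 0.0);
  for (int l : labels) freq[l] += 1.0;
  for (int i = 0; i < num_classes; ++i) freq[i] /= labels.size();

  std::vector<double> H(num_classes * num_classes, 0.0);
  for (int i = 0; i < num_classes; ++i) {
    H[i * num_classes + i] = 1.0 - freq[i];
  }
  return H;
}

int main() {
  // Toy batch with a 10:1 imbalance: 10 negatives, 1 positive.
  std::vector<int> labels = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1};
  std::vector<double> H = build_infogain_matrix(labels, 2);
  // Expected: H(0,0) ~ 0.09 (frequent class), H(1,1) ~ 0.91 (rare class).
  for (int i = 0; i < 2; ++i) {
    std::printf("H(%d,0)=%.3f  H(%d,1)=%.3f\n", i, H[i * 2], i, H[i * 2 + 1]);
  }
  return 0;
}
```

The rare class thus gets a diagonal entry close to 1 and the frequent class one close to 0, which is how the loss ends up weighting the positives more heavily.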
I'm planning on experimenting with different configurations for the matrix in the future.
I have tested this on a 10:1 imbalance. The results show that the network is now learning useful things (results after 30 epochs):
- Accuracy is ~70% (down from ~97%);
- Precision is ~20% (up from 0%);
- Recall is ~60% (up from 0%).
These numbers were reached at around 20 epochs and didn't change significantly after that.
!! The results stated above are merely a proof of concept; they were obtained by training a simple network on a 10:1 imbalanced dataset. !!