I've modified the Caffe MNIST example to classify images into 3 classes. One thing I noticed is that if I set the number of outputs of the final layer (num_output) to 3, my test accuracy drops horribly, down to the low 40% range. However, if I add one and use 4 outputs, the accuracy is in the 95% range.
I then added an extra class of images to my dataset (so 4 classes) and saw the same thing: if the number of outputs matched the number of classes, the results were terrible; if it was the number of classes plus one, it worked really well. Here is the relevant part of my network definition:
inner_product_param {
  num_output: 3
  weight_filler {
    type: "xavier"
  }
  bias_filler {
    type: "constant"
  }
}
Does anyone know why this is? I've also noticed that when I use the trained model with the C++ example code on an image from my test set, it complains that I've told it there are 4 classes but only supplied labels for 3 in my labels file. If I invent a fourth label and add it to the file, I can get the program to run, but then it just returns one of the classes with a probability of 1.0 no matter what image I give it.
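For reference, here is roughly how I reproduce the classification step from Python rather than the C++ tool (a minimal sketch based on the stock MNIST example; the file names, the 'prob' output blob, and the 1/256 input scaling are assumptions on my part):

import numpy as np
import caffe

# Placeholder file names: substitute your own deploy net and snapshot.
net = caffe.Net('deploy.prototxt', 'lenet_iter_10000.caffemodel', caffe.TEST)

# Load a grayscale test image and scale it the way the training data is
# scaled in the MNIST example (scale: 0.00390625, i.e. pixel / 256).
img = caffe.io.load_image('test_image.png', color=False)  # (28, 28, 1) in [0, 1]
net.blobs['data'].reshape(1, 1, 28, 28)
net.blobs['data'].data[...] = img.transpose(2, 0, 1) * 255 * 0.00390625

out = net.forward()
probs = out['prob'][0]  # one softmax probability per num_output
print(probs, '-> predicted class', int(np.argmax(probs)))

The probabilities printed here should be the same softmax vector the C++ example sorts to produce its top prediction, and it's this vector that comes out as 1.0 for a single class no matter which image I feed in.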