I am trying to do pixel-wise classification with Caffe, so I need to provide a ground truth image the same size as the input image. There are several ways of doing this, and I decided to set up my input as a 4-channel LMDB (according to the 2nd point of this answer). This requires me to add a Slice layer after my input, which is also outlined in the same answer.
I keep getting Unknown blob input data_lmdb to layer 0 as an error message (data_lmdb is supposed to be my very bottom input layer). I found that an unknown blob error (be it top or bottom) is mostly caused by forgetting to define something in one of the TRAIN / TEST phases while defining it in the other (e.g. this question, or this one). But I am using a combination of train.prototxt, inference.prototxt and solver.prototxt files that I have previously used, just replacing the input layers from HDF5 to LMDB (for a bit of practice), so everything should be defined.
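For reference, this is the failure mode those questions describe: a minimal sketch (with made-up paths, not my actual files) of a data layer that is only defined for the TRAIN phase, so in the TEST phase its top blob never exists and any layer consuming it fails with an unknown blob error:

layer {
  name: "data_lmdb"
  type: "Data"
  top: "data"
  include { phase: TRAIN } # layer is instantiated in TRAIN only
  data_param {
    source: "some/train/path" # made-up path
    batch_size: 4
    backend: LMDB
  }
}
# No matching TEST-phase layer: in TEST the blob "data" is never
# produced, so any layer with bottom: "data" hits an unknown blob error.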
Can anybody see why I am getting the Unknown blob input data_lmdb to layer 0 error? From the train log files I can see that it crashes as soon as it reads the train.prototxt file (it doesn't even reach the Creating layer part).
My prototxt files are as follows:
solver.prototxt
net: "train.prototxt" # Change this to the absolute path to your model file
test_initialization: false
test_iter: 1
test_interval: 1000000
base_lr: 0.01
lr_policy: "fixed"
gamma: 1.0
stepsize: 2000
display: 20
momentum: 0.9
max_iter: 10000
weight_decay: 0.0005
snapshot: 100
snapshot_prefix: "set_snapshot_name" # Absolute path to output solver snapshots
solver_mode: GPU
train.prototxt
(first two layers only; they are followed by an LRN normalization layer and then a Convolution layer):
name: "my_net"
layer {
  name: "data_lmdb"
  type: "Data"
  top: "slice_input"
  data_param {
    source: "data/train"
    batch_size: 4
    backend: LMDB
  }
}
layer {
  name: "slice_input"
  type: "Slice"
  bottom: "data_lmdb" # 4-channels = rgb+truth
  top: "data"
  top: "label"
  slice_param {
    axis: 1
    slice_point: 3
  }
}
The first few layer definitions in inference.prototxt are identical to those in train.prototxt (which shouldn't matter anyway, as it is not used in training), except for the following:

- in data_lmdb the source path is different (data/test)
- the data_lmdb layer uses batch_size: 1
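Put together, the data layer in inference.prototxt looks like this (reconstructed from the two differences above; everything else matches train.prototxt):

layer {
  name: "data_lmdb"
  type: "Data"
  top: "slice_input"
  data_param {
    source: "data/test" # test set instead of train set
    batch_size: 1       # single image per forward pass
    backend: LMDB
  }
}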
Please do let me know if I need to include any more information or layers. I was trying to keep it brief, which didn't really work out in the end.