
I am trying to do pixel-wise classification with caffe, so I need to provide a ground truth image of the same size as the input image. There are several ways of doing this, and I decided to set up my input as a 4-channel LMDB (according to the 2nd point of this answer). This requires me to add a Slice layer after my input, which is also outlined in the same answer.
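For completeness, the 4-channel LMDB is built roughly along these lines with pycaffe and the lmdb Python package (a minimal sketch, not my exact conversion script; the helper name write_4channel_lmdb and the in-memory images/labels arrays are just for illustration):

import lmdb
import numpy as np
import caffe

def write_4channel_lmdb(images, labels, db_path):
    """Pack each RGB image and its single-channel ground truth map
    into one 4-channel datum and store it in an LMDB."""
    # map_size is an upper bound on the database size in bytes
    env = lmdb.open(db_path, map_size=int(1e12))
    with env.begin(write=True) as txn:
        for i, (img, gt) in enumerate(zip(images, labels)):
            # img: H x W x 3 uint8, gt: H x W uint8 -> stack to 4 x H x W
            stacked = np.concatenate(
                [img.transpose(2, 0, 1), gt[np.newaxis, ...]], axis=0)
            datum = caffe.io.array_to_datum(stacked)
            txn.put('{:08d}'.format(i).encode('ascii'),
                    datum.SerializeToString())
    env.close()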

I keep getting Unknown blob input data_lmdb to layer 0 as an error message (data_lmdb is supposed to be my very bottom input layer). I found that an unknown blob error (be it top or bottom) is mostly caused by forgetting to define something in one of the TRAIN / TEST phases while defining it in the other (e.g. this question, or this one). But I am using a combination of train.prototxt, inference.prototxt and solver.prototxt files that I have previously used, only replacing the input layers from HDF5 to LMDB (for a bit of practice), so everything should be defined.

Can anybody see why I am getting the Unknown blob input data_lmdb to layer 0 error? From the train log files I can see that it crashes as soon as it reads the train.prototxt file (it doesn't even reach the Creating layer part).

My prototxt files are as follows:

solver.prototxt

net: "train.prototxt"       # Change this to the absolute path to your model file
test_initialization: false
test_iter: 1
test_interval: 1000000
base_lr: 0.01
lr_policy: "fixed"
gamma: 1.0
stepsize: 2000
display: 20
momentum: 0.9
max_iter: 10000
weight_decay: 0.0005
snapshot: 100
snapshot_prefix: "set_snapshot_name"    # Absolute path to output solver snapshots
solver_mode: GPU

train.prototxt (first two layers only; they are followed by an LRN (local response normalization) layer and then a Convolution layer):

name: "my_net"
layer {
  name: "data_lmdb"
  type: "Data"
  top: "slice_input"
  data_param {
    source: "data/train"
    batch_size: 4
    backend: LMDB
  }
}
layer {
  name: "slice_input"
  type: "Slice"
  bottom: "data_lmdb" # 4-channels = rgb+truth
  top: "data"
  top: "label"
  slice_param {
    axis: 1
    slice_point: 3  
  }
}
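Just to make explicit what I expect the Slice layer to do: with axis: 1 and slice_point: 3, the 4-channel input blob should be split into a 3-channel data blob and a 1-channel label blob. The numpy equivalent would be something like this (illustration only, with a hypothetical batch shape):

import numpy as np

# Hypothetical batch shaped like the LMDB output: N x 4 x H x W
blob = np.zeros((4, 4, 360, 480), dtype=np.float32)

# Slice with axis=1, slice_point=3 -> channels [0:3] and [3:4]
data = blob[:, 0:3, :, :]   # N x 3 x H x W  (RGB)
label = blob[:, 3:4, :, :]  # N x 1 x H x W  (ground truth)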

The first few layer definitions in inference.prototxt are identical to those in train.prototxt (which shouldn't matter anyway, as it is not used for training), except for the following:

  • in data_lmdb the source path is different (data/test)
  • in data_lmdb layer uses batch_size: 1

Please do let me know if I need to include any more information or layers. I was trying to keep it brief, which didn't really work out in the end.

penelope

1 Answer


The message Unknown blob input points to a non-existent blob that some layer wants to use as input. Your slice_input layer specifies data_lmdb as its input blob, but there is no such blob in your network; there is only a layer with that name. Blob names are defined by the top field, which in this case is slice_input.

You should either change top: "slice_input" to top: "data_lmdb" in your data_lmdb layer, or change the Slice layer's bottom to bottom: "slice_input" # 4-channels = rgb+truth.

However, for clearer naming I would suggest the following:

name: "my_net"
layer {
  name: "data"
  type: "Data"
  top: "data_and_label"
  data_param {
    source: "data/train"
    batch_size: 4
    backend: LMDB
  }
}
layer {
  name: "slice_input"
  type: "Slice"
  bottom: "data_and_label" # 4-channels = rgb+truth
  top: "data"
  top: "label"
  slice_param {
    axis: 1
    slice_point: 3  
  }
}
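If it helps, you can quickly check which blobs actually exist (as opposed to layer names) with pycaffe. Assuming the corrected file is saved as train.prototxt and the data/train LMDB is reachable from the working directory, something like this prints both lists:

import caffe

# Load the network in TRAIN phase without weights
net = caffe.Net('train.prototxt', caffe.TRAIN)

print('layers:', list(net._layer_names))  # e.g. ['data', 'slice_input', ...]
print('blobs: ', list(net.blobs.keys()))  # e.g. ['data_and_label', 'data', 'label', ...]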
Dmytro Prylipko
  • Oh this explains so much, thank you. Apparently I need to go and re-read some of the introductory materials for caffe; I wasn't aware that layers and the blobs they define are different; probably since people tend to name them the same. – penelope Jan 31 '19 at 15:13