
Ok, I had a previous question about using the SPP layer in Caffe. This question is a follow-up to that one.

When using the SPP layer I get the error output below. It seems that the feature maps are getting too small by the time they reach the SPP layer? The images I use are small: the width ranges between 10 and 20 px and the height between 30 and 35 px. (A short sketch of the size arithmetic behind the failed check follows the stack trace.)

I0719 12:18:22.553256 2114932736 net.cpp:406] spatial_pyramid_pooling <- conv2
I0719 12:18:22.553261 2114932736 net.cpp:380] spatial_pyramid_pooling -> pool2
F0719 12:18:22.553505 2114932736 pooling_layer.cpp:74] Check failed: pad_w_ < kernel_w_ (1 vs. 1) 
*** Check failure stack trace: ***
    @        0x106afcb6e  google::LogMessage::Fail()
    @        0x106afbfbe  google::LogMessage::SendToLog()
    @        0x106afc53a  google::LogMessage::Flush()
    @        0x106aff86b  google::LogMessageFatal::~LogMessageFatal()
    @        0x106afce55  google::LogMessageFatal::~LogMessageFatal()
    @        0x1068dc659  caffe::PoolingLayer<>::LayerSetUp()
    @        0x1068ffd98  caffe::SPPLayer<>::LayerSetUp()
    @        0x10691123f  caffe::Net<>::Init()
    @        0x10690fefe  caffe::Net<>::Net()
    @        0x106927ef8  caffe::Solver<>::InitTrainNet()
    @        0x106927325  caffe::Solver<>::Init()
    @        0x106926f95  caffe::Solver<>::Solver()
    @        0x106935b46  caffe::SGDSolver<>::SGDSolver()
    @        0x10693ae52  caffe::Creator_SGDSolver<>()
    @        0x1067e78f3  train()
    @        0x1067ea22a  main
    @     0x7fff9a3ad5ad  start
    @                0x5  (unknown)
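
For context, the check that fires here comes from how the SPP layer derives a pooling kernel and padding for each pyramid level. The following is a rough Python sketch of that arithmetic, based on my reading of Caffe's `src/caffe/layers/spp_layer.cpp` (the example dimension is made up, not taken from my net):

    # Sketch (not the Caffe source itself) of the kernel/pad the SPP layer
    # derives along one dimension for a given pyramid level.
    import math

    def spp_pooling_param(dim, pyramid_level):
        """Kernel size and padding used along one spatial dimension."""
        num_bins = 2 ** pyramid_level
        kernel = int(math.ceil(dim / float(num_bins)))
        remainder = kernel * num_bins - dim   # pixels that must be padded
        pad = (remainder + 1) // 2
        return kernel, pad

    # Example: a feature map only 2 px wide, pooled at pyramid level 2 (4 bins)
    kernel_w, pad_w = spp_pooling_param(dim=2, pyramid_level=2)
    print(kernel_w, pad_w)  # -> 1 1

With a very small feature map the derived padding is no longer smaller than the kernel, so the pooling layer's `pad_w_ < kernel_w_` check fails with exactly the "(1 vs. 1)" message above.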
It seems like you are correct: your images are too small. You might consider padding the conv layers you are using, or avoiding pooling, to keep the intermediate feature maps large enough. – Shai Jul 19 '17 at 12:01

1 Answer


I was correct, my images were too small. I changed my net and it worked: I removed one conv layer and replaced the normal pooling layer with the SPP layer. I also had to set my test batch size to 1. Accuracy was very high, but my F1 score went down. I don't know if this is related to the small test batch size I had to use.
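
To rule the batch size out, F1 can be computed offline over the whole test set with pycaffe, independent of the batch size used during training. A minimal sketch, assuming a trained snapshot `snapshot.caffemodel`, this prototxt saved as `train_test.prototxt`, and 100 test images (all three are placeholders, not values from my setup):

    # Sketch: macro F1 over the full test set with pycaffe.
    import numpy as np
    import caffe
    from sklearn.metrics import f1_score

    NUM_TEST_IMAGES = 100  # placeholder: size of the test LMDB

    caffe.set_mode_cpu()
    net = caffe.Net('train_test.prototxt', 'snapshot.caffemodel', caffe.TEST)

    y_true, y_pred = [], []
    for _ in range(NUM_TEST_IMAGES):
        net.forward()  # TEST batch_size is 1, so one image per forward pass
        y_pred.append(int(net.blobs['ip2'].data.argmax()))
        y_true.append(int(net.blobs['label'].data.flat[0]))

    print('macro F1:', f1_score(y_true, y_pred, average='macro'))

Aggregated this way, the F1 score should not depend on the test batch size at all.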


Net:

name: "TessDigitMean"
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/Users/rvaldez/Documents/Datasets/Digits/SeperatedProviderV3_1020_SPP/784/caffe/train_lmdb"
    batch_size: 1 #64
    backend: LMDB
  }
}
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/Users/rvaldez/Documents/Datasets/Digits/SeperatedProviderV3_1020_SPP/784/caffe/test_lmdb"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    pad_w: 2
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

layer {
  name: "spatial_pyramid_pooling"
  type: "SPP"
  bottom: "conv1"
  top: "pool2"
  spp_param {
    pyramid_height: 2
  }
} 
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
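
As a quick sanity check that this net now clears the SPP check, here is a rough sketch of the size arithmetic (using the smallest input sizes from the question, width 10 px and height 30 px, and the same kernel/pad rule as in the sketch above):

    # Sketch: verify the conv1 output is large enough for every SPP pyramid level.
    import math

    def conv_out(dim, kernel=5, pad=0, stride=1):
        return (dim + 2 * pad - kernel) // stride + 1

    def spp_ok(dim, pyramid_height):
        for level in range(pyramid_height):
            num_bins = 2 ** level
            kernel = int(math.ceil(dim / float(num_bins)))
            pad = (kernel * num_bins - dim + 1) // 2
            if pad >= kernel:
                return False
        return True

    w, h = 10, 30                  # smallest input from the question
    w_out = conv_out(w, pad=2)     # pad_w: 2 keeps the width at 10
    h_out = conv_out(h, pad=0)     # height shrinks to 26
    print(w_out, h_out, spp_ok(w_out, 2), spp_ok(h_out, 2))  # 10 26 True True

With only one conv layer (pad_w: 2, stride 1) and no intermediate pooling, the smallest feature map entering the SPP layer is about 10x26, which is comfortably large enough for pyramid_height: 2.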