
I'd like to use Caffe to extract image features. However, it takes too long to process an image, so I'm looking for ways to optimize for speed.

One thing I noticed is that the network definition I'm using has four extra layers on top of the one from which I'm reading a result (and there are no feedback signals, so they should be safe to delete).

I tried to delete them from the definition file, but it had no effect at all. I guess I might need to remove the corresponding part of the file that contains the pre-trained weights, too. That is, however, a binary file (a protocol buffer), so editing it is not that easy.

Do you think that removing the four layers might have a profound effect on the net's performance?

If so, how do I get familiar with the file contents so that I can edit it, and how do I know which parts to remove?

Dušan Rychnovský

3 Answers


First, I don't think removing the weights from the binary file will have any effect.
Second, you can do it easily using the Python interface: see this tutorial.
Last but not least, have you tried running caffe time to measure the performance of your net? This may help you identify the bottlenecks in your computation.

P.S. You might find this thread relevant as well.
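
Regarding the Python interface, one way to avoid executing the extra layers at all is to stop the forward pass at the layer you read from. A minimal sketch, assuming placeholder file paths and an 'fc7' layer name rather than your actual net:

    import caffe

    caffe.set_mode_cpu()   # or caffe.set_mode_gpu()

    net = caffe.Net('deploy.prototxt', 'pretrained.caffemodel', caffe.TEST)

    # Preprocess an image to match the net's input blob
    # (mean subtraction etc. omitted for brevity).
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))   # HxWxC -> CxHxW
    img = caffe.io.load_image('image.jpg')
    net.blobs['data'].data[...] = transformer.preprocess('data', img)

    # Stop the forward pass at the layer you actually need;
    # the layers above it are simply never run.
    net.forward(end='fc7')
    features = net.blobs['fc7'].data.copy()
    print(features.shape)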

Shai

A caffemodel stores data as key-value pairs. Caffe only copies weights for those layers (in train.prototxt) whose names exactly match the layer names in the caffemodel. Hence I don't think removing the binary weights will make a difference. If you want to change the network structure, just modify train.prototxt and deploy.prototxt.
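
To see the name-matching behaviour in action, here is a minimal pycaffe sketch (the file names are placeholders); loading a truncated deploy file together with the full weights file simply skips the parameters of the deleted layers:

    import caffe

    caffe.set_mode_cpu()

    # deploy_truncated.prototxt: a copy of the original deploy file with the
    # four top layers removed; the remaining layer names are left unchanged.
    net = caffe.Net('deploy_truncated.prototxt', 'pretrained.caffemodel', caffe.TEST)

    # Weights are copied only for layers whose names appear in both files;
    # parameters of the deleted layers are silently ignored.
    print(net.params.keys())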

If you insist on removing weights from the binary file, follow this Caffe example.
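
If the goal is a smaller weights file, one possible route in the spirit of that example (a sketch with placeholder paths, not the example's exact code) is to load the truncated definition with the full weights and save the result:

    import caffe

    net = caffe.Net('deploy_truncated.prototxt', 'pretrained.caffemodel', caffe.TEST)
    # The saved file contains only the parameters of the layers that survived,
    # so the weights of the removed layers are dropped.
    net.save('pretrained_truncated.caffemodel')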

And to make sure you delete the right part, this visualizing tool should help.
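
If you prefer an offline alternative to the linked tool, Caffe's own drawing utility can render the layer graph as well; a sketch assuming pydot/graphviz are installed and a placeholder path:

    import caffe
    from caffe.proto import caffe_pb2
    from google.protobuf import text_format

    # Parse the network definition into a NetParameter message.
    net_param = caffe_pb2.NetParameter()
    with open('deploy.prototxt') as f:
        text_format.Merge(f.read(), net_param)

    # Render the layer graph so you can see which layers sit above
    # the one you read your features from.
    caffe.draw.draw_net_to_file(net_param, 'net.png', rankdir='TB')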

Lyken

I would retrain on a smaller input size, change strides, etc. However, if you want to reduce file size, I'd suggest quantizing the weights (https://github.com/yuanyuanli85/CaffeModelCompression) and then applying something like LZMA compression (xz on Unix). We do this so we can deploy to mobile devices. 8-bit weights compress nicely.
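
To illustrate the quantization idea itself (this is not the linked tool's API, just a sketch with placeholder paths), you can map each float32 blob to 8-bit codes with NumPy and compress the result afterwards:

    import numpy as np
    import caffe

    net = caffe.Net('deploy.prototxt', 'pretrained.caffemodel', caffe.TEST)

    arrays = {}
    for name, params in net.params.items():
        for i, blob in enumerate(params):
            w = blob.data
            lo, hi = float(w.min()), float(w.max())
            scale = (hi - lo) / 255.0 or 1.0
            # Each weight becomes an unsigned 8-bit code; keeping lo and scale
            # lets you reconstruct it approximately as lo + code * scale.
            key = '%s_%d' % (name, i)
            arrays[key] = np.round((w - lo) / scale).astype(np.uint8)
            arrays[key + '_range'] = np.array([lo, scale], dtype=np.float32)

    # The uint8 arrays are ~4x smaller than float32 and compress well with xz.
    np.savez('quantized_weights.npz', **arrays)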

Joel Teply