
I am wondering how to set up ONLY a test phase in Caffe for an LMDB file. I have already trained my model, everything seems good, my loss has decreased, and the output I get on images loaded one by one also looks good.

Now I would like to see how my model performs on a separate LMDB test set, but I seem unable to do so successfully. Looping over images and loading them one at a time is not ideal for me, since my loss function is already defined in Caffe and this would require me to redefine it.

This is what I have so far, but the results don't make sense: when I compare the loss from the train set to the loss I get here, they don't match (they are orders of magnitude apart). Does anyone have any idea what my problem could be?

import caffe
import numpy as np

caffe.set_device(0)
caffe.set_mode_gpu()

net = caffe.Net('/home/jeremy/Desktop/caffestuff/JP_Kitti/all_proto/mirror_shuffle/deploy_JP.prototxt',
                '/home/jeremy/Desktop/caffestuff/JP_Kitti/all_proto/mirror_shuffle/snapshot_iter_10000.caffemodel',
                caffe.TEST)

solver = None  # workaround for lmdb data (can't instantiate two solvers on the same data)
solver = caffe.SGDSolver('/home/jeremy/Desktop/caffestuff/JP_Kitti/all_proto/mirror_shuffle/lenet_auto_solverJP_test.prototxt')
niter = 100
test_loss = np.zeros(niter)
for it in range(niter):
    solver.test_nets[0].forward()  # forward pass only; no weights are updated

    # store the test loss (copy the scalar out of the blob with .data)
    test_loss[it] = solver.test_nets[0].blobs['loss'].data
    print(solver.test_nets[0].blobs['loss'].data)
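Once the loop finishes, the per-batch losses can be collapsed to a single number by averaging (a minimal sketch with made-up values; `test_loss` here stands in for the array filled by the loop above):

```python
import numpy as np

# hypothetical per-batch losses, standing in for the array filled by the loop
test_loss = np.array([0.52, 0.47, 0.61, 0.55])

# average over the test batches to get one loss for the whole test pass
mean_loss = test_loss.mean()
print(mean_loss)  # -> 0.5375
```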
jerpint
1 Answer


See my answer here. Do not forget to subtract the mean, otherwise you'll get low accuracy. The code linked above takes care of that.
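For reference, mean subtraction itself is just an element-wise operation on the input array before it is fed to `net.forward()`. A minimal numpy sketch (the per-channel values below are the commonly used ImageNet-style BGR means, a placeholder, not necessarily the right mean for this model; ideally use the mean computed over your own training LMDB):

```python
import numpy as np

# hypothetical per-channel BGR mean (ImageNet-style placeholder)
mean = np.array([104.0, 117.0, 123.0]).reshape(3, 1, 1)

# hypothetical input image in Caffe's (channels, height, width) layout
image = np.full((3, 8, 8), 128.0)

# broadcast subtraction over height and width, done before net.forward()
preprocessed = image - mean
print(preprocessed[:, 0, 0])  # -> [24. 11.  5.]
```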

Harsh Wardhan
  • Thanks, but it seems to me there should be a simpler way. In your code, you have to manually tell the net how to sift through the data, going through it all one item at a time. Yet during the training phase, for the exact same data structure, Caffe takes care of it all automatically. Do you know of a way to do something similar to that? – jerpint Jun 28 '16 at 15:07
  • Just as a follow-up: using techniques similar to yours, it is taking a really long time to go through my test set, which is significantly smaller than my train set. I suppose Caffe's structure is optimized to be highly parallel for this type of computation, which is why I am trying to avoid bottlenecking it. – jerpint Jun 29 '16 at 01:47
  • Did you try this ``caffe test -model=models/bvlc_reference_caffenet/train_val.prototxt --weights=models/bvlc_reference_caffenet/.caffemodel --iterations=6400 --gpu=0`` ? – Harsh Wardhan Jun 29 '16 at 04:54
  • Not yet, but I will give it a go, how can I store results if I go about using the terminal directly? – jerpint Jun 29 '16 at 14:39
  • Just append ``>> outfile.txt`` after the above command. And this is a basic linux thing. – Harsh Wardhan Jun 29 '16 at 14:41
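One caveat with that redirect: Caffe logs through glog, which writes to stderr, so ``>> outfile.txt`` alone can leave the file empty; appending ``2>&1`` captures the log as well. A minimal sketch, using a stand-in function in place of the real ``caffe`` binary:

```shell
# stand-in for the caffe binary, which (via glog) prints its output to stderr
fake_caffe_test() { echo "Test net output #0: loss = 0.42" >&2; }

fake_caffe_test >> outfile.txt 2>&1   # without '2>&1', outfile.txt would stay empty
cat outfile.txt
```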