I am training AlexNet on my own data using Caffe. One issue I see is that the "Train net output" loss and the "iteration loss" are nearly the same during training. Moreover, the loss fluctuates, like:
...
Iteration 900, loss 0.649719
Train net output #0: loss = 0.649719 (* 1 = 0.649719 loss)
Iteration 900, lr = 0.001
...
Iteration 1000, loss 0.892498
Train net output #0: loss = 0.892498 (* 1 = 0.892498 loss)
Iteration 1000, lr = 0.001
...
Iteration 1100, loss 0.550938
Train net output #0: loss = 0.550944 (* 1 = 0.550944 loss)
Iteration 1100, lr = 0.001
...
- Should I expect this fluctuation? (I plot the trend with the sketch below.)
- As you can see, the difference between the "Train net output" loss and the "iteration loss" is not significant. Does this indicate a problem with my training?
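To check whether the loss is actually trending down under the noise, I parse the display lines and plot a moving average. This is a minimal sketch assuming the log format shown above; `caffe_train.log` is a hypothetical filename where I save the solver's console output:

```python
import re
import matplotlib.pyplot as plt

# Hypothetical path: console output saved via
# `caffe train --solver=solver.prototxt 2>&1 | tee caffe_train.log`
LOG_PATH = "caffe_train.log"

iters, losses = [], []
with open(LOG_PATH) as f:
    for line in f:
        # Matches display lines like "Iteration 900, loss 0.649719"
        m = re.search(r"Iteration (\d+), loss =? ?([\d.]+)", line)
        if m:
            iters.append(int(m.group(1)))
            losses.append(float(m.group(2)))

# Moving average over the last k displayed losses to expose the trend.
k = 5
smoothed = []
for i in range(len(losses)):
    window = losses[max(0, i - k + 1):i + 1]
    smoothed.append(sum(window) / len(window))

plt.plot(iters, losses, label="raw loss")
plt.plot(iters, smoothed, label="moving average (k=%d)" % k)
plt.xlabel("iteration")
plt.ylabel("training loss")
plt.legend()
plt.show()
```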
My solver is:
net: "/train_val.prototxt"
test_iter: 1999
test_interval: 10441
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 100000
display: 100
max_iter: 208820
momentum: 0.9
weight_decay: 0.0005
snapshot: 10441
snapshot_prefix: "/caffe_alexnet_train"
solver_mode: GPU
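For reference, my understanding is that with lr_policy: "step" the effective learning rate is base_lr * gamma ^ floor(iter / stepsize), so with these settings the rate drops only twice before max_iter. A minimal sketch of the resulting schedule:

```python
# Caffe "step" policy: lr = base_lr * gamma ^ floor(iter / stepsize)
base_lr, gamma, stepsize = 0.001, 0.1, 100000

def step_lr(iteration):
    return base_lr * gamma ** (iteration // stepsize)

for it in (0, 99999, 100000, 199999, 200000, 208819):
    print(it, step_lr(it))
# iterations      0-99999  -> 0.001
# iterations 100000-199999 -> 0.0001
# iterations 200000-208819 -> ~1e-05 (max_iter is 208820)
```

So all of the iterations in the log above (900-1100) run at the initial rate of 0.001, which matches the `lr = 0.001` lines.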