
I'm trying to implement FCN-8s with my own custom data. While training from scratch, I see that my loss = -nan.

Could someone suggest what's going wrong and how I can correct it? The train_val.prototxt is the same as the one given in the link above. My custom images are of size 3x512x640 and the labels are of size 1x512x640; there are 11 different label classes. My solver.prototxt is as follows:

net: "/home/ubuntu/CNN/train_val.prototxt"
test_iter: 13
test_interval: 500
display: 20
average_loss: 20
lr_policy: "fixed"
base_lr: 1e-4
momentum: 0.99
iter_size: 1
max_iter: 3000
weight_decay: 0.0005
snapshot: 200
snapshot_prefix: "train"
test_initialization: false
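As a side note on the settings above: with SGD-style momentum, a repeated gradient of roughly constant magnitude accumulates into a velocity about 1/(1 - momentum) times larger than a single step, so momentum: 0.99 amplifies updates by up to ~100x. A toy illustration (not Caffe code; the variable names and the simplified update rule are my own, ignoring Caffe's exact sign convention):

```python
# Toy momentum accumulation: with a constant gradient g, the velocity
# v_{t+1} = m * v_t + lr * g converges to lr * g / (1 - m).
lr, m, g = 1e-4, 0.99, 1.0
v = 0.0
for _ in range(2000):
    v = m * v + lr * g

# For m = 0.99 the steady-state amplification is 1 / (1 - 0.99) = 100x,
# which is why a base_lr that looks small can still blow up the weights.
print(v / (lr * g))  # approaches 100
```

This is only a heuristic sketch of why high momentum plus an untuned learning rate is a common suspect when a from-scratch run diverges to nan.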
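Separately, since there are 11 label classes, another thing worth ruling out is a stray label value outside the range 0..10 (e.g. a 255 "void" pixel left over from mask conversion), which makes SoftmaxWithLoss index past the score blob and can produce nan. A minimal sanity check one might run over the label maps; the function name and num_classes default are my own, not from the question:

```python
import numpy as np

def labels_in_range(label_img, num_classes=11):
    """Return True iff every pixel is a valid class index in 0..num_classes-1."""
    arr = np.asarray(label_img)
    return bool(arr.min() >= 0 and arr.max() < num_classes)

# Example with a synthetic 1x512x640 label map like the ones described above:
good = np.random.randint(0, 11, size=(1, 512, 640))
bad = good.copy()
bad[0, 0, 0] = 255               # stray ignore/void value from mask conversion
print(labels_in_range(good))     # True
print(labels_in_range(bad))      # False
```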
Abhilash Panigrahi
  • what values of loss you see before it becomes `nan`? – Shai Nov 18 '15 at 17:35
  • Possible duplicate of [Common causes of nans during training](http://stackoverflow.com/questions/33962226/common-causes-of-nans-during-training) – Shai Dec 02 '15 at 11:57

0 Answers