
I have created my own LMDB database consisting of (non-image) 2D numpy arrays; the values vary between 0 and MAX_INT.

Each sample is 1500*75 (the original is 15000*77, which is very slow to train on). I have 2400 samples, split 5/6 for training and 1/6 for testing. My net has 2 classes only (0, 1).
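
For reference, the data is written to LMDB roughly like this (a minimal sketch, not my exact code; the function name, shapes, and `map_size` are illustrative):

```python
import lmdb
import numpy as np
from caffe.proto import caffe_pb2

# X: (N, 1500, 75) array of samples, y: (N,) array of 0/1 labels
def write_lmdb(db_path, X, y):
    # map_size must exceed the total serialized size of the database
    env = lmdb.open(db_path, map_size=X.nbytes * 10)
    with env.begin(write=True) as txn:
        for i, (sample, label) in enumerate(zip(X, y)):
            datum = caffe_pb2.Datum()
            datum.channels = 1
            datum.height = sample.shape[0]   # 1500
            datum.width = sample.shape[1]    # 75
            # float_data keeps the raw values; the bytes `data` field
            # would force them into uint8
            datum.float_data.extend(sample.astype(float).flat)
            datum.label = int(label)
            txn.put('{:08d}'.format(i).encode('ascii'),
                    datum.SerializeToString())
    env.close()
```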

The network isn't learning at all; I keep getting the same values over and over again. The loss is either 0 or 87.3366, and the accuracy is always 0.495, from iteration 0 until iteration 20K.
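
(For what it's worth, 87.3366 appears to be -log(FLT_MIN) in 32-bit floats, which is reportedly what Caffe's SoftmaxWithLoss prints when the predicted probability of the true class underflows to zero and gets clamped at FLT_MIN:)

```python
import numpy as np

# smallest normalized positive float32 (FLT_MIN)
flt_min = np.finfo(np.float32).tiny
print(-np.log(flt_min))  # ~87.3365, matching the stuck loss value
```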

I've tried every possible solution: adjusting parameters, deepening the network, changing the whole network entirely. What am I doing wrong?

Z.Kal
  • use `debug_info: true`: http://stackoverflow.com/q/40510706/1714410 – Shai Dec 20 '16 at 09:23
  • I followed your notes about the things I should be looking for; none of them is present: no NaNs, no values less than e+8, and no zero gradients. Yet it's not learning! – Heba Alawneh Jan 04 '17 at 04:41
  • It is very difficult from your description to understand and figure out what exactly is wrong. Have you tried [Batch Normalization](http://stackoverflow.com/q/41269570/1714410)? – Shai Jan 04 '17 at 06:31
  • It's not working either. I created a simpler version of my dataset to try to figure out where the error is, and I noticed that the learning behavior is exactly the same, even though the fake data should be very straightforward to learn. Any idea what the reason might be? I used this code to create my lmdb file (a read-back sanity check is sketched after these comments): http://deepdish.io/2015/04/28/creating-lmdb-in-python/ – Heba Alawneh Jan 18 '17 at 04:21
  • Follow-up question: http://stackoverflow.com/questions/42216766/strange-training-behavior-of-nn-using-caffe – Heba Alawneh Feb 14 '17 at 05:58
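
A sketch of the read-back sanity check mentioned above, assuming the Datum layout from the snippet in the question (the database path is a placeholder):

```python
import lmdb
import numpy as np
from caffe.proto import caffe_pb2

# open the database read-only and decode the first few entries
env = lmdb.open('train_lmdb', readonly=True)
with env.begin() as txn:
    cursor = txn.cursor()
    for i, (key, value) in enumerate(cursor):
        datum = caffe_pb2.Datum()
        datum.ParseFromString(value)
        sample = np.array(datum.float_data, dtype=np.float32)
        sample = sample.reshape(datum.channels, datum.height, datum.width)
        # verify shapes, value ranges, and that labels are 0/1
        print(key, sample.shape, sample.min(), sample.max(), datum.label)
        if i >= 4:
            break
env.close()
```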

0 Answers