I've recently started implementing a feed-forward neural network and I'm using back-propagation as the learning method. I've been using http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html as a guide.
However, after just the first epoch my error drops to 0. Before using the network for my real purpose, I've tried it on a simple network structure:
- 4 binary inputs: 1, 1, 0, 0.
- 2 hidden layers, 4 neurons each.
- 1 output neuron; an output of 1.0 should mean "valid input".
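For concreteness, here's a minimal sketch of that setup in NumPy (the weight shapes follow the structure above; the uniform init range and the variable names are just illustrative, not anything principled):

```python
import numpy as np

rng = np.random.default_rng(0)

inputs = np.array([1.0, 1.0, 0.0, 0.0])     # the 4 binary test inputs
w_hidden1 = rng.uniform(-0.5, 0.5, (4, 4))  # input -> hidden layer 1
w_hidden2 = rng.uniform(-0.5, 0.5, (4, 4))  # hidden layer 1 -> hidden layer 2
w_output = rng.uniform(-0.5, 0.5, (4, 1))   # hidden layer 2 -> output neuron

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))
```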
Each training epoch feeds the test input (1, 1, 0, 0) forward, calculates the output error as sigmoid derivative * (1.0 - output), back-propagates the error, and finally adjusts the weights.
Each weight is updated as: new weight = weight + learning_rate * the neuron's error * the input to that weight.
Each hidden neuron's error = (sum over the next layer's neurons of that neuron's error * the connecting weight) * the hidden neuron's own sigmoid derivative.
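Continuing from the setup sketch above, here's roughly what one of my training steps boils down to (biases omitted for brevity; the `delta_*` names are just for illustration):

```python
def sigmoid_derivative(activation):
    # derivative of the sigmoid, expressed in terms of its output
    return activation * (1.0 - activation)

target = 1.0
learning_rate = 0.0001

# forward pass
h1 = sigmoid(inputs @ w_hidden1)
h2 = sigmoid(h1 @ w_hidden2)
out = sigmoid(h2 @ w_output)

# output error: sigmoid derivative * (target - output), with target = 1.0
delta_out = sigmoid_derivative(out) * (target - out)

# hidden errors: (sum of next layer's errors * connecting weights) * own derivative
delta_h2 = (w_output @ delta_out) * sigmoid_derivative(h2)
delta_h1 = (w_hidden2 @ delta_h2) * sigmoid_derivative(h1)

# weight updates: weight += learning_rate * neuron's error * input to that weight
w_output += learning_rate * np.outer(h2, delta_out)
w_hidden2 += learning_rate * np.outer(h1, delta_h2)
w_hidden1 += learning_rate * np.outer(inputs, delta_h1)

error = target - out[0]  # the scalar error I track between epochs
```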
The issue is that my learning rate has to be as small as 0.0001 for me to see any sort of 'progress' in the error between epochs; in that case, the error starts at around 30.0. With any larger learning rate, the error drops to 0 after the first pass, which then produces false positives.
I also get the same issue when I try this network with my real data (a set of 32 audio features per sample, with 32 neurons per hidden layer), to the point where any noise will trigger a false positive. This could possibly be an input feature issue, but since I'm testing with a high-pitched note, I can clearly see that the raw data differs from that of a low-pitched one.
I'm a neural networks newbie, so I'm almost positive the issue is with my network. Any help would be greatly appreciated.