
First I want to say that I'm really new to neural networks and I don't understand them very well ;)

I've made my first C# implementation of a backpropagation neural network. I've tested it with XOR and it looks like it works.

Now I would like to change my implementation to use resilient backpropagation (Rprop - http://en.wikipedia.org/wiki/Rprop).

The definition says: "Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each weight."

Could somebody tell me what the partial derivative over all patterns is? And how should I compute this partial derivative for a neuron in a hidden layer?

Thanks a lot

UPDATE:

My implementation is based on this Java code: www.dia.fi.upm.es/~jamartin/downloads/bpnn.java

My backPropagate method looks like this:

public double backPropagate(double[] targets)
    {
        double error, change;

        // calculate error terms for output
        double[] output_deltas = new double[outputsNumber];

        for (int k = 0; k < outputsNumber; k++)
        {

            error = targets[k] - activationsOutputs[k];
            output_deltas[k] = Dsigmoid(activationsOutputs[k]) * error;
        }

        // calculate error terms for hidden
        double[] hidden_deltas = new double[hiddenNumber];

        for (int j = 0; j < hiddenNumber; j++)
        {
            error = 0.0;

            for (int k = 0; k < outputsNumber; k++)
            {
                error = error + output_deltas[k] * weightsOutputs[j, k];
            }

            hidden_deltas[j] = Dsigmoid(activationsHidden[j]) * error;
        }

        //update output weights
        for (int j = 0; j < hiddenNumber; j++)
        {
            for (int k = 0; k < outputsNumber; k++)
            {
                change = output_deltas[k] * activationsHidden[j];
                weightsOutputs[j, k] = weightsOutputs[j, k] + learningRate * change + momentumFactor * lastChangeWeightsForMomentumOutpus[j, k];
                lastChangeWeightsForMomentumOutpus[j, k] = change;

            }
        }

        // update input weights
        for (int i = 0; i < inputsNumber; i++)
        {
            for (int j = 0; j < hiddenNumber; j++)
            {
                change = hidden_deltas[j] * activationsInputs[i];
                weightsInputs[i, j] = weightsInputs[i, j] + learningRate * change + momentumFactor * lastChangeWeightsForMomentumInputs[i, j];
                lastChangeWeightsForMomentumInputs[i, j] = change;
            }
        }

        // calculate error
        error = 0.0;

        for (int k = 0; k < outputsNumber; k++)
        {
            error = error + 0.5 * (targets[k] - activationsOutputs[k]) * (targets[k] - activationsOutputs[k]);
        }

        return error;
    }

So can I use the change = hidden_deltas[j] * activationsInputs[i] value as the gradient (partial derivative) for checking the sign?
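
To check my understanding, this is roughly how I imagine collecting those change values over all patterns of one epoch, so that RPROP can look at the sign of each per-weight sum (the gradient arrays and the AccumulateGradients name are only my sketch, they are not in the Java code I ported):

    // sketch: same delta computation as in backPropagate, but instead of changing the
    // weights right away, the "change" values are summed over all patterns of the epoch
    // (gradientOutputs / gradientInputs are new fields, zeroed at the start of every epoch)
    private double[,] gradientOutputs; // same size as weightsOutputs
    private double[,] gradientInputs;  // same size as weightsInputs

    public void AccumulateGradients(double[] targets)
    {
        // error terms for the output layer, exactly as in backPropagate
        double[] output_deltas = new double[outputsNumber];
        for (int k = 0; k < outputsNumber; k++)
        {
            double error = targets[k] - activationsOutputs[k];
            output_deltas[k] = Dsigmoid(activationsOutputs[k]) * error;
        }

        // error terms for the hidden layer
        double[] hidden_deltas = new double[hiddenNumber];
        for (int j = 0; j < hiddenNumber; j++)
        {
            double error = 0.0;
            for (int k = 0; k < outputsNumber; k++)
            {
                error += output_deltas[k] * weightsOutputs[j, k];
            }
            hidden_deltas[j] = Dsigmoid(activationsHidden[j]) * error;
        }

        // these sums would be the (negative) partial derivatives over all patterns
        for (int j = 0; j < hiddenNumber; j++)
            for (int k = 0; k < outputsNumber; k++)
                gradientOutputs[j, k] += output_deltas[k] * activationsHidden[j];

        for (int i = 0; i < inputsNumber; i++)
            for (int j = 0; j < hiddenNumber; j++)
                gradientInputs[i, j] += hidden_deltas[j] * activationsInputs[i];
    }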

Rafal Spacjer
  • I spent yesterday evening debugging my implementation and I'm starting to worry that I don't understand this algorithm. Do you know of any good description of it? – Rafal Spacjer May 20 '10 at 08:44

2 Answers


I think "over all patterns" simply means "in every iteration"... take a look at the RPROP paper.

For the partial derivative: you've already implemented the normal back-propagation algorithm. This is a method for efficiently calculating the gradient... there you calculate the δ values for the single neurons, which are in fact the negative ∂E/∂w values, i.e. the partial derivatives of the global error as a function of the weights.

So instead of scaling the weight update by the magnitude of these values, you take one of two constants (η+ or η−), depending on whether the sign has changed.
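
As a rough sketch of that per-weight rule (my own variable names, the usual default constants, and without the extra back-tracking rules the paper applies when the sign flips), it could look like this:

    // one RPROP update for a single weight; "gradient" is ∂E/∂w summed over all patterns,
    // prevGradient and stepSize are stored per weight between epochs (stepSize starts at e.g. 0.1)
    static double RpropStep(double gradient, ref double prevGradient, ref double stepSize)
    {
        const double etaPlus = 1.2;   // η+
        const double etaMinus = 0.5;  // η-
        const double stepMax = 50.0;
        const double stepMin = 1e-6;

        if (prevGradient * gradient > 0.0)
        {
            // sign unchanged: the last step went in the right direction, grow the step size
            stepSize = Math.Min(stepSize * etaPlus, stepMax);
        }
        else if (prevGradient * gradient < 0.0)
        {
            // sign changed: we jumped over a minimum, shrink the step size
            stepSize = Math.Max(stepSize * etaMinus, stepMin);
        }

        prevGradient = gradient;
        return -Math.Sign(gradient) * stepSize;  // move against the gradient, magnitude is ignored
    }

Note the sign: the change values in your code are the negative ∂E/∂w, so you would either negate their per-epoch sums before passing them into something like this, or flip the sign of the returned step.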

king_nak
  • Would you be so kind and look at my code (above) and tell me if I'm thinking correct – Rafal Spacjer May 19 '10 at 14:40
  • Yes, the change value is the partial derivative. Depending on its sign change, another factor is used to update the weight change (refer to eq. 4-7 in the paper I've linked, as there are some more rules... the ∂E/∂w values are your change variables) – king_nak May 20 '10 at 13:24
  • I think that http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html explains the idea of back-propagation quite well. http://www.learnartificialneuralnetworks.com/backpropagation.html is a more mathematical description of how and why it works – king_nak May 20 '10 at 13:52

The following is part of an implementation of the RPROP training technique in the Encog Artificial Intelligence Library. It should give you an idea of how to proceed. I would recommend downloading the entire library, because it will be easier to go through the source code in an IDE than through the online SVN interface.

http://code.google.com/p/encog-cs/source/browse/#svn/trunk/encog-core/encog-core-cs/Neural/Networks/Training/Propagation/Resilient

http://code.google.com/p/encog-cs/source/browse/#svn/trunk

Note that the code is in C#, but it shouldn't be difficult to translate into another language.

Waleed Al-Balooshi
  • Thx, I will try to review this solution – Rafal Spacjer May 19 '10 at 12:35
  • I have a follow-on question that I've [posted here](http://stackoverflow.com/questions/12146986/part-2-resilient-backpropagation-neural-network). It's just me trying to get a clear understanding of how the partial derivative works for a NN. Any insights appreciated. – Nutritioustim Aug 27 '12 at 17:55