
I don't consider my question a duplicate of this one since I already have a bias in my implementation.

I am trying to implement a perceptron in Erlang and train it to recognize which side of a line a point lies on. The problem is that it does not train properly: its guesses are still only about 50% correct after 50 epochs.

The starting weights are supplied in a list [X_weight, Y_weight, Bias_weight], and the training set is supplied as a list of tuples {X, Y, Desired_guess}, where X and Y are integers and Desired_guess is -1 if the coordinate is under the line or 1 if it is over it.
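
For concreteness, the two arguments look something like this (the numbers are made-up placeholders, not my real values):

Weights = [0.4, -0.2, 0.1],                          % [X_weight, Y_weight, Bias_weight]
Training_set = [{2, 7, 1}, {5, -3, -1}, {1, 1, -1}]. % [{X, Y, Desired_guess}, ...]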

First is the calculation of the new weights:

% Exported starting clause.
% Inputs: a list of input values for one perceptron ([X,Y,Bias]), a list of weights
% corresponding to those inputs ([X_weight, Y_weight, Bias_weight]), the learning
% constant, and the error (Desired - Guess).

train_perceptron([InputsH|InputsT], [WeightsH|WeightsT], Learning_constant, Error) ->
    train_perceptron(InputsT, WeightsT, Learning_constant, Error, 
        [WeightsH + (Learning_constant * Error) * InputsH]).

% Non-exported clause called by train_perceptron/4. It also carries the list of new, adjusted weights.
% When the tail of the input list is empty, this is the last value, and thereby the bias.
train_perceptron([_InputsH], [WeightsH], Learning_constant, Error, Adjusted_weights) ->
    train_perceptron([], [], Learning_constant, Error,
        Adjusted_weights ++ [WeightsH + Learning_constant * Error]);

% Normal case: calculate the new weight and add it to Adjusted_weights
train_perceptron([InputsH|InputsT], [WeightsH|WeightsT], Learning_constant, Error, Adjusted_weights) ->
    train_perceptron(InputsT, WeightsT, Learning_constant, Error,
        Adjusted_weights ++ [WeightsH + (Learning_constant * Error) * InputsH]);

% Base case: the lists are empty, nothing more to do. Return the Adjusted_weights.
train_perceptron([], [],_, _, Adjusted_weights) ->
    Adjusted_weights.
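
Called with placeholder numbers (the module name perceptron is assumed here), it behaves like this:

% perceptron:train_perceptron([2, 7, 1], [0.4, -0.2, 0.1], 0.01, -2).
% -> approximately [0.36, -0.34, 0.08]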

This is the function that calls train_perceptron:

line_trainer(Weights, [], _) ->
    Weights;
line_trainer(Weights, [{X,Y,Desired}|TST], Learning_constant) ->
    Bias = 1,
    Error = Desired - feedforward([X,Y,Bias], Weights),
    Adjusted_weights = train_perceptron([X,Y,Bias], Weights, Learning_constant, Error),
    line_trainer(Adjusted_weights, TST, Learning_constant).
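
line_trainer calls feedforward/2, which I have not pasted here. A minimal sketch of the usual perceptron feedforward, assuming a sign activation that returns 1 or -1:

% Weighted sum of inputs and weights, then a sign activation returning 1 or -1
feedforward(Inputs, Weights) ->
    Sum = lists:sum(lists:zipwith(fun(I, W) -> I * W end, Inputs, Weights)),
    if
        Sum >= 0 -> 1;
        true -> -1
    end.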

One solution could be if someone supplied me with a training set for that kind of function, three starting weights, and the outputs for each epoch. That could help me debug this myself.
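
For example, a training set for the line y = 2x + 1 could be generated like this (the slope, intercept, and coordinate range are arbitrary illustration choices):

% Generate N random integer points labelled against the line Y = 2X + 1
line_training_set(N) ->
    [begin
         X = rand:uniform(100) - 50,
         Y = rand:uniform(100) - 50,
         Desired = case Y > 2 * X + 1 of
                       true -> 1;     % point above the line
                       false -> -1    % point on or below the line
                   end,
         {X, Y, Desired}
     end || _ <- lists:seq(1, N)].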

  • could you review/edit your code, it is very hard to read, and the second line has a syntax error, and no effect if you suppress the extra ')' – Pascal Oct 14 '15 at 22:07
  • @Pascal You were right. A line fell off when I copy-pasted from my editor. Tried to format and comment the code for clarity. – Einar Sundgren Oct 15 '15 at 06:51

1 Answer


This actually works. The training set I supplied was too small. With a larger training set and about 20 epochs, the global error converges to 0.
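
For reference, the outer loop over epochs looks roughly like this; global_error/2 is a hypothetical helper that sums the absolute errors over the whole set:

% Re-run line_trainer/3 over the set until the summed absolute error
% reaches 0 or the epoch budget runs out (global_error/2 is a sketch helper)
train_epochs(Weights, _Training_set, _Learning_constant, 0) ->
    Weights;
train_epochs(Weights, Training_set, Learning_constant, Epochs_left) ->
    New_weights = line_trainer(Weights, Training_set, Learning_constant),
    case global_error(New_weights, Training_set) of
        0 -> New_weights;
        _ -> train_epochs(New_weights, Training_set, Learning_constant, Epochs_left - 1)
    end.

global_error(Weights, Training_set) ->
    lists:sum([abs(Desired - feedforward([X, Y, 1], Weights))
               || {X, Y, Desired} <- Training_set]).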

    Your train_perceptron/4 and train_perceptron/5 do reduce to a single 2 clause arity 4 function BTW: tp([IH], [WH|[]], LC, E) -> [WH + LC * E]; tp([IH|InT], [WH|WT], LC, E) -> [WH + (LC * E) * IH | tp(InT, WT, LC, E)]. – Michael Oct 21 '15 at 14:27
  • @Michael Thanks, a nice solution. It was a while since I did anything in a functional language. Need to improve my style to get away from this redundant sluggishness I'm fumbling with now. – Einar Sundgren Oct 21 '15 at 14:54