I am implementing the perceptron learning algorithm in Python and am unable to decide whether I need to append a value of 1 to each training example or add a separate bias term when working with the weights.

For example, if the training data is:

[7.627531214,2.759262235]
[5.332441248,2.088626775]
[6.922596716,1.77106367]
[8.675418651,-0.242068655]
[7.673756466,3.508563011]

Do I need to append a value of 1 to each training example as below, and why?

[7.627531214,2.759262235,1]
[5.332441248,2.088626775,1]
[6.922596716,1.77106367,1]
[8.675418651,-0.242068655,1]
[7.673756466,3.508563011,1]

Instead of appending a value of 1 to the training data, can I not add a variable (for example, `bias`), assign it the value 1, and use it with the weights? For example:

min_weight = 0
max_weight = 5
bias = 1
weights = [bias, min_weight, max_weight]
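The two formulations are mathematically equivalent: appending a constant 1 to each input and folding the bias into the weight vector computes exactly the same activation as keeping an explicit bias term. A minimal sketch (the weight and bias values are illustrative, not from the question):

```python
x = [7.627531214, 2.759262235]  # one training example from the question
w = [0.4, -0.2]                 # feature weights (arbitrary illustrative values)
b = 1.0                         # bias term

# Option 1: explicit bias added to the weighted sum
activation_explicit = sum(wi * xi for wi, xi in zip(w, x)) + b

# Option 2: augmented input [x1, x2, 1] with the bias folded into the weights
x_aug = x + [1.0]
w_aug = w + [b]
activation_augmented = sum(wi * xi for wi, xi in zip(w_aug, x_aug))

# Both options produce the same activation
assert abs(activation_explicit - activation_augmented) < 1e-12
```

So the "append 1" trick is just a notational convenience that lets the update rule treat the bias like any other weight; which form you pick is a matter of implementation style.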

Do we need to implement a learning rate in the perceptron, and if so, can I use the delta rule and a dot-product method for the learning rate in the perceptron learning procedure?
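For reference, the standard perceptron update is `w += lr * (target - prediction) * x` (a special case of the delta rule with a step activation). A minimal sketch of what I have in mind; the function names, learning rate, and toy data here are illustrative assumptions, not code from the question:

```python
def predict(weights, bias, x):
    # dot product of weights and inputs, plus the bias, through a step function
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation >= 0.0 else 0

def train(data, labels, lr=0.1, epochs=10):
    weights = [0.0] * len(data[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(data, labels):
            error = target - predict(weights, bias, x)
            # perceptron/delta rule: nudge weights toward the target by lr * error
            bias += lr * error
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    return weights, bias
```

With a plain perceptron the learning rate only scales the final weights, so convergence on linearly separable data does not depend on it, but it is conventional to include one.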

  • Why would you want to add the bias in the input? This makes no sense. It adds complexity and contributes nothing to the learning progress of your network. Bias should be used 'inside' the network, not as an input. Check this out: http://stackoverflow.com/questions/2480650/role-of-bias-in-neural-networks . Bias can be added in two ways: 1. for each neuron: `this.output = this.activate(input) + this.bias`, or 2. you can add a `bias neuron` which has no input. – Thomas Wagenaar Apr 20 '17 at 09:43
  • Yes, it makes more sense now. But why do we need to add a value of 1 to the training data? – Mike Randor Apr 20 '17 at 18:39
  • Well, that's what I'm saying: you should not add 1 to the training data. It has no effect at all; the bias is added INSIDE the network. This causes the activation functions to shift, making them more suitable to your case. – Thomas Wagenaar Apr 20 '17 at 18:48
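The commenter's first suggestion, keeping the bias inside each neuron rather than in the input, could be sketched like this (the `Neuron` class and its values are hypothetical, for illustration only; here the bias is added to the weighted sum before the step activation, a common placement for a perceptron):

```python
class Neuron:
    def __init__(self, weights, bias):
        self.weights = weights  # one weight per input feature
        self.bias = bias        # bias stored inside the neuron, not as an input

    def activate(self, inputs):
        # weighted sum plus the neuron's own bias, then a step activation
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1 if total >= 0.0 else 0
```

The second suggestion, a "bias neuron" with no input that always outputs 1, is effectively the same as the append-1 trick from the question, just expressed at the network level instead of in the data.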

0 Answers