I am implementing the perceptron learning algorithm in Python and can't decide whether I need to append a value of 1 to each training example or keep a separate bias term when working with the weights.
For example, if the training data is:
[7.627531214,2.759262235]
[5.332441248,2.088626775]
[6.922596716,1.77106367]
[8.675418651,-0.242068655]
[7.673756466,3.508563011]
Do I need to append a value of 1 to each training example as below, and why?
[7.627531214,2.759262235,1]
[5.332441248,2.088626775,1]
[6.922596716,1.77106367,1]
[8.675418651,-0.242068655,1]
[7.673756466,3.508563011,1]
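If it helps, here is a minimal sketch of what I mean by the first approach (the weight values below are made up for illustration): with the trailing 1, the bias is just another weight and prediction is a single dot product over the augmented row.

```python
# "Append 1" approach: bias is the last weight, paired with the constant 1.
def predict(row, weights):
    # row is an augmented sample, e.g. [7.627531214, 2.759262235, 1]
    activation = sum(w * x for w, x in zip(weights, row))
    return 1 if activation >= 0.0 else 0

row = [7.627531214, 2.759262235, 1]   # sample augmented with a trailing 1
weights = [0.2, -0.5, 0.3]            # last weight acts as the bias (made-up values)
print(predict(row, weights))          # prints 1 for these values
```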
Instead of appending the value 1 to the training data, can I not add a variable (for example, bias), assign it the value 1, and use it with the weights? For example:
min_weight = 0
max_weight = 5
bias = 1
weights = [bias, min_weight, max_weight]
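In other words, is the following separate-bias version equivalent? The data rows stay un-augmented and the bias is added during activation (again, the numeric values here are only placeholders):

```python
# Separate-bias approach: bias is kept out of the input rows
# and added explicitly when computing the activation.
def predict(row, weights, bias):
    activation = bias + sum(w * x for w, x in zip(weights, row))
    return 1 if activation >= 0.0 else 0

row = [7.627531214, 2.759262235]      # original, un-augmented sample
weights = [0.2, -0.5]                 # one weight per input feature (made-up values)
bias = 0.3                            # updated during training like any other weight
print(predict(row, weights, bias))    # prints 1 for these values
```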
Do we need to implement a learning rate in the perceptron, and if so, can I use the delta rule and the dot product method in the perceptron learning procedure?
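To make the last question concrete, here is a sketch of the update I have in mind, where "delta rule" means weights are moved by learning_rate * (expected - predicted) * input (the dataset in the usage line is a toy one I made up):

```python
# Perceptron training loop with a learning rate and a separate bias.
# Update rule: error = target - prediction; each weight moves by lr * error * x.
def train_weights(data, labels, lr=0.1, epochs=10):
    weights = [0.0] * len(data[0])
    bias = 0.0
    for _ in range(epochs):
        for row, target in zip(data, labels):
            activation = bias + sum(w * x for w, x in zip(weights, row))
            prediction = 1 if activation >= 0.0 else 0
            error = target - prediction
            bias += lr * error
            weights = [w + lr * error * x for w, x in zip(weights, row)]
    return weights, bias

# Toy 1-D dataset: positive inputs labelled 1, negative labelled 0.
w, b = train_weights([[2.0], [-2.0]], [1, 0])
```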