I'm trying to implement backpropagation for a 1-S-1 network (S = number of hidden neurons) to approximate the following function over the input range -2 <= p <= 2:

g(p) = 1 + sin((pi/2) * p)

The approximation should be achieved using NumPy only, with matrix operations. I will set the initial weights and biases to random numbers uniformly distributed between -0.5 and 0.5 to train the network. I will also set the number of neurons to 2 or 10 to observe the difference/improvement in the network. The learning rate is flexible: it can be set to any value that allows convergence.
The transfer function I'm using is sigmoid:
output = 1 / (1 + e^(-activation))
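In NumPy, that transfer function and the derivative that backpropagation needs could be written like this (a minimal sketch; the function names are my own):

```python
import numpy as np

def logsig(n):
    """Sigmoid transfer function, applied element-wise."""
    return 1.0 / (1.0 + np.exp(-n))

def logsig_derivative(a):
    """Sigmoid derivative, written in terms of the sigmoid's output a = logsig(n)."""
    return a * (1.0 - a)

a = logsig(np.array([0.0]))
print(a)                     # sigmoid(0) = 0.5
print(logsig_derivative(a))  # 0.5 * (1 - 0.5) = 0.25
```

Writing the derivative in terms of the output (rather than the activation) is convenient because the forward pass already computed that output.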
I'm still pretty rough on the concept of backpropagation and am trying to learn it by programming from scratch. However, that is extremely difficult while I'm still confused.
I understand that backpropagation updates the weights and biases layer by layer from a set of input/target pairs {(p1,t1),(p2,t2),...}. The weights and biases are then updated through an error estimate such as:
error = (expected - output) * transfer_derivative(output)
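For a single sigmoid neuron, one update step then scales that error by the learning rate and the incoming input. A minimal sketch of one such step (the function and variable names are my own, not from any library):

```python
import numpy as np

def update_weight(w, b, inp, target, lr=0.1):
    # forward pass for one neuron
    output = 1.0 / (1.0 + np.exp(-(w * inp + b)))
    # error term: (expected - output) * transfer_derivative(output)
    error = (target - output) * output * (1.0 - output)
    # move the weight and bias along the error signal
    w_new = w + lr * error * inp
    b_new = b + lr * error
    return w_new, b_new, output
```

Repeating this over every (p, t) pair in the training set is one epoch of (stochastic) gradient descent; after a step, the neuron's output moves slightly closer to the target for that input.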
In the following program, I tried to plot the function approximated by backpropagation, but I do not know how to compute the network's output for the approximated sine graph. Any clarification would be greatly appreciated. The code I have now may be incomplete because I stopped at the weight update, and I would like some help from there on.
I'm just trying to learn this on my own, and it feels like the most difficult concept to grasp so far.
import math
import numpy as np

class backProg:
    def __init__(self, P):
        # one weight and bias per input sample (len(P) == 20 here)
        self.W1 = np.random.uniform(-1/2, 1/2, 20)
        #self.W2 = np.random.uniform(-1/2, 1/2, 20)
        self.B = np.random.uniform(-1/2, 1/2, 20)  # include bias in input

    def transfer_derivative(self, output):
        # sigmoid derivative written in terms of the sigmoid's output
        return output * (1.0 - output)

    def logsig(self, p):
        # allocate separate arrays once, outside the loop
        # (aliasing them or re-zeroing each iteration wipes earlier results)
        w1 = np.zeros(len(self.W1))
        a1 = np.zeros(len(self.W1))
        s2 = np.zeros(len(self.W1))
        for i in range(len(p)):
            x = self.W1[i]*p[i] + self.B[i]
            a1[i] = 1/(1 + np.exp(-x))  # sigmoid needs exp(-x), not exp(x)
            deriv = self.transfer_derivative(a1[i])
            t = 1 + np.sin((math.pi/2)*p[i])
            # Second layer
            s2[i] = -2*deriv*(t - a1[i])
            w1[i] = self.W1[i] - 0.1*s2[i]*a1[i]  # a1
            # First layer
        return w1, t, s2, a1

P = np.arange(-2, 2, 0.2)
backP = backProg(P)
backP.logsig(P)
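For reference, here is one way the complete 1-S-1 forward and backward pass could look with matrix operations. This is a sketch of my understanding, not a definitive implementation: the hidden layer uses the sigmoid, the output layer is linear, and the layer sizes, learning rate, and epoch count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

S = 10                                    # hidden neurons (try 2 vs 10)
lr = 0.1                                  # learning rate
P = np.arange(-2, 2, 0.2).reshape(1, -1)  # inputs, shape (1, 20)
T = 1 + np.sin((np.pi / 2) * P)           # targets, shape (1, 20)

# layer 1: 1 input -> S hidden (sigmoid); layer 2: S hidden -> 1 output (linear)
W1 = rng.uniform(-0.5, 0.5, (S, 1))
B1 = rng.uniform(-0.5, 0.5, (S, 1))
W2 = rng.uniform(-0.5, 0.5, (1, S))
B2 = rng.uniform(-0.5, 0.5, (1, 1))

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

for epoch in range(20000):
    # forward pass over the whole batch
    A1 = logsig(W1 @ P + B1)   # hidden activations, (S, 20)
    A2 = W2 @ A1 + B2          # network output, (1, 20)
    E = T - A2                 # error

    # backward pass: output layer is linear, so its sensitivity is -2 * error
    S2 = -2.0 * E                          # (1, 20)
    S1 = (A1 * (1.0 - A1)) * (W2.T @ S2)   # propagate back through sigmoid, (S, 20)

    # gradient-descent updates, averaged over the batch
    n = P.shape[1]
    W2 -= lr * (S2 @ A1.T) / n
    B2 -= lr * S2.mean(axis=1, keepdims=True)
    W1 -= lr * (S1 @ P.T) / n
    B1 -= lr * S1.mean(axis=1, keepdims=True)

# the approximated output is just another forward pass after training
A2 = W2 @ logsig(W1 @ P + B1) + B2
print(np.abs(T - A2).mean())  # mean absolute error after training
```

The key point for plotting the approximation is the last two lines: once the weights are trained, run the forward pass over the input grid and plot A2 against P alongside the target curve.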
This is the sine graph I'm trying to approximate:

import math
import numpy as np
import matplotlib.pyplot as plt

p = np.arange(-2, 2, 0.2)
t = 1 + np.sin((math.pi/2)*p)
plt.plot(p, t, 'C0')
plt.show()