0

I'm trying to implement Backprogation for a 1-S-1 (S=number of neurons) network to approximate the following function within an input (p) limit :

enter image description here

The approximation should be achieved by using Numpy only with matrix operation. I will set the initial weights and biases to be random numbers uniformly distributed between -0.5 and 0.5 to train the network. I will also need to set neurons to 2 or 10 to observe the difference/improvement of the network. Learning rate will also be flexible. It can be set to any number that allows for convergency.

The transfer function I'm using is sigmoid:

output = 1 / (1 + e^(-activation))

I'm still pretty rough on understanding the concept of backpropagation and is trying to learn by programming from scratch. However, it is extremely difficult when I'm still confused.

I understand that backpropagation updates weight and bias by layers from sets of output and targets. {(p1,t1),(p2,t2)....}. The weight and bias are then updated through error estimation such as:

error = (expected - output) * transfer_derivative(output)

In the following program, I tried to plot the approximated function by backpropagation, but I do not know how to calculate for the approximated output for the approximated sine graph. Any clarification would be extremely appreciated. The code I have now maybe incomplete because I stopped at weight propagation, and would like some help from there on.

Just trying to learn it on my own and feels like this is the most difficult concept to grasp at the moment.

class backProg:
def __init__(self,P):
    self.W1 = np.random.uniform(-1/2, 1/2, 20)

    #self.W2 = np.random.uniform(-1/2, 1/2, 20)
    self.B = np.random.uniform(-1/2, 1/2, 20)#include bias in input

def transfer_derivative(self, output):
    deriv1 = output * (1.0 - output)
    return deriv1

def logsig(self, p):
    for i in range(len(p)):
        w1 = np.zeros(len(self.W1))
        a1 = w1
        s2 = w1
        x = self.W1[i]*p[i]+self.B[i]
        a1[i] = 1/(1+np.exp(x))
        deriv = self.transfer_derivative(a1)
        t = 1 + np.sin((math.pi/2)*p[i])
        #Second layer
        s2[i] = -2*deriv*(t-a1[i])
        w1[i] = self.W1[i] - 0.1*s2[i]*a1[i] #a1
    #First layer
        return w1, t, s2,a1

    P = np.arange(-2,2,0.2)

    backP = backProg(P)
    backP.logsig(P)

Trying to approximate this sine graph:

 p = np.arange(-2,2,0.2)

 t = 1 + np.sin((math.pi/2)*p)

 plt.plot(p,t,'C0')
lydias
  • 841
  • 1
  • 14
  • 32
  • 1
    What's the specific question? – Mad Physicist Jul 30 '18 at 03:32
  • @MadPhysicist I'm trying to approximate 1+sin(pi/2)*p and to plot the approximated graph on 1+sin(pi/2)*p. With initial conditions such as n=2 and 10. – lydias Jul 30 '18 at 03:36
  • Again, what is the question? You told us what you are trying to do, not what we asked, such as a specific problem or issue. – Dr. Snoopy Jul 30 '18 at 10:02
  • I tried running your code. Firstly: these three lines `P = np.arange(-2,2,0.2)`, `backP = backProg(P)` and `backP.logsig(P)` should be outside the `class` (formatting issue). The problem comes in the following line `s2[i] = -2*deriv*(t-a1[i])` where the `t` and `a1[i]` are of type float but `deriv` is a `` with a length of `20`. So you are assigning a space for single number a whole array of values. Hence the error I got on running your code is `ValueError: setting an array element with a sequence`. – Sheldore Jul 30 '18 at 10:11
  • Read here for more on this error: [https://stackoverflow.com/questions/4674473/valueerror-setting-an-array-element-with-a-sequence] – Sheldore Jul 30 '18 at 10:13

0 Answers0