
Hello, I have a question regarding my neural network. I use a two-layer network in MATLAB, with a tansig activation in the first layer and purelin in the second. I have 4 inputs and one output, and I trained the network with 25 neurons in layer 1 and 1 neuron in layer 2. Here is a schematic view of my system:

[schematic of the network]

There is good agreement between my ANN outputs and my targets:

[plot of ANN outputs vs. targets]

But after training, when I try to write out the network's equation by hand, something strange happens.

I read the weights and biases of the network as follows:

    b1 = net.b{1}      % first-layer bias (25x1)
    b2 = net.b{2}      % second-layer bias (1x1)
    IW = net.IW{1,1}   % first-layer (input) weights (25x4)
    LW = net.LW{2,1}   % second-layer weights (1x25)

And when I use this formula:

    Y = b2 + LW*tanh(IW*X + b1)

(where X is my input), there is a large difference between my measured data and the results of this formula; the outputs I get are something like 1000 times larger than they should be.
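
In MATLAB terms, the evaluation being attempted is the following sketch (variable names as extracted above; shapes noted in comments):

    % Shapes: IW is 25x4, b1 is 25x1, LW is 1x25, b2 is a scalar, X is 4x1.
    % Note: tansig(n) in the toolbox is mathematically equivalent to tanh(n).
    Y = b2 + LW*tanh(IW*X + b1);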

Can anyone help me with this? Is this formula correct?

Edit: Here are my weight and bias matrices:

% first-layer weights, IW (25x4):
111.993     48.59439    1.604747    -0.76531
10.8245     -107.793    173.7258    -123.2
-149.48     -105.421    102.6071    -79.2133
226.2926    -621.099    2440.947    776.9815
-66.5572    -116.593    -121.067    -55.3703
-9.1293     1.251525    2.716534    -0.3201
0.786728    -1.81314    -5.21815    1.619898
71.98442    -3.67078    -17.8482    5.911515
6.986454    -36.9755    -21.4408    1.50746
0.654341    -5.25562    10.34482    4.589759
0.304266    1.645312    5.004313    -1.51475
0.721048    -0.02945    -0.09663    -0.0004
60.96135    1.182228    4.733361    -0.40264
-1.58998    1.920395    5.533581    -1.71799
0.410825    0.056715    4.564767    -0.1073
2.298645    9.923646    82.42445    -8.89119
-2.46618    -1.59946    -3.41954    -2.68133
0.089749    -1.69783    -5.02845    1.541547
3.516938    0.358534    -10.9719    -0.33401
2.392706    -1.99236    -5.89471    1.815673
1.605158    4.174882    4.227769    -3.14685
-25.2093    -1.68014    -5.249      1.163255
52.30716    -67.498     87.13013    29.61436
9.195869    2.328432    -7.59047    -1.42355
3.120973    1.266733    8.182079    0.365206

% first-layer biases, b1 (25x1):
47.07941005
-49.66890557
80.2745463
1251.640193
-228.1521936
-2.905218293
-2.802770641
52.59183653
-50.96014049
7.931255305
3.652513556
-0.125595632
40.47792689
2.343663068
1.712611331
67.61242524
-6.124504787
-3.288283849
-4.752833412
-1.921129618
6.818490015
-6.481531096
5.056644951
1.984717285
7.050001634

% second-layer weights, LW (1x25):
-97.96825145    122.9670213     -124.5566672    -0.046986176    -0.021205068    -5.990650681    1850.804141     2.964275181     -0.333027823    -0.070420859    -583.478589     -68.16211954    12.59658596     1257.471165     -138.4739514    0.07786882      0.238340539     -1546.523224    -2.751936791    363.5612929 -0.152249267    -20.71141251    0.094593198     -0.473042306    5.533999251

% second-layer bias, b2 (1x1):
21.92849

For example, when I put X = [8; 400; 5; 9.5] I expect to get y = 20.8, but using the formula

    y = b2 + LW*tanh(IW*X + b1)

I get y = -111, which is strange.
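
A minimal script for that check (a sketch; IW, b1, LW, and b2 hold the matrices listed above):

    X = [8; 400; 5; 9.5];          % raw, un-normalised input
    y = b2 + LW*tanh(IW*X + b1)    % returns roughly -111 instead of 20.8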

  • Could you give the Matlab code you used to evaluate the formula? It looks like you have put pseudo-code above, so it is hard to tell what might be wrong. The nature of the multiplications is important. If you could also give one X, the weights, and the expected value of Y for that, it would help people check their answers before posting – Neil Slater Nov 25 '14 at 10:25
  • I replicated your result of -111.89 for y given your inputs and formula. – Neil Slater Nov 26 '14 at 13:37
  • Are you perhaps normalising your training data prior to training the NN? If so, you also need to alter `X` using the same values extracted from the training set (e.g. if normalised to mean 0, sd 1, you need to use the mean and sd from the training set) before using the network to make a prediction. – Neil Slater Nov 26 '14 at 13:43
  • 1
  • Yep, it was the normalization... thanks. – V_shr Dec 06 '14 at 03:43
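
Following up on the normalisation point in the comments: below is a hedged sketch of evaluating the network by hand with the toolbox's default `mapminmax` input/output processing applied. The `processSettings{end}` indexing assumes `mapminmax` is the last entry in the default `processFcns`; check `net.inputs{1}.processFcns` on your own network.

    % Sketch: assumes the toolbox's default mapminmax pre/post-processing.
    psIn  = net.inputs{1}.processSettings{end};    % input normalisation settings
    psOut = net.outputs{2}.processSettings{end};   % output normalisation settings

    Xn = mapminmax('apply', X, psIn);      % normalise the raw input
    yn = b2 + LW*tanh(IW*Xn + b1);         % run the two-layer net by hand
    y  = mapminmax('reverse', yn, psOut);  % map back to original units

Comparing `y` against `sim(net, X)` is a quick way to confirm the manual computation matches the toolbox's own evaluation.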
