I've gone through all my code, and if the code really is the problem then I'm not sure how it has eluded me. It's too long to post, so instead I'll describe the problem and what I've already tried; if you have any ideas about what else I could look into, I'd be very appreciative!
OK, so firstly: the weights are initialised with mean zero and standard deviation equal to 1/√m, where m is the number of inputs to that neuron (i.e. variance 1/m), as recommended by Haykin.
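In case the initialisation itself is the issue, here's roughly what I'm doing, as a minimal NumPy sketch (the function and variable names are illustrative, not from my actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_inputs, n_neurons):
    # Mean-zero Gaussian with standard deviation 1/sqrt(fan-in)
    # (i.e. variance 1/fan-in), per Haykin's recommendation.
    return rng.normal(0.0, 1.0 / np.sqrt(n_inputs),
                      size=(n_inputs, n_neurons))

hidden_w = init_weights(1, 2)   # 1 input  -> 2 hidden neurons
output_w = init_weights(2, 1)   # 2 hidden -> 1 output neuron
```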
I've fed it a simple sine wave to learn first. The weights in the hidden layer seem to converge so that every neuron in that layer gives the same output, which in turn makes the output neuron produce a nearly constant value.
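This is the kind of quick check I used to spot that, sketched in NumPy with stand-in weights (again, the names are illustrative rather than my real code, and I'm assuming logistic-sigmoid hidden units with no bias term just to keep the sketch short):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 100).reshape(-1, 1)  # sine-wave inputs
hidden_w = rng.normal(0.0, 1.0, size=(1, 2))           # stand-in weights

hidden_out = sigmoid(x @ hidden_w)   # shape (100, 2): one column per neuron
# A value near 0 here means the two hidden neurons are producing
# (almost) the same output for every input, i.e. they've collapsed.
print(np.max(np.abs(hidden_out[:, 0] - hidden_out[:, 1])))
```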
So, what could be the cause? First I checked whether the learning rate was leaving the network stuck in a local minimum, so I increased it, and I also tried with and without momentum. That rectified the problem somewhat: the network DOES now produce a sine wave. However, not a correct one! :(
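For reference, the momentum update I mean is the standard one, v ← αv − η∂E/∂w followed by w ← w + v; a minimal sketch (the eta, alpha, and gradient values here are placeholders):

```python
import numpy as np

def momentum_step(w, grad, velocity, eta=0.1, alpha=0.9):
    # Classic gradient descent with momentum:
    #   v <- alpha * v - eta * dE/dw ;  w <- w + v
    velocity[:] = alpha * velocity - eta * grad  # update velocity in place
    return w + velocity

w = np.array([0.5, -0.3])
v = np.zeros_like(w)
grad = np.array([0.1, -0.2])    # pretend gradient, for illustration only
w = momentum_step(w, grad, v)
```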
The network output has an amplitude of roughly a third of the target's, measured from the centre axis upwards, and it never goes below the axis. It looks as though the sine wave has been picked up, squashed to a third of its height, and raised so that its lowest troughs sit on the axis. On top of that, all the upper peaks are flat...
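One hypothesis that occurred to me while writing this up: if the output neuron uses a bounded activation such as the logistic sigmoid (an assumption on my part; I'm not certain this is the culprit), it can only emit values in (0, 1), which would explain both the clipped troughs and the flat, saturated peaks. In that case the targets would need rescaling into the activation's range, roughly like this:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 100)
y = np.sin(x)                    # raw targets in [-1, 1]

# A logistic-sigmoid output can only reach (0, 1), so map the
# targets into that range before training, and invert afterwards:
y_scaled = (y + 1.0) / 2.0       # now in [0, 1]
y_back   = 2.0 * y_scaled - 1.0  # undo the mapping on predictions
```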
I have since tried changing the network topology: if I add another hidden neuron (three in total), the output suddenly becomes constant again.