
The problem I've encountered after trying to train neural networks isn't a new one: the fitted values I'm getting are all identical. Here's some oversimplified code as an example:

library(neuralnet)

a <- c(123, 223, 234, 226, 60)
b <- c(60, 90, 53, 54, 91)
d <- c(40, 100, 207, 290, 241)
q <- cbind(a, b, d)
nn <- neuralnet(a ~ b + d, data = q, hidden = 2, threshold = 0.01, err.fct = "sse")
nn$net.result

Previous answers I have stumbled upon suggest using nnet instead. I am getting the same results there too, though, unless I set the decay argument to a value other than 0. Rather than blindly using the decay option just because it seems to "work", I would appreciate understanding what goes wrong with my neuralnet model in the first place.
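For reference, here is a sketch of the nnet variant mentioned above. The decay value is an arbitrary placeholder, not a recommendation, and linout = TRUE is assumed since the target is continuous:

```r
library(nnet)

a <- c(123, 223, 234, 226, 60)
b <- c(60, 90, 53, 54, 91)
d <- c(40, 100, 207, 290, 241)
q <- data.frame(a, b, d)

# linout = TRUE: linear output unit for regression.
# decay != 0 adds weight-decay regularization; with decay = 0
# the fitted values collapse to a constant, as described above.
fit <- nnet(a ~ b + d, data = q, size = 2, linout = TRUE, decay = 5e-4)
fit$fitted.values
```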

csgillespie
Nishi

1 Answer


So, after playing around with my original data set using both neuralnet and nnet, I found out what the problem is: the randomly chosen initial weights. The range of values that neuralnet assigns to them leads to this degenerate solution. However, when I tried to use the startweights argument to manually set the starting weights to values I got from nnet (which returned sensible fitted values there), I got an "algorithm did not converge" error. So I guess I will just have to give up on neuralnet's plots and stick with nnet.
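For anyone who wants to repeat the experiment: neuralnet's startweights argument accepts a vector of initial weights. A minimal sketch, where the weight values are placeholders for illustration (not the ones I copied over from nnet):

```r
library(neuralnet)

a <- c(123, 223, 234, 226, 60)
b <- c(60, 90, 53, 54, 91)
d <- c(40, 100, 207, 290, 241)
q <- cbind(a, b, d)

# With 2 inputs, hidden = 2, and 1 output, the network has
# (2 + 1) * 2 + (2 + 1) * 1 = 9 weights (the +1 terms are biases).
w0 <- c(0.1, -0.2, 0.3, -0.1, 0.2, -0.3, 0.1, 0.2, -0.1)

nn <- neuralnet(a ~ b + d, data = q, hidden = 2, threshold = 0.01,
                err.fct = "sse", startweights = w0)
```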

Nishi