
I am trying to use a neural network to solve a problem. I learned about them from the Machine Learning course offered on Coursera, and was happy to find ruby-fann, a Ruby binding for the FANN neural-network library, so I didn't have to re-invent the airplane.

However, based on what I learned from the class, I'm not really understanding why FANN is giving me such strange output.

I have a set of training data consisting of match results: the player has a numeric rating, their opponent has a numeric rating, and the result is 1 for a win and 0 for a loss. The data is a little noisy because of upsets, but not terribly so. My goal is to find which rating gaps are more prone to upsets; for instance, my intuition says that lower-rated matches tend to involve more upsets because the ratings there are less accurate.

So I got a training set of about 100 examples. Each example is (rating, delta) => 1/0, where delta is the rating gap. So it's a classification problem, but not really one that I think lends itself to logistic regression, and a neural network seemed more appropriate.

My code begins

require 'ruby-fann'
training_data = RubyFann::TrainData.new(:inputs => inputs, :desired_outputs => outputs)
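
Here inputs is an array of [rating, delta] pairs and outputs is an array of one-element arrays; the values below are just made-up illustrations of the shape:

inputs  = [[1850, 120], [1430, 45], [2010, 300]]  # [rating, delta], illustrative values
outputs = [[1], [0], [1]]                         # 1 = win, 0 = loss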

I then set up the neural network with

network = RubyFann::Standard.new(
  :num_inputs=>2, 
  :hidden_neurons=>[8, 8, 8, 8], 
  :num_outputs=>1)

In the class, I learned that a reasonable default is to give every hidden layer the same number of units. Since I don't really know how to tune this or what I'm doing yet, I went with that default.

network.train_on_data(training_data, 1000, 1, 0.15) # (data, max_epochs, epochs_between_reports, desired_error)

And then finally, I went through a set of sample input ratings in increments and, at each increment, increased delta until the output crossed from > 0.5 to < 0.5, which I took to mean roughly 1 and roughly 0, although the actual values were more like 0.55 and 0.45.
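
The sweep looked roughly like this (the ranges and step sizes here are illustrative, not my exact values):

(1000..2400).step(200) do |rating|
  crossover = nil
  (0..500).step(10) do |delta|
    # run returns an array holding the single output neuron's value
    if network.run([rating, delta]).first < 0.5
      crossover = delta
      break
    end
  end
  puts "rating #{rating}: output drops below 0.5 at delta #{crossover.inspect}"
end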

When I first ran this, it gave me 0 for every input. I ran it twice more with the same data and got a decreasing trend of negative numbers in one run and an increasing trend of positive numbers in the other: completely opposite predictions.

I thought maybe I wasn't including enough features, so I added rating**2 and delta**2 as two extra inputs, as sketched below. Unfortunately, I then started getting either my starting delta or my maximum delta for every rating, every time.
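
That change looked something like this (the map over inputs is illustrative):

inputs = inputs.map { |rating, delta| [rating, delta, rating**2, delta**2] }

network = RubyFann::Standard.new(
  :num_inputs => 4,
  :hidden_neurons => [8, 8, 8, 8],
  :num_outputs => 1)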

I don't really understand why I'm getting such divergent results or what ruby-fann is telling me, partly because I don't understand the library, but also, I suspect, because I've just started learning about neural networks and am missing something big and obvious. Do I not have enough training data? Do I need to include more features? What is the problem, and how can I either fix it or learn how to do things better?

Andrew Latham
  • I don't know much about ML, but biased samples cause biased results; if most of your sample data is wins, your model will likely predict a win. – iouri Oct 07 '12 at 21:23
  • It's more wins than losses, but why are the results so different in different runs? – Andrew Latham Oct 07 '12 at 22:10

1 Answer


What about playing a little with the parameters? First, I would highly recommend only two layers; there is a mathematical result (the universal approximation theorem) that a single hidden layer is enough for many problems. If you have too many neurons, your NN will not have enough epochs to really learn anything, so you can also play with the number of epochs as well as the learning rate (gamma). Note that the 0.15 in your train_on_data call is actually the desired mean-squared error, not the learning rate; in FANN the learning rate is a separate parameter. If you use a somewhat bigger learning rate, your NN should learn a little faster (don't be afraid to try 0.3 or even 0.7); the right value usually depends on the weight intervals and on input normalization (see the sketch below).
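
In ruby-fann terms, the changes I am suggesting would look something like this (I have not used this library, so the learning-rate setter below is only an assumption; check the gem's documentation):

network = RubyFann::Standard.new(
  :num_inputs => 2,
  :hidden_neurons => [4],  # start with one small hidden layer
  :num_outputs => 1)

# Assumption: ruby-fann may expose FANN's fann_set_learning_rate as a
# Ruby setter; if it does, raising gamma would look like this:
# network.learning_rate = 0.7

# Normalize the inputs to roughly [0, 1] first (e.g. rating / 2500.0),
# then train longer with a stricter target error:
network.train_on_data(training_data, 10000, 100, 0.01)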

Your NN most probably shows such different results because each run starts from a new random weight initialization, so you are effectively training a totally different network each time, and it learns in a different way than the previous one (different weights end up with higher values, so different parts of the NN learn the same things).

I am not familiar with this library; I am just passing on some experience with NNs. I hope some of it helps.

LadyWoodi