
I'm a total beginner with ANNs. I understand the concept and all, but I haven't found a straightforward explanation of why the input is a series of 0s and 1s and the output is also a series of 0s and 1s.

I read here on Neural networks - input values that you can encode the input with a data normalization function so that it's converted to a number between 0 and 1.

Is this the case or am I misunderstanding things?

Also, could you point me in the right direction regarding which article or lecture material I should pick up to clear things up?

tudor balus
  • I don't think that output is always binary - for example you can use some multilayer perceptron with softmax on output layer to classify with some probability. Same goes with inputs - nothing disallows us to use real values here. – Filip Malczak Jul 31 '15 at 07:53

2 Answers


I'm just relearning nets now, and asked a similar question.

It's hard to know what your exact scenario is, but for me, the activations were always in the range 0-1 because my activation function was the sigmoid function, which always outputs values between 0 and 1 (although you'll need to ask a math-oriented person why that is).

Say you're using a simple step function as your activation function instead. That will likewise logically take only a 0 or a 1 (nothing in between), and will also output a 0 or a 1.
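To make the two cases concrete, here is a minimal sketch of both activation functions in plain Java (a hypothetical helper class for illustration, not taken from any library):

```java
public class Activations {
    // Sigmoid squashes any real input into the open interval (0, 1).
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // A step function outputs exactly 0 or 1, nothing in between.
    static double step(double x) {
        return x >= 0 ? 1.0 : 0.0;
    }

    public static void main(String[] args) {
        System.out.println(sigmoid(0.0)); // 0.5, the midpoint
        System.out.println(step(-3.0));   // 0.0
    }
}
```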

So the answer seems to be: the range of activations in a net is defined by the activation function used.

My similar question.

Carcigenicate
  • I'd have 4 input values from 0 to 999 each and I'd probably want to have an output value a string or number that's again between 0 and 999. – tudor balus Jul 27 '15 at 11:12
  • @Tudor I have yet to try ways around this. I'm actually currently writing a net that is based on graphs, so hopefully I can test it soon. My plan when I got to that point was just to scale the numbers. If you divided your input by 1000, it would be in range. Then you would just need to multiply by 1000 on the output end. – Carcigenicate Jul 27 '15 at 11:29
  • Thank you! Will definitely try this when I get home. – tudor balus Jul 27 '15 at 12:41
  • @tudorbalus Np. Tell me how it works out. My net is getting out of control, so I might need to do a refactor. I'm curious if scaling the input/output would work, but I won't be able to test it any time soon. – Carcigenicate Jul 27 '15 at 12:46
  • Worked like a charm! Thank you very much! – tudor balus Jul 27 '15 at 16:58
  • @tudorbalus Excellent. Good to know for future reference, and good to hear. If you think my post helped you, would you mind accepting it? Unless you wanted to wait for another potential answer. – Carcigenicate Jul 27 '15 at 17:27
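The scaling trick discussed in the comments above can be sketched like this (a hypothetical helper in plain Java, with `MAX` chosen for the 0-999 example; not tied to any library):

```java
public class Scaler {
    static final double MAX = 1000.0;

    // Divide raw inputs (0-999) down into the net's 0-1 range.
    static double toNetRange(double raw) {
        return raw / MAX;
    }

    // Multiply the net's 0-1 output back up to the original range.
    static double fromNetRange(double out) {
        return out * MAX;
    }

    public static void main(String[] args) {
        double scaled = toNetRange(750.0); // 0.75
        // ... feed `scaled` to the network, read an output back ...
        System.out.println(fromNetRange(scaled)); // 750.0
    }
}
```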

As you mentioned yourself, you can encode everything you put into the ANN as values between 0 and 1. For an easy entry into Java and ANNs you can find a lot of libraries. For example: NEUROPH
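The general form of that encoding is min-max normalization; here is a plain-Java sketch (just an illustration, not Neuroph's own API):

```java
public class MinMax {
    // Map each value from [min, max] linearly onto [0, 1].
    static double[] normalize(double[] values, double min, double max) {
        double[] out = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = (values[i] - min) / (max - min);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] scaled = normalize(new double[]{0, 500, 999}, 0, 999);
        System.out.println(scaled[0] + " " + scaled[2]); // 0.0 1.0
    }
}
```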

A neuron fires at a threshold, which is normally between 0 and 1.

So use this library, play around with a simple net, and read some basic literature.

For example: This paper

Am_I_Helpful
MrT
  • Actually, this whole question stems from the fact that I saw this limitation in the Neuroph API and then everywhere around the net. :D Ok, will take a look at the paper and try what Carcigenicate suggested: scaling down by dividing by 1000. Thanks for the suggestion! – tudor balus Jul 27 '15 at 12:39