  • I read in a book that the bias b_k is used to apply an affine transform to the output u_k (the summation of the weighted input signals).

  • The author also mentioned that because the bias contributes a constant value, say 'k', it can make the neuron behave as if it were not connected to the previous layer.

I am in a confused state. Can someone please tell me what the above two points mean, and if there are any other uses of a bias to the network?

Thanks in advance!

1 Answer


If the neuron's activation is z(a) = wa + b, then b is the bias. It's called a bias because the larger it is, the more the neuron is biased; in other words, it cares less about what was passed to it (a) from the previous layer. I'm assuming the second point refers to the fact that if the bias is large enough (positive or negative), it's as if the neuron no longer cares what is passed to it: it will always pass roughly the same thing to the next layer. I would need to see the passage in context to be certain what the author means, but overall you just need to understand that the bias is a constant that can bias the neuron (make it ignore what the last layer gave it).

Don't fret too much about its implications, though, because the learning (or optimization) process adjusts these values automatically, so you won't have to choose proper bias values for the network yourself. As you become more familiar with the concepts it will start to make more sense.
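Here is a minimal sketch of that idea, using made-up weight and bias values (w, b_small, b_large are illustrative, not from the book): with a small bias the neuron's output still tracks its input, while a very large bias dominates it.

```python
import numpy as np

# A single neuron computing z(a) = w*a + b (hypothetical values for illustration).
w, b_small, b_large = 0.5, 0.1, 100.0

inputs = np.array([-2.0, 0.0, 2.0])  # activations coming from the previous layer

# With a small bias the output still depends on the input:
print(w * inputs + b_small)   # [-0.9  0.1  1.1]

# With a very large bias the input barely matters:
print(w * inputs + b_large)   # [ 99. 100. 101.]
# After a squashing activation such as the sigmoid, all three outputs would
# saturate near 1, so the neuron effectively ignores the previous layer.
```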

Adam Johnston