
I want to make a neural network that has one single output neuron on the last layer that tells me the probability of there being a car on an image (the probability going from 0 - 1).

I'm used to making neural networks for classification problems with multiple output neurons, using the tf.nn.softmax_cross_entropy_with_logits() and tf.nn.softmax() methods. But these methods don't work when the labels matrix has only one column per sample, since softmax() over a single value always returns 1.
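To see why a single-output softmax is useless, here is a minimal pure-Python softmax (the same math tf.nn.softmax computes per row). With one logit, the exponential cancels against itself, so the result is exactly 1 no matter what the logit is:

```python
import math

def softmax(logits):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# With a single logit, softmax is always exactly 1.0,
# regardless of the logit's value:
print(softmax([-3.7]))  # [1.0]
print(softmax([42.0]))  # [1.0]

# With several logits it behaves as expected (sums to 1):
print(softmax([1.0, 2.0, 3.0]))
```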

I tried replacing tf.nn.softmax() with tf.nn.sigmoid() and tf.nn.softmax_cross_entropy_with_logits() with tf.nn.sigmoid_cross_entropy_with_logits(), but that gave me some weird results; I may have made a mistake in the implementation.

How should I define the loss and the predictions on a neural network with only one output neuron?

  • Softmax is for multi-class problems; sigmoid is for binary classification - https://stackoverflow.com/q/47034888/712995 – Maxim Dec 28 '17 at 07:56

1 Answer


Just use a sigmoid as the final layer. With a single output neuron there's no need for softmax cross-entropy; let the loss function work directly on the sigmoid output, which is already limited to the [0, 1] range you want.
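For concreteness, here is a pure-Python sketch of the two pieces the question asks about: the prediction (sigmoid of the single logit) and the loss (the numerically stable formulation that tf.nn.sigmoid_cross_entropy_with_logits documents: max(x, 0) - x*z + log(1 + exp(-|x|))). This is a hand-rolled illustration of the math, not the TensorFlow source:

```python
import math

def sigmoid(x):
    # Prediction: probability of "car" from the single output logit.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_cross_entropy_with_logits(logit, label):
    # Numerically stable form used by TensorFlow's
    # tf.nn.sigmoid_cross_entropy_with_logits:
    #   max(x, 0) - x * z + log(1 + exp(-|x|))
    return max(logit, 0.0) - logit * label + math.log1p(math.exp(-abs(logit)))

logit = 2.0
prob = sigmoid(logit)                                # prediction in (0, 1)
loss = sigmoid_cross_entropy_with_logits(logit, 1.0) # label 1 = "car present"

# Sanity check: the stable form matches the naive binary cross-entropy
# -(z*log(p) + (1-z)*log(1-p)) for this prediction:
naive = -(1.0 * math.log(prob) + 0.0 * math.log(1.0 - prob))
print(abs(loss - naive) < 1e-9)  # True
```

Note that the loss is computed from the raw logit, not from the sigmoid output; applying sigmoid before the loss and then using a naive log can underflow for large-magnitude logits, which is one common source of the "weird results" the question mentions.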

alkanen