
In TensorFlow 2.0, I'm trying to build a model that classifies my objects into two categories: positive and negative.

I want to use tf.keras.metrics.FalsePositives and tf.keras.metrics.FalseNegatives metrics to see how the model improves with every epoch. Both of these metrics have assertions stipulating: [predictions must be >= 0] and [predictions must be <= 1].

The problem is that an untrained model can generate an arbitrary number as a prediction. But even a trained model can sometimes produce an output slightly above 1 or slightly below 0.

Is there any way to disable these assertions?

Alternatively, is there any suitable activation function that forces the model outputs into [0, 1] range without causing any problems with the learning rate?

stephen_mugisha
Volodymyr Frolov
  • The ```sigmoid``` activation function is a suitable alternative if outputs must be in the range ```[0, 1]```. – stephen_mugisha Oct 04 '19 at 19:45
  • In my case, technical specification says that I must use `tanh`, so I'm limited to using `tanh` only. – Volodymyr Frolov Oct 04 '19 at 20:01
  • 1
  @stephen_mugisha, but that's a good idea in general. I can add the `sigmoid` layer just for the purpose of validation and then remove it in production. Could you please add it as an answer so that I can accept it? – Volodymyr Frolov Oct 04 '19 at 20:05
  • Sure. Sometimes you may also choose to use other activation functions in your hidden layers but use `sigmoid` for the output layer so that the range of outputs is between 0 and 1. – stephen_mugisha Oct 04 '19 at 20:12
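The idea from the comments above (squashing the model's `tanh` output with a sigmoid only when computing the validation metrics) can be sketched in plain Python. This is an illustration, not code from the thread; the sample values are arbitrary:

```python
import math

def sigmoid(x):
    """Squash any real value into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# tanh outputs lie in (-1, 1), and even predictions that drift
# slightly outside that range are mapped back into (0, 1) by the
# sigmoid, so the metrics' range assertions can no longer fire.
for raw_prediction in (-1.001, -0.5, 0.0, 0.5, 1.001):
    p = sigmoid(raw_prediction)
    assert 0.0 <= p <= 1.0
```

Because the sigmoid is strictly monotonic, it preserves the ordering of predictions, so thresholding the squashed values is equivalent to thresholding the raw ones at the corresponding point.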

1 Answer


The sigmoid activation function is a suitable choice if outputs must be in the range [0, 1], since its output always lies between 0 and 1.
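A minimal sketch of this advice with `tf.keras`, assuming a binary classifier; the layer sizes and the 4-feature input shape are placeholders, not from the question:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    # hidden layers may use any activation (e.g. tanh) ...
    tf.keras.layers.Dense(16, activation="tanh"),
    # ... but a sigmoid on the output layer guarantees predictions
    # in (0, 1), satisfying the FalsePositives/FalseNegatives
    # assertions that predictions are >= 0 and <= 1.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.FalsePositives(),
             tf.keras.metrics.FalseNegatives()],
)
```

With this output layer, even an untrained model can never emit a prediction outside [0, 1], so the metric assertions hold from the first epoch.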

stephen_mugisha