I'm wondering how the accuracy metric in TensorFlow/Keras calculates whether a given output matches the expected label, or, in other words, how it determines the network's predicted class.
Example 1:
Output: `[0, 0, 0.6]`, expected output: `[0, 0, 1]`
I assume the 0.6 is simply rounded to 1, correct? Or is it treated as the only number greater than 0.5 and hence as the predicted class?
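A quick empirical check I could run (just my own sketch, assuming the stand-alone metric classes behave the same way as `metrics=['accuracy']` during training) would be to feed the example to `tf.keras.metrics.BinaryAccuracy`, which thresholds at 0.5, and to `tf.keras.metrics.CategoricalAccuracy`, which compares argmax positions:

```python
import tensorflow as tf

y_true = [[0, 0, 1]]
y_pred = [[0.0, 0.0, 0.6]]

# BinaryAccuracy thresholds each entry at 0.5 and compares element-wise.
bin_acc = tf.keras.metrics.BinaryAccuracy()
bin_acc.update_state(y_true, y_pred)
print("BinaryAccuracy:", bin_acc.result().numpy())

# CategoricalAccuracy compares the argmax of prediction and label.
cat_acc = tf.keras.metrics.CategoricalAccuracy()
cat_acc.update_state(y_true, y_pred)
print("CategoricalAccuracy:", cat_acc.result().numpy())
```

But this doesn't tell me which of these (if either) is what the plain `accuracy` metric actually does.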
But, if so, then consider Example 2:
Output: `[0.6, 2, 0.1]`, expected output: `[1, 0, 0]`
I know such an output is not possible with softmax, which would be the default choice here, but it would be possible with other activation functions. Is now simply the greatest number "extracted" and taken as the prediction? That would be the 2, which would make this a false prediction.
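The same kind of check for Example 2 (again only a sketch for probing the behaviour; the argmax comparison is what `CategoricalAccuracy` does, but I don't know whether that is the metric actually selected when I pass `metrics=['accuracy']`):

```python
import tensorflow as tf

y_true = [[1, 0, 0]]
y_pred = [[0.6, 2.0, 0.1]]  # not a softmax output, but possible with other activations

# CategoricalAccuracy only looks at the position of the maximum value,
# so the absolute size of the 2 should not matter by itself.
cat_acc = tf.keras.metrics.CategoricalAccuracy()
cat_acc.update_state(y_true, y_pred)
print(cat_acc.result().numpy())
```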
Example 3:
Output: `[0.1, 0, 0.2]`, expected output: `[0, 0, 1]`
Since every number in the output is less than 0.5, I'd guess that the accuracy calculator sees this output as `[0, 0, 0]`, so it is also not a correct prediction. Is that correct?
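For this case the two metric classes from above could disagree, so checking both seems worthwhile (still just a sketch under the assumption that one of them is what `accuracy` resolves to):

```python
import tensorflow as tf

y_true = [[0, 0, 1]]
y_pred = [[0.1, 0.0, 0.2]]

# Thresholding at 0.5 would turn the prediction into [0, 0, 0].
bin_acc = tf.keras.metrics.BinaryAccuracy()
bin_acc.update_state(y_true, y_pred)
print("BinaryAccuracy:", bin_acc.result().numpy())

# An argmax comparison would ignore the 0.5 threshold entirely.
cat_acc = tf.keras.metrics.CategoricalAccuracy()
cat_acc.update_state(y_true, y_pred)
print("CategoricalAccuracy:", cat_acc.result().numpy())
```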
If my preceding assumptions are correct, would the rule then be as follows? Every number less than 0.5 counts as a 0 in terms of prediction, and from the numbers greater than or equal to 0.5 I choose the greatest one; that greatest one then represents the predicted class. (See the sketch below for how I would write this rule in code.)
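Written out in code, the rule I have in mind would look roughly like this (my own formulation, not anything taken from the Keras source):

```python
import numpy as np

def hypothesized_prediction(output):
    """Return the predicted class index under my assumed rule,
    or None if every value falls below the 0.5 threshold."""
    output = np.asarray(output, dtype=float)
    above = output >= 0.5              # values at or above the threshold
    if not above.any():
        return None                    # everything rounds to 0 -> no predicted class
    # Pick the greatest value among those at or above the threshold.
    return int(np.argmax(np.where(above, output, -np.inf)))

print(hypothesized_prediction([0, 0, 0.6]))    # Example 1
print(hypothesized_prediction([0.6, 2, 0.1]))  # Example 2
print(hypothesized_prediction([0.1, 0, 0.2]))  # Example 3
```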
If that is the case, can accuracy only be used for classifications with exactly one correct class per sample (so that, for example, an expected output like `[1, 0, 1]` is not possible)?