I'm running a function in Python 3 that counts values as floats, but when those values are divided the result comes out as 0, as if they were integers.
Function:
def precision_recall(predictions, results):
    # values initiated as floats for later division
    tp, fp, fn, tn, i = 0.0, 0.0, 0.0, 0.0, 0
    while i < len(results):
        if predictions[i] == 1 and results[i] == 1:
            tp = tp + 1
        elif predictions[i] == 1 and results[i] == 0:
            fp = fp + 1
        elif predictions[i] == 0 and results[i] == 0:
            tn = tn + 1
        else:
            fn = fn + 1
        i = i + 1
    precision = tp / (tp + fp)
    recall = tn / (tn + fn)
    f1 = precision * recall / (precision + recall)
    print("Precision: %d, Recall: %d, f1: %d" % (precision, recall, f1))
    print(tp, fp, fn, tn)
The variables predictions and results are lists filled with values of 0 or 1.
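For reference, this is roughly how the function gets called (toy lists here, not my actual data, which comes from an sklearn classifier):

# hypothetical minimal call, just to show the shape of the inputs
predictions = [1, 0, 1, 1, 0]
results = [1, 0, 0, 1, 1]
precision_recall(predictions, results)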
This function prints the following output:
Precision: 0, Recall: 0, f1: 0
(113.0, 34.0, 23.0, 187.0)
I.e., the division is coming out as 0, even though the values used in it are floats.
Furthermore, if I run the same formula for precision by hand in the interpreter, everything works fine:
113.0 / (113.0 + 34.0)
Out[25]: 0.7687074829931972
I'm writing this script with numpy, pandas, and sklearn, but I don't believe any of them should change how this operation behaves.
I'm also using Python 3, and my understanding is that the / operator always performs true (float) division there.
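As far as I understand it, a bare Python 3 session (no imports) behaves like this:

>>> 113 / 147     # true division, even with integer operands
0.7687074829931972
>>> 113 // 147    # only floor division gives 0 here
0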
Likewise, if I make this modification:
precision = tp / float(tp + fp)
I get the same results.
I've tried
from __future__ import division
and this hasn't changed anything either.
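For completeness, this is the kind of sanity check I mean (a quick standalone snippet, not my actual script):

from __future__ import division   # accepted on Python 3, though true division is already the default
print(113.0 / (113.0 + 34.0))     # prints 0.7687074829931972, as expected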
Does anybody know what could be wrong?
None of the answers to the similar questions I've found work for me either.