When I run the following code in the interactive interpreter versus from a script, I get different results, and I'm trying to understand why. Initially I thought it was an issue with the numpy.mean() function.
import numpy as np

num_friends = [29, 74, 41, 29, 81, 47, 48, 69, 33, 17, 13, 65, 14, 71, 76, 41, 22, 11, 57, 38, 78, 30, 53, 82, 59, 89, 57, 70, 16, 44, 75, 48, 35, 49, 12, 97, 85, 16, 85, 55, 64, 59, 94, 79, 91, 65, 12, 56, 33, 33, 33, 79, 46, 30, 51, 90, 84, 79, 11, 48, 56, 90, 45, 99, 57, 64, 35, 56, 84, 45, 69, 42, 56, 33, 31, 98, 97, 12, 10, 85, 96, 83, 16, 55, 36, 10, 52, 44, 43, 56, 27, 23, 95, 25, 44, 38, 17, 94, 97, 25]
print(np.sum(num_friends) / len(num_friends))
print(np.mean(num_friends))
I generated this list randomly, and it's the only dataset I've been able to reproduce the discrepancy with. When I regenerated the list with a fresh random.randrange(10, 100) run, the issue disappeared.
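For reference, this is roughly how I built the list (I didn't keep a seed, so it won't reproduce the exact values above):

import random

# Roughly how the list above was generated: 100 random integers in [10, 99].
# No seed was saved, so each run produces a different list.
num_friends = [random.randrange(10, 100) for _ in range(100)]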
Results in the interpreter:
52.880000000000003
Results using a .py file:
52.88
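To check whether the two values actually differ in memory or only in how they're displayed, I also compared them directly (just a sanity check; the variable names are mine):

import numpy as np

# Compute both values, compare them as floats, and print each one
# with 17 significant digits, enough to show the exact stored double.
a = np.sum(num_friends) / len(num_friends)
b = np.mean(num_friends)
print(a == b)             # tests whether they are the same float
print(format(a, '.17g'))
print(format(b, '.17g'))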
I get the same odd results with SciPy. Thoughts?