Just a general question about what sort of runtime difference I should expect between Python's built-in float and decimal.Decimal.
My test:
import decimal

test = [100.0897463, 1.099999939393, 1.37382829829393, 29.1937462874847272, 2.095478262874647474]
test2 = [decimal.Decimal('100.0897463'), decimal.Decimal('1.09999993939'), decimal.Decimal('1.37382829829'), decimal.Decimal('29.1937462875'), decimal.Decimal('2.09547826287')]

def average(numbers, ddof=0):
    return sum(numbers) / (len(numbers) - ddof)
%timeit average(test)
%timeit average(test2)
The runtimes are:
1000000 loops, best of 3: 364 ns per loop
10000 loops, best of 3: 80.3 µs per loop
So using Decimal was about 200 times slower than using floats. Is a difference of this magnitude normal, and is it roughly what I should expect when deciding which data type to use?
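For reference, here is a rough stand-alone version of the same comparison using the stdlib timeit module, in case anyone wants to reproduce it outside IPython. The variable names and loop count are just placeholders I picked, the Decimal list is built with str(x) rather than the hand-typed strings above, and the exact numbers will of course depend on the machine and Python version:

import decimal
import timeit

floats = [100.0897463, 1.099999939393, 1.37382829829393,
          29.1937462874847272, 2.095478262874647474]
# Build the Decimal list from the same values; the digits differ slightly
# from the hand-typed strings in the question, but the cost is comparable.
decimals = [decimal.Decimal(str(x)) for x in floats]

def average(numbers, ddof=0):
    return sum(numbers) / (len(numbers) - ddof)

n = 100_000  # number of calls to time; chosen arbitrarily
float_secs = timeit.timeit(lambda: average(floats), number=n)
dec_secs = timeit.timeit(lambda: average(decimals), number=n)

print(f"float   : {float_secs / n * 1e9:.0f} ns per call")
print(f"Decimal : {dec_secs / n * 1e6:.2f} µs per call")
print(f"ratio   : {dec_secs / float_secs:.0f}x")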