
So I've got a function that sums a set of numbers, each given to 1 decimal place, and the result comes out to something like 18 decimal places.

The set of data is:

0.7
0.7
0.2

The function that loops through the data and adds them is this:

def test_function(ind):
    utility = 0
    # N and ind.gene are defined elsewhere; N is the number of genes to sum
    for i in range(N):
        utility = utility + ind.gene[i]
        print(utility)
    return utility

When it adds 0.7 and 0.7, it gets 1.4. This is fine. But then it adds 0.2 to the 1.4 and gets 1.5999999999999999.
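
The same thing happens in a plain Python shell, without the gene class at all, just typing the sums directly:

>>> 0.7 + 0.7
1.4
>>> 0.7 + 0.7 + 0.2
1.5999999999999999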

Why does this happen?

Conor
  • This is a display issue, and the way floats work. To understand, try the following in the python shell:

    print("%100.100f" % (0.7))         ==> 0.6999999999999999555910790149937383830547332763671875000...
    print("%100.100f" % (0.7+0.7))     ==> 1.399999999999999911182158029987476766109466552734375000...
    print("%100.100f" % (0.7+0.7+0.2)) ==> 1.5999999999999998667732370449812151491641998291015625000...

    – Andrew Nov 02 '22 at 13:14
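
Following up on that comment: if the goal is a total that comes back to one decimal place, two common workarounds are exact decimal arithmetic or rounding only at display time. A minimal sketch, where the genes list is a hypothetical stand-in for ind.gene (which isn't shown in the question):

from decimal import Decimal

# hypothetical stand-in for ind.gene; inputs kept as strings so no
# binary rounding error is introduced when converting to Decimal
genes = ["0.7", "0.7", "0.2"]

# exact decimal arithmetic: the sum is exactly Decimal("1.6")
total = sum(Decimal(g) for g in genes)
print(total)  # 1.6

# or keep plain floats and round only when displaying
float_total = 0.7 + 0.7 + 0.2
print(round(float_total, 1))  # 1.6
print(f"{float_total:.1f}")   # 1.6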

0 Answers