
tl;dr: why does x * 0.1 produce different output than x / 10?

I am learning Python and created a function that iterates over 2 lists with step = 0.1 and searches for the smallest error from the given function.

Full code here >> https://github.com/alexkis337/test_projects/blob/main/math_func.py

So with code

possible_ms = [x / 10 for x in range(-100, 101)]
possible_bs = [x / 10 for x in range(-200, 201)]

datapoints = [(1, 2), (2, 0), (3, 4), (4, 4), (5, 3)]

def smallest_error(ms, bs, datapoints):
    smallest_error = float('inf')
    best_m = 0
    best_b = 0
    for m in ms:
        for b in bs:
            error = calculate_all_error(m, b, datapoints)  # defined in the linked file
            if error < smallest_error:
                smallest_error = error
                best_b = b
                best_m = m
    print(f'err is {smallest_error}, m is {best_m}, b is {best_b}')
    return smallest_error

print(smallest_error(possible_ms, possible_bs, datapoints))

I got this output: err is 5.0, m is 0.4, b is 1.6

and after changing the first 2 lines to

possible_ms = [x * 0.1 for x in range(-100, 101)]
possible_bs = [x * 0.1 for x in range(-200, 201)]

The output was: err is 4.999999999999999, m is 0.30000000000000004, b is 1.7000000000000002

Why does this happen if both versions produce floats?

Changing x * 0.1 back to x / 10 helped, but I expected the same outputs from both.
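A minimal sketch of the difference (plain CPython, no project code assumed): x / 10 rounds once, giving the float closest to the true quotient, while x * 0.1 rounds twice: 0.1 is already a binary approximation, and the multiplication rounds again, which can land on a neighbouring float:

```python
# x / 10: one rounding step -- the result is the double closest to x/10.
# x * 0.1: two rounding steps -- the literal 0.1 is already inexact in
# binary, and the product is rounded again, sometimes to a different double.
for x in range(1, 6):
    div = x / 10
    mul = x * 0.1
    print(x, div, mul, div == mul)
# For x = 3 the two disagree: 0.3 vs 0.30000000000000004
```

So both grids contain floats either way; they just contain slightly different floats, which is why the brute-force search picks a different (m, b) pair.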

    It's because 0.1 (like most decimal values) cannot be represented exactly in binary. It's an infinitely repeating decimal, so what the registers contain is just a close approximation. That's the trade off for the flexibility of floating point numbers. Those of us who work with base 10 have the same problem with 1/3. – Tim Roberts Nov 16 '22 at 20:46
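The comment can be checked with the standard library's decimal module, which converts a float exactly and so reveals the binary value actually stored for the literal 0.1:

```python
from decimal import Decimal

# Decimal(float) converts the binary double exactly, digit for digit,
# showing that the literal 0.1 is not stored as exactly one tenth.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(3 * 0.1))  # the doubly rounded product
print(Decimal(3 / 10))   # the singly rounded quotient, closest double to 0.3
```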
