I'm writing a script in Python that returns a list of terms in the Fibonacci sequence, given a start term and an end term. For example, if I entered "0" as the start term and "6" as the end term, then the output should be:

[0, 1, 1, 2, 3, 5, 8]

Oddly, when I ran the program, the output was:

[0.0, 1.0, 1.0, 2.0, 3.0000000000000004, 5.000000000000001, 8.000000000000002]

To calculate the terms of the sequence, I used Binet's formula, which I entered as ((1 + math.sqrt(5))**x - (1 - math.sqrt(5))**x) / (2**x * math.sqrt(5)). I entered the same formula into a few other calculators to see if they'd give me decimal answers, and none of them did. Did I somehow mistype the formula, or is Python miscalculating it?

  • It is due to how Python displays floating point numbers. Python often displays more decimal digits than calculators do, and all floating point calculations are inherently inexact. – Tony Suffolk 66 May 06 '21 at 20:27
  • Does this answer your question? [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – Hamms May 06 '21 at 20:29

1 Answer

Floating point numbers cannot represent most decimal values exactly, because they are stored in binary with a fixed number of bits of precision. See "Floating Point Arithmetic: Issues and Limitations" in the Python tutorial for details.
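As a quick sketch (not part of the original answer), you can see the same rounding error both in a simple decimal sum and in one of the Binet terms from the question:

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their sum
# is not exactly 0.3.
print(0.1 + 0.2)  # 0.30000000000000004

# The same kind of error shows up in the Binet terms: for x = 4 the
# result is very close to, but not exactly, 3 (this is the
# 3.0000000000000004 from the question's output).
x = 4
print(((1 + math.sqrt(5))**x - (1 - math.sqrt(5))**x) / (2**x * math.sqrt(5)))
```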

In this simple case you can just use round() to get integers again. But beware: for large indices the accumulated floating point error exceeds 0.5, so the rounded value eventually becomes wrong.

import math

print([
    round(
        ((1 + math.sqrt(5))**x - (1 - math.sqrt(5))**x)
        / (2**x * math.sqrt(5))
    )
    for x in range(10)
])

results in

[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
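As an aside (not from the original answer), one way to avoid floating point entirely is to build the terms iteratively: Python's integers have arbitrary precision, so the results stay exact even where rounding Binet's formula starts to drift. The `fib_range` name here is made up for illustration:

```python
def fib_range(start, end):
    """Return the Fibonacci terms from index `start` through `end` inclusive,
    computed with exact integer arithmetic."""
    a, b = 0, 1
    terms = []
    for i in range(end + 1):
        if i >= start:
            terms.append(a)
        a, b = b, a + b  # advance to the next pair of terms
    return terms

print(fib_range(0, 6))  # [0, 1, 1, 2, 3, 5, 8]
```

For small indices this matches the rounded Binet values, but for large enough indices the two approaches diverge, since doubles carry only about 15-16 significant digits.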
d-k-bo