
The largest float in Python is supposed to be:

l=2**(1023)*(2-2**(-52))

1.7976931348623157e+308

This can be verified with:

sys.float_info.max

1.7976931348623157e+308
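For reference, the formula above can be checked directly against `sys.float_info.max` (assuming CPython with IEEE 754 doubles, which is the case on all common platforms):

```python
import sys

# 2**1023 * (2 - 2**-52) is the largest finite IEEE 754 double:
# all-ones 52-bit significand, maximum exponent 1023.
l = 2**1023 * (2 - 2**(-52))
print(l == sys.float_info.max)  # True
```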

However, see the following:

1.0000000000000000000000000001*l

1.7976931348623157e+308

and now:

1.00006*l

inf

What is going on? For which x does (1 + x - ε) * l = 1.7976931348623157e+308 while (1 + x) * l = inf?

Update:

I believe the smallest multiplier that triggers infinity in Python lies between

(1 + 0.5*epsilon) * sys.float_info.max and (1 + 0.51*epsilon) * sys.float_info.max

with epsilon = $2^{-52}$ being the machine epsilon.

See this:

l = sys.float_info.max
(1+0.5*epsilon)*l

1.7976931348623157e+308

(1+0.51*epsilon)*l

inf
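The experiment above in a self-contained form (assuming CPython's IEEE 754 doubles); note that the multipliers themselves get rounded before the multiplication happens:

```python
import sys

epsilon = 2 ** -52          # machine epsilon for IEEE 754 doubles
l = sys.float_info.max

# 1 + 0.5*epsilon is exactly halfway between 1.0 and the next float;
# round-half-to-even resolves the tie down to 1.0.
print(1 + 0.5 * epsilon == 1.0)    # True
print(1 + 0.51 * epsilon == 1.0)   # False: rounds up to the next float

print((1 + 0.5 * epsilon) * l)     # 1.7976931348623157e+308
print((1 + 0.51 * epsilon) * l)    # inf
```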

    So you're talking about the `float` type? Is this just an artefact of the implementation of floating point numbers? Do the answers to this [question](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) help at all? – quamrana Jul 30 '22 at 20:08
  • "What is going on?" I don't understand the question. *What do you think should happen instead, and why?* "For which $x$ happened that $(1+x-\epsilon) l = 1.7976931348623157e+308 $ and $(1+ x ) = inf $" You can determine it experimentally, but I don't see why this information should be useful. Also, Python's *integer* type allows you to create arbitrarily large, finite quantities (subject to the ability to represent them in memory). – Karl Knechtel Jul 30 '22 at 20:12
  • Does https://stackoverflow.com/questions/588004/is-floating-point-math-broken answer your question? Otherwise I am not sure what exactly we are supposed to tell you. – Karl Knechtel Jul 30 '22 at 20:13
  • @quamrana: Yes, "floating point". No, the 0.1+0.2 != 0.3 question doesn't answer it. That is due to truncation (to the 52-bit significand of a 64-bit double) of the infinitely repeating base-2 representation of 0.1 or 0.2. That is another issue. Thanks – Herman Jaramillo Jul 30 '22 at 20:25
  • @mkrieger1: sys.float_info.max is supposed to be the largest number. If you add 1 to it you seem to get exactly the same number (save them to variables and use == for comparison). If you add 1000 you get the same number. The funny thing is that after you add some big number you jump to "inf". What is that big number? – Herman Jaramillo Jul 30 '22 at 20:28
  • You didn't add 1 or 1000, you multiplied with 1 or something slightly larger than 1. – mkrieger1 Jul 30 '22 at 20:31
  • @KarlKnechtel: The reason that 0.1+0.2 is not equal to 0.3 is that, for example, 0.1 = 0.0 0011 0011 0011... in binary, repeating to infinity. A 52-bit mantissa (in an IEEE 64-bit double) truncates this number. That has nothing to do with my question. At some point you go from a finite value to inf. Please review my "update". I wonder where that point is. – Herman Jaramillo Jul 30 '22 at 20:32

1 Answer


In the first case you're actually multiplying by exactly 1:

>>> 1.0000000000000000000000000001
1.0

Binary floating-point is not WYSIWYG - it's subject to rounding back to machine precision at all stages.

Continuing the above:

>>> 1.0000000000000001
1.0
>>> 1.000000000000001
1.000000000000001

So 1.000000000000001 is the smallest literal of this form that doesn't round to exactly 1.0. And then:

>>> sys.float_info.max * 1.000000000000001
inf

Note that it's enough to multiply by the smallest representable float greater than 1, which is a little smaller than that literal's rounded value:

>>> import math
>>> math.nextafter(1.0, 100) * sys.float_info.max
inf
>>> 1.000000000000001 > math.nextafter(1.0, 100)
True
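In fact that smallest float greater than 1 is exactly 1 + 2^-52, i.e. one ulp above 1.0, which can be checked directly (requires Python 3.9+ for `math.nextafter`):

```python
import math
import sys

one_up = math.nextafter(1.0, 2.0)   # smallest float > 1.0
print(one_up)                       # 1.0000000000000002
print(one_up == 1 + 2 ** -52)       # True: one machine epsilon above 1
print(one_up * sys.float_info.max)  # inf
```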

What about addition?

While the examples above use multiplication, the title of the question asks about addition, so let's do that too. math.nextafter() and math.ulp() are the right tools for questions like this. First, an easy computational way to find the largest finite representable float:

>>> import math
>>> big = math.nextafter(math.inf, 0)
>>> big
1.7976931348623157e+308

Now set lastbit to the value of its least-significant bit:

>>> lastbit = math.ulp(big)
>>> lastbit
1.99584030953472e+292
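That value is no accident: the largest finite double has exponent 1023 and 52 fraction bits, so its ulp is exactly 2^(1023-52) = 2^971, which we can confirm:

```python
import math

big = math.nextafter(math.inf, 0.0)  # largest finite double
lastbit = math.ulp(big)

# big = 1.111...1 (53 ones) x 2**1023, so its last bit has weight
# 2**(1023 - 52) = 2**971.
print(lastbit == 2.0 ** 971)   # True
print(math.log2(lastbit))      # 971.0
```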

Certainly if we add that to big it will overflow to infinity. And so it does:

>>> big + lastbit
inf

However, adding even half that will also overflow, due to "to nearest/even" rounding resolving the halfway tie "up":

>>> big + lastbit / 2.0
inf
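Why does the tie resolve "up" here? Round-half-to-even picks the neighbor whose last significand bit is 0, and the largest finite double's significand is all ones (odd), so the tie goes to the next binade boundary, 2^1024, which overflows. A sketch of the bit-level check, using `struct` to expose the raw IEEE 754 encoding:

```python
import math
import struct

big = math.nextafter(math.inf, 0.0)

# Raw bit pattern of the largest finite double: sign 0, exponent field
# 0x7fe, significand all ones -- the "odd" neighbor in the halfway tie.
bits = struct.unpack("<Q", struct.pack("<d", big))[0]
print(hex(bits))                 # 0x7fefffffffffffff
print(big + math.ulp(big) / 2)   # inf
```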

Will anything smaller work? No. Anything smaller will just be thrown away by rounding "down". Here we try adding the next smallest representable float, and see that it has no effect:

>>> big + math.nextafter(lastbit / 2.0, 0)
1.7976931348623157e+308
Tim Peters