In Python, `1e400` evaluates to `inf`, but `10**400` prints out just fine. In theory, `1eN` is supposed to equal `10**N`, so why and when does this break down?
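For example, in a CPython 3 session (where floats are IEEE 754 doubles on my machine):

```python
>>> 1e400              # float literal: too big for a double, becomes inf
inf
>>> len(str(10**400))  # int: arbitrary precision, all 401 digits are there
401
```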
In a related vein, `1e5 == 10**5` evaluates to `True`, while `1e40 == 10**40` evaluates to `False`. And while `int(1e22)` shows a 1 followed by 22 zeros, `int(1e23)` shows `99999999999999991611392`.
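Here's the exact session I'm seeing:

```python
>>> 1e5 == 10**5
True
>>> 1e40 == 10**40
False
>>> int(1e22)   # a 1 followed by 22 zeros, as expected
10000000000000000000000
>>> int(1e23)   # not a 1 followed by 23 zeros
99999999999999991611392
```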
Meanwhile, `10**100000` still prints out just fine. (And although `10**1000000` freezes up my computer for a while, it still doesn't give me an overflow error.)
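To check that the big power really is computed as an exact integer (looking at the bit length rather than printing out 100,001 digits):

```python
>>> x = 10**100000   # no OverflowError, just a very large int
>>> x.bit_length()
332193
```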
What's behind this inaccuracy? Is `1eN` inferior to `10**N`?
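For what it's worth, the `decimal` module shows the exact value the float literal actually stores, and it doesn't match the true power of ten:

```python
>>> from decimal import Decimal
>>> Decimal(1e23)    # the exact binary value behind the literal
Decimal('99999999999999991611392')
>>> Decimal(10**23)  # the true integer, for comparison
Decimal('100000000000000000000000')
```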