Numbers are stored in finite memory. Whether or not you do floating-point arithmetic, you need a way to encode a decimal number in binary memory. So long as you have finite memory (i.e. always, in the real world), you have to choose whether to spend your bits on high range, high precision, or some trade-off of the two.
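As a rough illustration of that trade-off, compare how a 32-bit integer and a 32-bit Float spend the same number of bits (just a sketch; both types are in the Swift standard library):

// Both of these occupy 32 bits, but spend them very differently.
// Int32 stores every whole number in its range exactly...
print(Int32.max)                      // 2147483647 (about 2.1 billion)
// ...while Float trades that exactness for an enormously larger range.
print(Float.greatestFiniteMagnitude)  // 3.4028235e+38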
Going above 7 digits gets you into Float's first trade-off "area". You can "do it", but there's a cost: at this magnitude, you lose some precision. In this case, whole numbers are rounded to the closest 10.
Float is a single-precision IEEE 754 floating-point number. Its first "trade-off" area starts at 16,777,217. From 0 to 16,777,216, every whole number is precisely representable. After that, there isn't enough precision to specify a number down to 2^0 (ones, a.k.a. units). The next best thing is to represent it correctly down to the closest 2^1 (twos).
Check this out:
import Foundation

// Round-trip whole numbers through Float and back to see where precision is lost.
for originalInt in 16_777_210 ... 16_777_227 {
    let intermediateFloat = Float(originalInt)
    let backAsInt = Int(intermediateFloat)
    print("\(originalInt) -> \(backAsInt)")
}

print("\n...\n")

for originalInt in 33_554_430 ... 33_554_443 {
    let intermediateFloat = Float(originalInt)
    let backAsInt = Int(intermediateFloat)
    print("\(originalInt) -> \(backAsInt)")
}
prints:
16777210 -> 16777210
16777211 -> 16777211
16777212 -> 16777212
16777213 -> 16777213
16777214 -> 16777214
16777215 -> 16777215
16777216 -> 16777216 // Last precisely representable whole number
16777217 -> 16777216 // rounds down
16777218 -> 16777218 // starts skipping by 2s
16777219 -> 16777220
16777220 -> 16777220
16777221 -> 16777220
16777222 -> 16777222
16777223 -> 16777224
16777224 -> 16777224
16777225 -> 16777224
16777226 -> 16777226
16777227 -> 16777228
...
33554430 -> 33554430
33554431 -> 33554432
33554432 -> 33554432 // 2^25: last whole number on the "twos" grid
33554433 -> 33554432
33554434 -> 33554432 // rounds down, off by 2
33554435 -> 33554436 // rounds up
33554436 -> 33554436
33554437 -> 33554436 // starts skipping by 4s
33554438 -> 33554440
33554439 -> 33554440
33554440 -> 33554440
33554441 -> 33554440
33554442 -> 33554440
33554443 -> 33554444
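If you'd rather read the gap sizes directly instead of inferring them from round trips, every Float exposes its local spacing as ulp (the "unit in the last place"). A quick sketch:

// .ulp is the gap between a Float and the next representable value above it.
print(Float(16_777_215).ulp)   // 1.0 -- whole numbers are still exact here
print(Float(16_777_216).ulp)   // 2.0 -- from 2^24, values land on a grid of twos
print(Float(33_554_432).ulp)   // 4.0 -- from 2^25, a grid of fours
print(Float(67_108_864).ulp)   // 8.0 -- the gap doubles at every power of two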
And so on. As the magnitude gets larger, whole numbers get represented with less and less precision. At the extreme, the largest whole number value (340,282,346,638,528,859,811,704,183,484,516,925,440) and the second largest whole number value (340,282,326,356,119,256,160,033,759,537,265,639,424) differ by 20,282,409,603,651,670,423,947,251,286,016 (2^104).
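You can check those extremes yourself: greatestFiniteMagnitude is that largest value, nextDown gives the one just below it, and ulp is the gap between them (again, only a sketch):

let largest = Float.greatestFiniteMagnitude   // 3.4028235e+38
let secondLargest = largest.nextDown          // the next representable Float below it
print(largest.ulp)                            // 2.028241e+31, i.e. 2^104
print(largest - secondLargest == largest.ulp) // true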
The ability to express such high numbers comes precisely at the cost of being unable to exactly store many numbers around that magnitude. Rounding happens. Binary integers like Swift's Int, by contrast, have perfect precision for whole numbers (always stored down to the correct ones/units), but pay for it with a vastly smaller maximum value (only 2,147,483,647 for a signed Int32).
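One more sketch to make that contrast concrete: Int32.max itself already sits beyond Float's exact whole-number range, so even that comparatively modest value doesn't survive a round trip through Float (assuming a 64-bit platform, so the result fits in Int):

print(Int32.max)               // 2147483647, stored exactly as an integer
let asFloat = Float(Int32.max) // nearest representable Float is 2^31 = 2147483648
print(Int(asFloat))            // 2147483648 -- off by one after the round trip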