
I was trying some things in Xcode, and ran into an unexplained situation where Xcode gives me a wrong result for a simple division:

```swift
let a: Double = 0.235
let b: Double = 0.001
let nrOfDivisions = a / b

print("Divisions: ", nrOfDivisions) // prints 234.99999999999997
```

Strangely enough, if I divide anything from 0.230 ... 0.234 by the same 0.001, I get correct results, but from 0.235 ... 0.239 I get these wrong results. I've also tested 0.225, 0.226, 0.227, 0.245, 0.246, and 0.247, and they all divide correctly.
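
A short loop like this reproduces the pattern over the whole range (a minimal sketch):

```swift
// Check 0.230 ... 0.239 divided by 0.001: only some come out "wrong".
for i in 230...239 {
    let a = Double(i) / 1000.0   // same value as writing the literal, e.g. 0.235
    print(i, "->", a / 0.001)    // e.g. 235 -> 234.99999999999997
}
```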

What might be the issue here? Is it a bug in Xcode, or am I missing something?

– Starsky

    What is wrong about the result? This is quite normal for floating-point types. – Joakim Danielson Aug 07 '19 at 15:32
    [What Every Computer Scientist Should Know About Floating-Point Arithmetic](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – Rob Aug 07 '19 at 15:38

1 Answer


Well, this is probably the same issue described in Why not use Double or Float to represent currency? It is not that Apple implemented floating point wrong: binary floating-point types simply cannot represent most decimal fractions (such as 0.235 or 0.001) exactly, so small rounding errors like this are expected. In the Java world these questions came up quite often, and BigDecimal was the solution; it is worth reading about.
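
Swift's counterpart to Java's BigDecimal is Foundation's `Decimal`, which stores base-10 digits and can represent values like 0.235 exactly. A minimal sketch of how it applies here (assuming Foundation is available):

```swift
import Foundation

// Decimal stores base-10 significands, so 0.235 and 0.001 are represented
// exactly. Constructing from strings avoids going through a Double literal.
let a = Decimal(string: "0.235")!
let b = Decimal(string: "0.001")!

print(a / b) // 235
```

Note the string initializer: `Decimal(0.235)` would first round the literal to the nearest `Double` and could inherit its binary representation error.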

– Rob

    Thanks for explaining. I read somewhere that this happens with Floats and not with Doubles, because Floats have about 7 decimal digits of precision while Doubles have about 15, which makes Floats less precise. Also, in my observation it doesn't occur with the other numbers I tested, only within the small range I described, although my experiment didn't include many trials. – Starsky Aug 07 '19 at 16:16
    @Starsky This happens, under one circumstance or another, with *every* fixed-precision data type, whether `Float` (32 bits), `Double` (64 bits), `Float80` (80 bits), or beyond. It's no different from asking the computer to "give me an exact representation of 1/3 in decimal, using only `n` digits". For any finite `n`, the answer is necessarily and inescapably imprecise. E.g., if you only had 2 digits to work with, the closest approximation is `0.33`, but you're still off the mark by `0.00333333...` – Alexander Aug 07 '19 at 16:20
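
To see the point in the last comment in action, here is a quick check (a sketch using Foundation's `String(format:)` to show extra digits; the printed values are approximate):

```swift
import Foundation

// Both types round 0.235 to the nearest representable binary fraction;
// Double simply lands much closer to 0.235 than Float does.
let f: Float  = 0.235
let d: Double = 0.235

print(String(format: "%.12f", f)) // ≈ 0.234999999404
print(String(format: "%.20f", d)) // ≈ 0.23499999999999998668
```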