import Foundation

let decimalA: Decimal = 3.24
let decimalB: Double = 3.24
let decimalC: Decimal = 3.0 + 0.2 + 0.04
print(decimalA)    // Prints 3.240000000000000512
print(decimalB)    // Prints 3.24
print(decimalC)    // Prints 3.24

I'm totally confused. Why does this happen? I know why binary floating point numbers lose precision, but I can't understand why Decimal loses precision while storing a decimal number.

I want to know how I can initialize a Decimal without losing precision. An explanation of why this happens would also be very helpful to me.

ST K
    I wanted to post an answer explaining why `let decimalC: Decimal = 3.0 + 0.2 + 0.04` works, but since the question is closed I'll just say it here. The loss of precision when 3, 0.2 and 0.04 are represented as a `Double` is a lot smaller than when 3.24 is represented as a `Double`, so `Decimal` is able to guess correctly that you mean 3, 0.2, 0.04, and not 0.200000000000000011102230246252 or 0.0400000000000000008326672684689. – Sweeper Apr 16 '21 at 09:47
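
To see Sweeper's point in code, here is a rough sketch (it assumes the float-literal path and the `Decimal(_: Double)` initialiser land on the same values; the exact trailing digits depend on the Foundation version, and 3.240000000000000512 is the value from the question's own output):

import Foundation

// Each literal in `3.0 + 0.2 + 0.04` is materialised as a Double first and
// then converted to Decimal. Those Doubles are close enough to the intended
// decimal values that the conversion lands on the short forms, so the
// Decimal sum comes out as exactly 3.24.
let parts: [Double] = [3.0, 0.2, 0.04]
let sum = parts.reduce(Decimal(0)) { $0 + Decimal($1) }
print(sum)                        // expected: 3.24

// 3.24 itself picks up a larger error on the way through Double.
print(Decimal(3.24 as Double))    // e.g. 3.240000000000000512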

1 Answer


The problem is that a floating point literal is converted to a binary `Double` first, and `Decimal`'s float-literal initialiser then works from that already-imprecise `Double`. Swift has no way to hand the literal's decimal digits to `Decimal` directly, so the precision is lost before the `Decimal` is ever created.
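
As a rough illustration (a sketch, not the compiler's actual lowering; it assumes the float-literal initialiser simply forwards the already-converted `Double`):

import Foundation

// The two values below are expected to be identical: the Decimal literal is
// routed through a binary Double before Decimal ever sees it, so writing the
// Double conversion out by hand changes nothing.
let fromLiteral: Decimal = 3.24
let fromDouble = Decimal(3.24 as Double)
print(fromLiteral)   // 3.240000000000000512 in the question's output
print(fromDouble)    // expected to match the line above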

If you want to keep precision, you need to initialise Decimal from a String literal rather than a floating point literal.

import Foundation

let decimalA = Decimal(string: "3.24")!     // parsed from the decimal digits, no Double involved
let double = 3.24
let decimalC: Decimal = 3.0 + 0.2 + 0.04
print(decimalA) // Prints 3.24
print(double)   // Prints 3.24
print(decimalC) // Prints 3.24
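
If the force-unwrapped `String` conversion feels heavy, an exact value can also be assembled from integer parts. A minimal sketch using the sign/exponent/significand initialiser (an alternative the answer doesn't mention, shown here only for completeness):

import Foundation

// 324 x 10^-2 == 3.24, built without ever passing through a binary Double.
let exact = Decimal(sign: .plus, exponent: -2, significand: 324)
print(exact)   // 3.24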

Bear in mind this issue only happens with floating point literals, so if your floating point numbers are generated or parsed at runtime (such as when reading from a file or parsing JSON), you shouldn't face this precision loss.
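
For example, a sketch with made-up names (the point is only that the value stays textual until the `Decimal` is built):

import Foundation

// A hypothetical price arriving as text at runtime, e.g. a CSV column or a
// JSON string field. Converting the text straight to Decimal keeps the digits.
let priceField = "3.24"
let price = Decimal(string: priceField) ?? Decimal(0)
print(price)   // 3.24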

Dávid Pásztor