
Can someone tell me why this is happening?

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.usesGroupingSeparator = false
formatter.roundingMode = .floor
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2

let v = 36
let scale = 10


let float = formatter.string(from: NSNumber(value: Float(v) / Float(scale)))!
let double = formatter.string(from: NSNumber(value: Double(v) / Double(scale)))!

print(float)    // 3.59
print(double)   // 3.60

When I use `Float` the result is 3.59 (the wrong result, in my opinion), and when I use `Double` the result is 3.60.

I know it is something related to the `.floor` roundingMode, but I don't fully understand the reason.
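
The difference becomes visible if you print both quotients with more fraction digits than the formatter shows; a minimal sketch (variable names here are only illustrative):

import Foundation

// Illustrative check: print both quotients with extra fraction digits.
let f = Float(36) / Float(10)    // nearest Float to 3.6, slightly below it
let d = Double(36) / Double(10)  // nearest Double to 3.6, slightly above it

print(String(format: "%.10f", Double(f)))   // 3.5999999046
print(String(format: "%.10f", d))           // 3.6000000000

// With roundingMode = .floor and 2 fraction digits,
// 3.5999999046... is floored to 3.59, while 3.6000000000... stays 3.60.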

Giorgio
  • The float value is 3.59999... and the double value is 3.600000... and you round down towards negative infinity, hence the result. – Joakim Danielson Mar 09 '22 at 15:34
  • Related: https://stackoverflow.com/q/588004/1187415. Mandatory reading: http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html – Martin R Mar 09 '22 at 15:35
  • You can try it e.g. here: https://www.exploringbinary.com/floating-point-converter/. The float value is 3.599999904632568359375 and the double value is 3.600000000000000088817841970012523233890533447265625. – Martin R Mar 09 '22 at 15:40
  • Thank you @MartinR. So, in your opinion, what is the best way to format my division and obtain the value 3.60? Use `Double`, or remove/change the `roundingMode`? – Giorgio Mar 09 '22 at 15:44
  • @LeoDabus Can you elaborate on your comment in an answer? That way I can understand it better and upvote it. – Giorgio Mar 09 '22 at 15:46
  • ```let v = 36; let scale = 10; let number = Decimal(v) / Decimal(scale)``` @LeoDabus Is this not a right way to do it? – Giorgio Mar 09 '22 at 16:04
  • @LeoDabus I don't understand what you mean. I have two integers `v` and `scale`... What is the proper initializer for them? `Decimal(sign: .plus, exponent: 1, significand: v)` and `Decimal(sign: .plus, exponent: 1, significand: scale)`? – Giorgio Mar 09 '22 at 16:59

1 Answer


If you would like to preserve fraction-digit precision, it is better to use Swift's native `Decimal` type; that is exactly what it is for. You can use the `Decimal init(sign: FloatingPointSign, exponent: Int, significand: Decimal)` initializer, using the exponent of your scale and the significand of your value. Just make sure to negate the exponent:

extension SignedInteger {
    var negated: Self { self * -1 }
}

let v = 36
let scale = 10

// Decimal(10) is stored with significand 1 and exponent 1; negating the exponent
// gives -1, i.e. one decimal place to the right. This works as long as `scale` is a power of 10.
let sign: FloatingPointSign = v >= 0 ? .plus : .minus
let exponent = Decimal(scale).exponent.negated
let significand = Decimal(v).significand

let decimal = Decimal(sign: sign, exponent: exponent, significand: significand)
// Uses the formatter configured in the question.
let formatted = formatter.string(for: decimal)   // "3.60"
Leo Dabus
  • Thank you! I think you also have to check `scale` to choose `sign`. Something like `let sign: FloatingPointSign = ((v >= 0 && scale >= 0) || (v < 0 && scale < 0)) ? .plus : .minus` – Giorgio Mar 09 '22 at 17:43
  • @Giorgio Note also that if your scale is not a power of 10 you will need to keep using your original approach, and that's fine. The important thing here is to use Decimal. – Leo Dabus Mar 09 '22 at 17:44
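
For reference, a minimal sketch of the division-based Decimal approach discussed in the comments, assuming the same formatter configuration as in the question (for this particular input the Decimal division is exact):

import Foundation

let v = 36
let scale = 10

// Decimal arithmetic represents 3.6 exactly, so .floor has nothing to round down.
let number = Decimal(v) / Decimal(scale)

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.usesGroupingSeparator = false
formatter.roundingMode = .floor
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2

print(formatter.string(for: number) ?? "")   // 3.60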