1

Here is my code:

let ns = NumberFormatter.init()
ns.allowsFloats = true
ns.maximumFractionDigits = 18 //This is a variable value
ns.minimumFractionDigits = 18 //This is a variable value
ns.roundingMode = .floor
ns.numberStyle = .decimal
let doubleValueOfDecimal : Double = 12.95699999999998
let numb = NSNumber.init(value: doubleValueOfDecimal)
print(numb)
let string = ns.string(from: numb)
print(string)

For the input

doubleValueOfDecimal = 2.95699999999998

the output is

2.95699999999998
Optional("2.956999999999980000")

But if I input

doubleValueOfDecimal =  12.95699999999998

The output is

12.95699999999998
Optional("12.957000000000000000")

The string conversion rounds up the value. Can someone explain to me how this works?

The string conversion is rounding up the decimal places when I want it to show the exact number.

AjinkyaSharma
  • 1,870
  • 1
  • 16
  • 26
  • Can you edit your question and add the declaration, and inferred type if it's not explicit, of `doubleValueOfDecimal`? – CRD Nov 12 '18 at 08:10
  • Possible duplicate of [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – Gereon Nov 12 '18 at 09:31

2 Answers

1

You are falling down the cracks between the expected behaviour of decimal numbers and the reality that Float and Double are binary floating-point types. The fractional part of a decimal number is a sum of 1/10's, 1/100's, etc., while for a binary number it is a sum of 1/2's, 1/4's, etc., so some values that are exact in one representation are inexact in the other, and vice versa.
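As a minimal sketch of this (assuming Foundation's String(format:) is available), you can print more digits than the default description shows to see the nearest binary value each literal is actually stored as:

import Foundation

// Sketch: inspect the nearest representable Double for each literal.
// The extra digits reveal the stored binary value, which generally
// differs slightly from the decimal literal as typed.
let a: Double = 2.95699999999998
let b: Double = 12.95699999999998
print(String(format: "%.20f", a))   // stored value, typically not exactly the literal
print(String(format: "%.20f", b))   // stored value, typically not exactly the literal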

Change your code to include:

let doubleValueOfDecimal : Decimal = Decimal(string:"12.95699999999998")!
let numb = doubleValueOfDecimal as NSDecimalNumber

and the output is probably what you expect:

12.95699999999998
12.956999999999980000

The Decimal type is a decimal floating-point value type; NSDecimalNumber is a subclass of NSNumber which holds a Decimal value.
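Because an NSDecimalNumber is an NSNumber, the formatter setup from the question should still accept it, so the variable fraction digits can be kept. A hedged sketch (not part of the original answer):

import Foundation

// Sketch: reuse the question's formatter configuration, but feed it an
// NSDecimalNumber instead of a Double-backed NSNumber.
let formatter = NumberFormatter()
formatter.allowsFloats = true
formatter.maximumFractionDigits = 18   // still a variable value, as in the question
formatter.minimumFractionDigits = 18
formatter.roundingMode = .floor
formatter.numberStyle = .decimal

let decimalValue = Decimal(string: "12.95699999999998")!
let number = decimalValue as NSDecimalNumber
print(formatter.string(from: number) ?? "")   // 12.956999999999980000, per the output above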

HTH

(Note: you have to initialise the Decimal from a string, as using a numeric literal appears to involve the Swift compiler using binary floating-point at some point in the process...)
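A brief sketch of that caveat (the exact digits printed in the literal case depend on the intermediate Double conversion, so they are not shown here):

import Foundation

// Sketch: a Decimal built from a float literal passes through Double first,
// while the string initialiser keeps the decimal digits exactly.
let fromLiteral: Decimal = 12.95699999999998        // uses init(floatLiteral:), i.e. a Double
let fromString = Decimal(string: "12.95699999999998")!
print(fromLiteral)   // may show the nearest Double's value instead
print(fromString)    // 12.95699999999998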

CRD
  • 52,522
  • 5
  • 70
  • 86
0

Use the `stringValue` wrapper property of the `NSNumber` class:

let doubleValueOfDecimal: Double = 12.95699999999998   // value from the question
let ns = NumberFormatter()
ns.allowsFloats = true
ns.maximumFractionDigits = 18
ns.minimumFractionDigits = 18
ns.roundingMode = .floor
ns.numberStyle = .decimal
let numb = NSNumber(value: doubleValueOfDecimal)
print(numb)
let string = ns.string(from: numb)   // formatter output (not used here)
print(numb.stringValue)              // exact description via the NSNumber wrapper

Below is the output for the values 2.95699999999998 and 12.95699999999998.

Output

2.95699999999998
2.95699999999998

12.95699999999998
12.95699999999998

Pratik Sodha
  • 3,679
  • 2
  • 19
  • 38
  • I have used NumberFormatter class for a reason. The maximumFractionDigits and minimumFractionDigits are variables and not constants. So I can't ditch the whole setup for Formatter and directly use the stringValue – AjinkyaSharma Nov 12 '18 at 07:45