let dic : [Double : Double] = [1.1 : 2.3, 2.3 : 1.1, 1.2 : 2.3]

print(dic)// [2.2999999999999998: 1.1000000000000001, 1.2: 2.2999999999999998, 1.1000000000000001: 2.2999999999999998]



let double : Double = 2.3
let anotherdouble : Double = 1.1

print(double) // 2.3
print(anotherdouble) // 1.1

I don't understand why the compiler prints values from dictionaries differently. I'm on Swift 3, Xcode 8. Is this a bug, or some strange optimization, or something else?

EDIT

What's even more weird is that :

Some values go over, some go below, and some stay as they are! 1.1 becomes 1.1000000000000001 (more), 2.3 becomes 2.2999999999999998 (less), while 1.2 is just 1.2.

mfaani
    Doesn't look like a bug exactly, but it is a bit inconsistent – harold Dec 04 '16 at 13:54
    FWIW, the exact values of 1.1 and 2.3 are 1.100000000000000088817841970012523233890533447265625 and 2.29999999999999982236431605997495353221893310546875, explaining the direction and magnitude of the deviation – harold Dec 04 '16 at 14:03
  • @harold What do you mean, their exact value is that? I'm not following. Where/how did you come up with those numbers? – mfaani Dec 04 '16 at 14:04
    This is the nature of floating point numbers. Find more details [in this answer](http://stackoverflow.com/a/40526353/1457385). – shallowThought Dec 04 '16 at 14:05
  • And 1.2 is 1.1999999999999999555910790149937383830547332763671875, which rounds up to 1.2 if printed with a precision that includes all those 9's but not the first 5 – harold Dec 04 '16 at 14:08
  • @shallowThought I'm reading into it; interesting, but that doesn't explain why a normal print of the number differs from printing it within a dictionary, or does it? – mfaani Dec 04 '16 at 14:08
    Also, you probably shouldn't use a dictionary with doubles as keys anyway. – harold Dec 04 '16 at 14:19
  • @harold As keys, not as values. Hmm, something to keep in mind, thanks – mfaani Dec 04 '16 at 14:20

2 Answers


As already mentioned in the comments, a Double cannot store the value 1.1 exactly. Swift uses (like many other languages) binary floating point numbers according to the IEEE 754 standard.

The closest number to 1.1 that can be represented as a Double is

1.100000000000000088817841970012523233890533447265625

and the closest number to 2.3 that can be represented as a Double is

2.29999999999999982236431605997495353221893310546875

Printing that number means that it is converted to a string with a decimal representation again, and that is done with different precision, depending on how you print the number.
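You can observe the stored value directly by requesting more digits with `String(format:)` from Foundation (a quick check, not part of the original answer; 17 significant digits are enough to distinguish any two `Double` values):

```swift
import Foundation

let x = 1.1
// Ask printf for 17 significant digits -- enough to
// uniquely identify any Double:
print(String(format: "%.17g", x))    // 1.1000000000000001

// 2.3 rounds *down* to the nearest representable Double:
print(String(format: "%.17g", 2.3))  // 2.2999999999999998
```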

From the source code at HashedCollections.swift.gyb one can see that the description method of Dictionary uses debugPrint() for both keys and values, and debugPrint(x) prints the value of x.debugDescription (if x conforms to CustomDebugStringConvertible).

On the other hand, print(x) calls x.description if x conforms to CustomStringConvertible.

So what you see is the different output of description and debugDescription of Double:

print(1.1.description) // 1.1
print(1.1.debugDescription) // 1.1000000000000001

From the Swift source code one can see that both use the swift_floatingPointToString() function in Stubs.cpp, with the Debug parameter set to false and true, respectively. This parameter controls the precision of the number to string conversion:

int Precision = std::numeric_limits<T>::digits10;
if (Debug) {
  Precision = std::numeric_limits<T>::max_digits10;
}

For the meaning of those constants, see std::numeric_limits:

  • digits10 – number of decimal digits that can be represented without change,
  • max_digits10 – number of decimal digits necessary to differentiate all values of this type.

So description creates a string with less decimal digits. That string can be converted to a Double and back to a string giving the same result. debugDescription creates a string with more decimal digits, so that any two different floating point values will produce a different output.
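A small sketch to verify the round-trip property (value comparisons only, since the exact `debugDescription` string can differ between Swift versions):

```swift
// Two decimal literals that round to the same binary value
// compare as equal -- they are the same Double bit pattern:
let a = 1.1
let b = 1.1000000000000001
print(a == b)  // true

// description uses just enough digits that parsing the
// string back yields the identical Double:
let roundTripped = Double(a.description)!
print(roundTripped == a)  // true
```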

Martin R
  • Interesting. So I just tried `let dic : [Double : Double] = [1.1 : 2.3, 1.1000000000000001 : 2.3]` and I got error **fatal error: Dictionary literal contains duplicate keys** ... while unsurprisingly `let dic : [Double : Double] = [1.1 : 2.3, 1.10000000000000031 : 2.3]` doesn't create an error. Also not sure if I would fully understand this in one bite. Time will help:) – mfaani Dec 04 '16 at 14:15
  • @Honey: Yes, both values are converted to the same 64-bit binary floating point number when stored in a Double. Try `let x = 1.1; let y = 1.1000000000000001; print(x == y)`. – Martin R Dec 04 '16 at 14:19
  • So you mean every double always has a representation in a 16-decimal-digit precision format? – mfaani Dec 04 '16 at 14:22
  • @Honey: Sorry, I don't get what you mean. `Float` and `Double` use the IEEE 754 standard, see e.g. https://en.wikipedia.org/wiki/IEEE_floating_point. This thread is very instructive: http://stackoverflow.com/questions/588004/is-floating-point-math-broken. – Martin R Dec 04 '16 at 14:27
  • I mean there are 16 digits after the `.`, i.e. they are represented in such a format – mfaani Dec 04 '16 at 15:57
  • @Honey there are not, though that's a common choice for printing. They're not stored in decimal in the first place, see the linked articles. – harold Dec 04 '16 at 16:05

Yes, Swift uses binary floating-point numbers when storing them in a dictionary.

Declare the dictionary as [Double: Any]; use Float if your number fits in 32 bits, then upcast to AnyObject.

See the example below:

    let strDecimalNumber = "8.37"
    var myDictionary : [String: Any] = [:]
    myDictionary["key1"] = Float(strDecimalNumber) as AnyObject   // 8.37
    myDictionary["key2"] = Double(strDecimalNumber) as AnyObject  // 8.369999999999999
    myDictionary["key3"] = Double(8.37) as AnyObject              // 8.369999999999999
    myDictionary["key4"] = Float(8.37) as AnyObject               // 8.37
    myDictionary["key5"] = 8.37                                   // 8.3699999999999992
    myDictionary["key6"] = strDecimalNumber                       // "8.37", it is a String
    myDictionary["key7"] = strDecimalNumber.description           // "8.37", it is a String
    myDictionary["key8"] = Float(10000000.01)                     // 10000000.0
    myDictionary["key9"] = Float(100000000.01)                    // 100000000.0
    myDictionary["key10"] = Float(1000000000.01)                  // 1e+09
    myDictionary["key11"] = Double(1000000000.01)                 // 1000000000.01
    print(myDictionary)

myDictionary will print as:

["key1": 8.37, "key2": 8.369999999999999, "key3": 8.369999999999999, "key4": 8.37, "key5": 8.3699999999999992, "key6": "8.37", "key7": "8.37", "key8": 10000000.0, "key9": 100000000.0, "key10": 1e+09, "key11": 1000000000.01]

As mentioned by Martin R in the answer above, using .description gives you a String, not the actual Float.
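If you need exact decimal values (money, for example), binary floating point is the wrong tool altogether. A hedged alternative, not part of the original answer: Foundation's `Decimal` type stores base-10 digits exactly, so "8.37" survives intact:

```swift
import Foundation

// Decimal keeps decimal digits, so no binary rounding occurs:
let exact = Decimal(string: "8.37")!
var amounts: [String: Any] = [:]
amounts["price"] = exact
print(exact)  // 8.37
```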

SML