I have a very simple function to convert a temperature from kelvin (K) to degrees Celsius (˚C).
func convertKelvinToCelsius(temp: Double) -> Double {
    return temp - 273.15
}
And I have a unit test to drive this function. This is where the problem is:
func testKelvinToCelsius() {
    var check1 = conv.convertKelvinToCelsius(200.00) // -73.149999999999977
    var check2 = 200.00 - 273.15                     // -73.149999999999977
    var check3 = Double(-73.15)                      // -73.150000000000006

    // Passes
    XCTAssert(conv.convertKelvinToCelsius(200.00).description == Double(-73.15).description, "Should convert from kelvin to celsius")

    // Fails
    XCTAssert(conv.convertKelvinToCelsius(200.00) == Double(-73.15), "Should convert from kelvin to celsius")
}
When you add a breakpoint and check the values of check1, check2 and check3, they are very interesting:
check1 Double -73.149999999999977
check2 Double -73.149999999999977
check3 Double -73.150000000000006
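To confirm that these really are two different Double values and not just a debugger display quirk, I printed them with extra digits. A minimal sketch, assuming Foundation is available for String(format:) (println as in the rest of this post; print in current Swift):

import Foundation

let check1 = 200.00 - 273.15   // same value the converter returns
let check3 = Double(-73.15)

// 17 digits after the decimal point is more than enough to tell two Doubles apart
println(String(format: "%.17f", check1)) // roughly -73.149999999999977, as in the debugger
println(String(format: "%.17f", check3)) // roughly -73.150000000000006, as in the debugger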
Questions:
Why does Swift return different values for check1/check2 and check3?
How can I get the second test to pass? Writing it the way I did in the first test smells: why should I have to convert Doubles to Strings just to compare them? (See the tolerance sketch after these questions.)
Finally, when I println check1, check2 and check3, they all print as '-73.15'. Why? Why not print the value accurately instead of confusing the programmer!?
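For the second question, I could presumably compare within a tolerance instead of using ==. A minimal sketch of what I mean (the 1e-9 epsilon is an arbitrary choice of mine, and conv is the same converter instance as in the test above; XCTest also ships an accuracy-based assertion, XCTAssertEqualWithAccuracy in older versions and XCTAssertEqual(_:_:accuracy:) in newer ones):

import XCTest

func testKelvinToCelsiusWithTolerance() {
    let result = conv.convertKelvinToCelsius(200.00)

    // Compare within a small tolerance instead of exact equality.
    // The 1e-9 epsilon is an arbitrary choice for this sketch.
    XCTAssert(abs(result - (-73.15)) < 1e-9, "Should convert from kelvin to celsius")
}

But it is not obvious to me why that should be necessary for such a simple subtraction.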
To Reproduce:
Just type 200 - 273.15 == -73.15 in your playground and watch it evaluate to false!
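Or, as a slightly longer playground snippet tying the reproduction to the println confusion (the comments just restate what I see in the playground sidebar):

let a = 200.00 - 273.15
let b = -73.15

println(a)      // -73.15 (the printed description rounds to a short form)
println(b)      // -73.15
println(a == b) // false  (the stored values still differ)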