Using C++, I'm trying to cast a float value to an int with the following code:
#include <iostream>

int main() {
    float NbrToCast = 1.8f;
    int TmpNbr = NbrToCast * 10; // implicit conversion to int, which truncates toward zero
    std::cout << TmpNbr << "\n";
}
I understand that the value 1.8 cannot be represented exactly as a float and is actually stored as approximately 1.79999995. Thus, I would expect that multiplying this value by ten would result in 17.9999995, and that converting it to an int would then give 17.
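To illustrate what I mean, here is a small check (just a sketch, assuming <iomanip> for std::setprecision and IEEE 754 floats) that prints the stored value and the product at higher precision, before any conversion to int:

#include <iomanip>
#include <iostream>

int main() {
    float NbrToCast = 1.8f;
    // On IEEE 754 hardware the nearest float to 1.8 is about 1.7999999523.
    std::cout << std::setprecision(20) << NbrToCast << "\n";
    // The product before the conversion; how this is computed seems to differ between compilers.
    std::cout << std::setprecision(20) << NbrToCast * 10 << "\n";
}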
When compiling and running this code with MinGW (v4.9.2, 32-bit) on Windows 7, I get the expected result (17).
When compiling and running this code with Clang (v600.0.57) on my Mac (OS X 10.11), I get 18, which is not what I expected, although it seems more correct mathematically!
Why do I get this difference? Is there a way to get consistent behavior regardless of the OS or the compiler?
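For example, I would be happy to make the rounding explicit instead of relying on the implicit conversion. A minimal sketch of what I have in mind (assuming <cmath> and IEEE 754 arithmetic) could look like this:

#include <cmath>
#include <iostream>

int main() {
    float NbrToCast = 1.8f;

    // Round to the nearest integer explicitly.
    long Rounded = std::lround(NbrToCast * 10);

    // Or truncate explicitly, after doing the multiplication in double precision,
    // so the product of the stored float (about 1.7999999523) by 10 stays below 18.
    long Truncated = static_cast<long>(std::floor(static_cast<double>(NbrToCast) * 10));

    std::cout << Rounded << " " << Truncated << "\n"; // I would expect 18 and 17 here
}

But I would still like to understand why the original code behaves differently on the two compilers.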