Calling ToString()
on imprecise floats produces the value a human would expect (e.g. 4.999999... gets printed as 5
). However, casting to int simply drops the fractional part, so the result is an integer one less than expected.
float num = 5f;
num *= 0.01f;
num *= 100f;
num.ToString().Dump(); // produces "5", as expected
((int)num).ToString().Dump(); // produces "4"
How do I cast the float to int so that I get the human-friendly value that float.ToString()
produces?
I'm using this dirty hack:
int ToInt(float value) {
    return (int)(value + Math.Sign(value) * 0.00001);
}
...but surely there must be a more elegant solution.
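One possible alternative is to round-trip through the same kind of string formatting that produces the "5" in the first place: format the float with 7 significant digits (the "G7" rounding that classic .NET Framework float.ToString() defaults to), parse the result back as a double, and truncate that. This is a sketch, not a canonical answer; the helper name ToIntLikeToString is made up here, and forcing "G7" explicitly is an assumption to keep the behavior consistent on runtimes (such as .NET Core 3.0+) where float.ToString() instead emits the shortest round-trippable string:

```csharp
using System;
using System.Globalization;

static class FloatCast
{
    // Render with 7 significant digits (mirroring classic float.ToString()
    // behavior), parse back as a double, then truncate toward zero.
    // Note: values outside the int range will overflow on the cast.
    public static int ToIntLikeToString(float value)
    {
        string rendered = value.ToString("G7", CultureInfo.InvariantCulture);
        return (int)double.Parse(rendered, CultureInfo.InvariantCulture);
    }

    static void Main()
    {
        Console.WriteLine(ToIntLikeToString(4.9999993f)); // 5
        Console.WriteLine(ToIntLikeToString(4.999999f));  // 4
        Console.WriteLine(ToIntLikeToString(-4.99f));     // -4
    }
}
```

Parsing back as a double rather than a float avoids re-introducing the very representation error the formatting just rounded away.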
Edit: I'm well aware of the reasons why floats are truncated the way they are (4.999... to 4, etc.). The question is about casting to int
while emulating the default behavior of System.Single.ToString().
To better illustrate the behavior I'm looking for:
-4.99f should be cast to -4
4.01f should be cast to 4
4.99f should be cast to 4
4.999999f should be cast to 4
4.9999993f should be cast to 5
This is the exact same behavior that ToString
produces.
Try running this:
float almost5 = 4.9999993f;
Console.WriteLine(almost5); // "5"
Console.WriteLine((int)almost5); // "4"