I have worked mostly with integers until now, and in situations where I needed to truncate a float or double to an integer, I used to write:
(int) someValue
until I came across the following:
NSLog(@"%i", (int) ((1.2 - 1) * 10)); // prints 1
NSLog(@"%i", (int) ((1.2f - 1) * 10)); // prints 2
(please see Strange behavior when casting a float to int in C# for the explanation).
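Printing the intermediate values with extra precision shows what the cast actually sees (the exact trailing digits may differ slightly by platform):
NSLog(@"%.17g", (1.2 - 1) * 10);  // roughly 1.9999999999999996 (double arithmetic)
NSLog(@"%.17g", (1.2f - 1) * 10); // roughly 2.0000004768371582 (float arithmetic, promoted to double for NSLog)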
The short question is: how should we truncate a float or double to an integer properly? (Truncation is wanted in this case, not rounding.) One could argue that since one intermediate value is roughly 1.9999999999999996 and the other is roughly 2.0000005, the truncation above is actually done correctly. So the question is really: how should we convert a float or double so that the result is the "truncated" number that makes common-sense usage?
(The intention is not to use round, because in this case, for 1.8, we do want the result to be 1, not 2.)
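To make the rounding-versus-truncation distinction concrete, using the standard C math.h functions:
#include <math.h>
NSLog(@"%g", round(1.8)); // prints 2 -- rounding, not what we want here
NSLog(@"%g", trunc(1.8)); // prints 1 -- truncation, what we want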
Longer question:
I used
int truncateToInteger(double a) {
return (int) (a + 0.000000000001);
}
-(void) someTest {
NSLog(@"%i", truncateToInteger((1.2 - 1) * 10));
NSLog(@"%i", truncateToInteger((1.2f - 1) * 10));
}
and both print out 2, but it feels like too much of a hack. What small number should we use to "remove the inaccuracy"? Is there a more standard or studied approach, rather than such an arbitrary constant?
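For example, one variant I experimented with scales the tolerance to the magnitude of the value instead of using a fixed constant (just a sketch; the choice of a few ULPs as the tolerance is as debatable as the constant above):
#include <float.h>
#include <math.h>

int truncateToIntegerWithTolerance(double a) {
    double nearest = round(a);
    // If a is within a few ULPs of an integer, assume the mathematically
    // exact result was that integer and the difference is accumulated
    // floating-point error, so return that integer.
    if (fabs(a - nearest) <= 4 * DBL_EPSILON * fabs(a)) {
        return (int) nearest;
    }
    // Otherwise truncate toward zero as usual.
    return (int) trunc(a);
}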
(Note that we want truncation, not rounding, in some usages. For example, if 90 or 118 seconds have elapsed, then when we display the elapsed minutes and seconds, the minutes should show as 1, and should not be rounded up to 2.)
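As a concrete sketch of that case (assuming the elapsed time arrives as a double number of seconds):
void logElapsed(double elapsedSeconds) {
    int minutes = (int) (elapsedSeconds / 60.0);            // truncate: 90 -> 1, 118 -> 1
    int seconds = (int) (elapsedSeconds - minutes * 60.0);  // 90 -> 30, 118 -> 58
    NSLog(@"%i min, %i sec", minutes, seconds);
}
And of course the minutes division here is exactly the kind of place where an intermediate value like 1.9999999999999996 would bite, which is why I am asking.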