5

I just ran into a situation in Objective-C where:

NSLog(@"%i", (int) (0.2 * 10));         // prints 2
NSLog(@"%i", (int) ((1.2 - 1) * 10));   // prints 1

so I wonder: if the value is a float or double and we want an integer, should we never just use (int) to do the cast, but use (int) round(someValue) instead? Or, to flip the question around: when should we use a plain (int), and in those situations, couldn't (int) round(someValue) also do the job, so that we should almost always use (int) round(someValue)?

nonopolarity
  • 146,324
  • 131
  • 460
  • 740
  • 1
    It obviously depends on what you want. Sometimes you need to round first, sometimes not. – Daniel Fischer Jun 27 '12 at 14:58
  • 1
    the question is, WHEN do we not need to round first? I can never afford to have a 2.0 cast to a 1, if the 2.0 comes from (1.2 - 1) * 10 – nonopolarity Jun 27 '12 at 14:59
  • 2
    @動靜能量 If you can *never* afford to have 2.0 cast to a 1, then *never* cast. It's pretty straightforward. – benzado Jun 27 '12 at 15:04
  • But what if it's actually `(1.199999999 - 1)*10`? There's no rule for all cases. You have to decide case-by-case. – Daniel Fischer Jun 27 '12 at 15:05
  • Casting to int (in theory) merely truncates at the decimal point, dropping anything behind it. So what in the world is going on in this case? – Highrule Jun 27 '12 at 15:06
  • @benzado never cast? But I want an integer... or display it as an integer... is it true that `printf` or `stringWithFormat`'s `%.0f` always uses `round` to do it? – nonopolarity Jun 27 '12 at 15:07
  • @動靜能量 Cast the output of `round` or `roundf`. – benzado Jun 27 '12 at 15:07
  • @benzado But it's assumed that casting to an int already rounds down (via truncation). Is this not the case in Objective-C? – Highrule Jun 27 '12 at 15:10
  • @Daniel do you mean what if it is `(1.199999999 - 1)*10` and we want the floor (that is, we want 1)? Then in this case, isn't it better to state that clearly by using the `floor` function? I really don't like 2.0 cast to 2 and (1.2 - 1) * 10 cast to 1 – nonopolarity Jun 27 '12 at 15:10
  • @Highrule it is floating-point precision at work: (1.2 - 1) * 10 comes out as 1.9999999999999999999 – nonopolarity Jun 27 '12 at 15:14
  • 4
    I meant what if your `1.2` isn't really `1.2` but something slightly smaller? Should that come out as 2 or as 1? Note that with standard `double`s, `1.2 - 1` is actually `0.1999999999999999555910790149937383830547332763671875` while with `float`s, it's `0.2000000476837158203125` (see the check after these comments). – Daniel Fischer Jun 27 '12 at 15:14
  • Floating-point rounding seems to be the culprit, as everyone is mentioning: http://stackoverflow.com/questions/8911440/strange-behavior-when-casting-a-float-to-int-in-c-sharp – Highrule Jun 27 '12 at 15:21
  • In general, casting a float/double to an int (with or without rounding) should be very rare, so you can easily take the time to carefully consider the implications of each case. (You do, of course, need to first understand the approximate nature of IEEE floating-point numbers.) – Hot Licks Jun 27 '12 at 15:21
  • @Daniel so I think it is a tough one when we want the floor and double or float will give 1 and 2 respectively (then what is the solution in this situation?). If we want it rounded, then `(int)round(someValue)` should do it no matter whether it is float or double? – nonopolarity Jun 27 '12 at 15:26
  • 1
    Floating point numbers prove the existence of God by negation, as they are clearly the work of the Devil. – Bob Jarvis - Слава Україні Jun 27 '12 at 16:46
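
The float vs. double difference Daniel Fischer quotes in the comments can be checked directly; a minimal C program (the exact printed digits assume IEEE 754 arithmetic):

#include <stdio.h>

int main(void)
{
    // double: 1.2 is stored slightly below 1.2, so the product lands just under 2.
    printf("%.20f\n", (1.2 - 1) * 10);    // 1.99999999999999955591 -> (int) gives 1
    // float: 1.2f is stored slightly above 1.2, so the product lands just over 2.
    printf("%.20f\n", (1.2f - 1) * 10);   // 2.00000047683715820312 -> (int) gives 2
    return 0;
}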

3 Answers

3

The issue here is not converting floating-point values to integers but the rounding errors that occur in floating-point arithmetic. When you use floating-point, you should understand it well.

The common implementations of float and double use IEEE 754 binary floating-point values. These values are represented as a significand multiplied by a power of two multiplied by a sign (+1 or -1). For floats, the significand is a 24-bit binary numeral, with one bit before the “decimal point” (or “binary point” if you prefer). E.g., the number 1.25 has a significand of 1.01000000000000000000000. For doubles, the significand is a 53-bit binary numeral, with one bit before the point.
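
One way to see the significand directly is the standard C “%a” conversion, which prints a double’s exact binary significand (in hexadecimal) and its power-of-two exponent:

#include <stdio.h>

int main(void)
{
    // 1.25 is exactly representable: binary significand 1.01, exponent 0.
    printf("%a\n", 1.25);   // prints 0x1.4p+0
    // .1 is not exactly representable; the stored significand only approximates it.
    printf("%a\n", 0.1);    // prints 0x1.999999999999ap-4
    return 0;
}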

Because of this, the values of the decimal numerals .1 and 1.1 cannot be represented exactly. They must be approximated. When you write “.1” or “1.1” in source code, the compiler converts it to a double that is very near the intended value. Sometimes that result is slightly greater than the decimal value you wrote; sometimes it is slightly lower.
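
You can make these approximations visible by asking printf for more digits than are usually shown (the printed values assume IEEE 754 doubles):

#include <stdio.h>

int main(void)
{
    // The compiler rounds each literal to the nearest representable
    // double; the error can go in either direction.
    printf("%.20f\n", 0.1);   // 0.10000000000000000555 (slightly greater)
    printf("%.20f\n", 1.1);   // 1.10000000000000008882 (slightly greater)
    printf("%.20f\n", 1.2);   // 1.19999999999999995559 (slightly lower)
    return 0;
}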

When you convert a float or double to int, the result is (by definition) only the integer portion of the value; the value is truncated toward zero. So, if your value is slightly greater than a positive integer, you get that integer. If your value is slightly less than a positive integer, you get the next lower integer.

If you expect that exact mathematics would give you an integer result, and the floating-point operations you are performing are so few and so simple that the errors have not accumulated too much, then you can round the floating-point value to an integer, using the round function. In simple situations with positive values, you can also round by adding .5 before truncating, as in “(int) (f + .5)”.
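
Applied to the expression from the question (round comes from math.h; the results assume IEEE 754 doubles):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double f = (1.2 - 1) * 10;        // about 1.9999999999999996
    printf("%d\n", (int) f);          // truncation: prints 1
    printf("%d\n", (int) round(f));   // round to nearest: prints 2
    printf("%d\n", (int) (f + .5));   // add .5, then truncate: prints 2 (positive values only)
    return 0;
}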

Eric Postpischil
  • 195,579
  • 13
  • 168
  • 312
  • if we want the floor of a float or double, I wonder if there is a practice of doing `(int) (f + 0.000000001)` or adding some other small number, so that a value that should be 2.0 isn't truncated to a 2 sometimes and a 1 sometimes. – nonopolarity Jun 27 '12 at 16:13
  • 2.0 is never truncated to 1. The only time “(int) f” evaluates to 1 is when 1 <= f < 2. The situation you are thinking of is when f appears to be 2.0 because you have displayed it with limited precision, so the display process rounded it. If, in fact, you have calculated an f that is very near 2 but is less than 2, and you want to get 2 as a result, then one solution is to add a small value before casting. Since rounding errors differ in each algorithm that uses floating-point, the value to add, or other process to get a correct result, may differ, and there is no one-size-fits-all solution. – Eric Postpischil Jun 27 '12 at 16:27
  • Aha, it was for `NSLog(@"%i", (int) ((1.2 - 1) * 10));` and `NSLog(@"%i", (int) ((1.2f - 1) * 10));` where the first line shows a 1 and the second line shows a 2 – nonopolarity Jun 28 '12 at 00:30
1

It depends on what you want. Obviously, a straight cast to int will be faster than a call to round, whereas round will give more accurate results.

Unless you are writing code that relies on speed to be effective (in which case floating-point values might not be the best choice, either), I would say it's worth it to call round. Even if it only changes something you display on-screen by one pixel, when dealing with certain things (angle measures, colors, etc.), the more accuracy you can have, the better.

EDIT: Simple test to back up my claim of casting being faster than rounding:

Tested on a MacBook Pro:

  • 2.8 GHz Intel Core 2 Duo
  • Mac OS 10.7.4
  • Apple LLVM 3.1 Compiler
  • -O0 (no optimization)

Code:

#include <math.h>
#include <stdio.h>
#include <time.h>

int value;

void test_cast(void)
{
    clock_t start = clock();
    value = 0;
    for (int i = 0; i < 1000 * 1000; i++)
    {
        // Truncate toward zero with a plain cast.
        value += (int) (((i / 1000.0) - 1.0) * 10.0);
    }

    printf("test_cast: %lu\n", (unsigned long) (clock() - start));
}

void test_round(void)
{
    clock_t start = clock();
    value = 0;
    for (int i = 0; i < 1000 * 1000; i++)
    {
        // Round to nearest, then convert.
        value += (int) round(((i / 1000.0) - 1.0) * 10.0);
    }

    printf("test_round: %lu\n", (unsigned long) (clock() - start));
}

int main(void)
{
    test_cast();
    test_round();
    return 0;
}

Results:

test_cast: 11895
test_round: 14353

Note: I know that clock() isn't the best profiling function, but it does show that round() at least consumes more CPU time.

Richard J. Ross III
  • 55,009
  • 24
  • 135
  • 201
0

Conversion to int rounds a float or double value towards zero.

1.1 -> 1, 1.99999999999 -> 1, 2.0 -> 2

-1.1 -> -1, -1.999999999 -> -1, -2 -> -2
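
The same truncation rule, written as a quick check:

#include <stdio.h>

int main(void)
{
    // Conversion to int truncates toward zero for both signs.
    printf("%d %d %d\n", (int) 1.1, (int) 1.99999999999, (int) 2.0);    // 1 1 2
    printf("%d %d %d\n", (int) -1.1, (int) -1.999999999, (int) -2.0);   // -1 -1 -2
    return 0;
}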

Is that what you want? If yes, that's what you should do. If not, what do you want?

Floating-point arithmetic is subject to rounding errors. So, for example, 0.2 * 10 will give a number that is close to 2. It might be a little bit less, a little bit more, or by pure chance exactly 2. Therefore (int) (0.2 * 10) might be 1 or 2, because "a little bit less than 2" will be converted to 1.

round(x) will round to the nearest integer. Again, if you calculate round(1.4 + 0.1), the sum of 1.4 and 0.1 is some number very close to 1.5, maybe a bit less, maybe a bit more, so you don't know whether it gets rounded to 1.0 or 2.0.

Would you want all numbers from 1.5 to 2.5 to be rounded to 2? Use (int) round(x). You might get a slightly different result if x is exactly 1.5 or 2.5. Maybe you want numbers up to 1.9999 to be rounded down, but 1.99999999 to be rounded up to 2. Then use double, not float, and calculate (int) (x + 0.000001).
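
A sketch of that last approach, using a hypothetical helper name and the 0.000001 tolerance from above:

#include <stdio.h>

// Hypothetical helper: treat anything within 0.000001 of the next
// integer as having reached it, then truncate. Intended for
// non-negative x; negative values still truncate toward zero.
static int to_int_with_tolerance(double x)
{
    return (int) (x + 0.000001);
}

int main(void)
{
    printf("%d\n", to_int_with_tolerance((1.2 - 1) * 10));   // prints 2
    printf("%d\n", to_int_with_tolerance(1.9999));           // prints 1, rounded down as desired
    return 0;
}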

gnasher729
  • 51,477
  • 5
  • 75
  • 98