I came across this unusual bug while working on some bitwise exercises. When the output of pow() was cast to an unsigned int, the result of pow() called with a variable as the exponent became zero, while the result with a literal integer exponent was coerced to 0xFFFFFFFF (2^32 - 1). This only happens when the value is too large to represent, in this case 2^32. The type of the variable used as the exponent does not seem to affect the result. I also tried storing the output of both calls to pow() in doubles and then applying the cast when referencing the variables; the disparity persisted.
#include <math.h>
#include <stdio.h>
int main(void) {
    int thirtytwo = 32; // double, unsigned, etc. all yielded the same result
    printf("Raw Doubles Equal: %s\n", pow(2, 32) == pow(2, thirtytwo) ? "true" : "false"); // -> true
    printf("Coerced to Unsigned Equal: %s\n", (unsigned) pow(2, 32) == (unsigned) pow(2, thirtytwo) ? "true" : "false"); // -> false
    return 0;
}
Out of curiosity, I ran the same code through clang/llvm, and obtained a different result: regardless of whether the exponent was a variable, coercing the result to an unsigned int yielded zero (as expected).
Edit: The maximum 32-bit unsigned integer is 2^32 - 1, so neither coerced output is actually correct. My mistake was overflowing the integer size limit. (In fact, converting a double whose value cannot be represented in the destination integer type is undefined behavior in C, which is why the two compilers are free to disagree.) Why gcc essentially rounded down to the maximum integer value is an interesting curiosity, but not of particular importance.