While reviewing someone's code, I encountered a situation similar to the following one, where the error (which is basically just poor programming practice) is not directly visible. Depending on the compiler used, i/++i might evaluate to either 0 or 1.
#include <stdio.h>

int foo(int n) {
    printf("Foo is %d\n", n);
    return 0;
}

int bar(int n) {
    printf("Bar is %d\n", n);
    return 0;
}

int main(int argc, char *argv[]) {
    int x = 0;
    int (*my_array[3])(int);    /* array of three pointers to functions taking an int */
    int i = 1;
    int y = i / ++i;            /* i is read and modified in the same expression */
    printf("\ni/++i = %d, ", y);
    my_array[1] = foo;
    my_array[2] = bar;
    (my_array[++x])(++x);       /* x is modified twice in the same expression */
    return 0;
}
Therefore, the output is either Foo is 2 or Bar is 2.
My questions may be considered too broad, but I want to know:
- Why is this happening / why is this allowed by the compiler? (I checked with several compilers, and none of them produced a warning.)
- How can we correct this kind of strange behavior? (For instance, the project I was working on was huge; what will happen if worse things like heap exploitation, BSS overflows, or inconsistent synchronization are allowed by the compiler as well? One does not simply sleep well at night after realizing this.) A possible sequenced rewrite is sketched after this list.
- I realize there are dozens of coding-style books on the market, but how would another programmer decide which of the outputs is the correct one? (Supposing there is no expected output: Foo is 2 and Bar is 2 don't mean anything to a programmer working with the code.)
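For reference, here is a minimal sketch of the kind of rewrite I have in mind, assuming the intent was to divide the old value of i by the incremented one and to call my_array[1] with the argument 2 (both assumptions are mine; the original expressions don't pin down any intent):

#include <stdio.h>

int foo(int n) { printf("Foo is %d\n", n); return 0; }
int bar(int n) { printf("Bar is %d\n", n); return 0; }

int main(void) {
    int x = 0;
    int (*my_array[3])(int) = { 0 };
    int i = 1;

    /* Sequence the increment in its own statement before dividing. */
    int old_i = i;
    ++i;
    int y = old_i / i;      /* always 1 / 2 == 0 */
    printf("\ni/++i rewritten = %d, ", y);

    my_array[1] = foo;
    my_array[2] = bar;

    /* One modification of x per statement: index and argument are fixed. */
    ++x;                    /* x == 1 selects foo */
    my_array[x](x + 1);     /* always prints "Foo is 2" */
    return 0;
}

With each modification of i and x placed in its own statement, every conforming compiler has to produce the same output.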