For some reason, a myth has emerged that the authors of the Standard used the phrase "Undefined Behavior", to describe actions which earlier descriptions of the language by its inventor had characterized as "machine dependent", in order to allow compilers to infer that various things wouldn't happen. While it is true that the Standard doesn't require that implementations process such actions meaningfully even on platforms where there would be a natural "machine-dependent" behavior, the Standard also doesn't require that any implementation be capable of processing any useful programs meaningfully; an implementation could be conforming without being able to meaningfully process anything other than a single contrived and useless program. That's not a twisting of the Standard's intention: "While a deficient implementation could probably contrive a program that meets this requirement, yet still succeed in being useless, the C89 Committee felt that such ingenuity would probably require more work than making something useful."
In discussing the decision to make short unsigned values promote to signed int, the authors of the Standard observed that most implementations of the day used quiet-wraparound integer-overflow semantics, and that having such values promote to signed int would not adversely affect behavior even when a result was used in overflow scenarios, so long as the upper bits of the result didn't matter.
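To make the committee's observation concrete, here is a small sketch (the function names are my own) comparing the two promotion schemes on a two's-complement machine with 32-bit int; both produce the same low-order 16 bits:

```c
#include <assert.h>

/* Value-preserving promotion (the Standard's rule): unsigned short
 * operands promote to signed int.  65520 + 32 fits comfortably in a
 * 32-bit int, so no overflow occurs in this particular computation. */
unsigned short add_value_preserving(unsigned short a, unsigned short b) {
    return (unsigned short)((int)a + (int)b);
}

/* Sign-preserving promotion (the rejected alternative): operands
 * promote to unsigned int, which is defined to wrap. */
unsigned short add_sign_preserving(unsigned short a, unsigned short b) {
    return (unsigned short)((unsigned)a + (unsigned)b);
}
```

For example, adding 0xFFF0 and 0x0020 yields 0x10010, which truncates to 0x0010 under either scheme, illustrating the rationale's "same effective result" claim for quiet-wraparound implementations.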
From a practical perspective, guaranteeing clean wraparound semantics costs a little more than allowing integer computations to behave as though performed on larger types at unspecified times. Even in the absence of "optimization", straightforward code generation for an expression like long1 = int1*int2+long2; would on many platforms benefit from being able to use the result of a 16x16->32 or 32x32->64 multiply instruction directly, rather than having to sign-extend the lower half of the result. Further, allowing a compiler to evaluate x+1 as a type larger than x at its convenience would allow it to replace x+1 > y with x >= y -- generally a useful and safe optimization.
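The rewrite described above can be sketched as follows (function names are my own); it is safe precisely because evaluation in a wider type keeps x+1 from wrapping:

```c
#include <assert.h>
#include <limits.h>

/* For integers, x+1 > y is equivalent to x >= y -- but only when
 * x+1 cannot wrap.  Evaluating x+1 in a wider type guarantees that. */
int cmp_widened(int x, int y) {
    return (long long)x + 1 > y;    /* x+1 computed without wraparound */
}

int cmp_rewritten(int x, int y) {
    return x >= y;
}

/* A compiler required to honor wraparound could not perform this
 * rewrite: with x == INT_MAX, wraparound would make x+1 equal to
 * INT_MIN, so x+1 > y would be false while x >= y was true. */
```

The two forms agree for every pair of arguments when x+1 is widened, including the boundary case x == INT_MAX.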
Compilers like gcc go further, however. Even though the authors of the Standard observed that in the evaluation of something like:

unsigned mul(unsigned short x, unsigned short y) { return x*y; }

the Standard's decision to promote x and y to signed int wouldn't adversely affect behavior compared with using unsigned ("Both schemes give the same answer in the vast majority of cases, and both give the same effective result in even more cases in implementations with two's-complement arithmetic and quiet wraparound on signed overflow—that is, in most current implementations."), gcc will sometimes use the above function to infer within calling code that x cannot possibly exceed INT_MAX/y. I've seen no evidence that the authors of the Standard anticipated such behavior, much less intended to encourage it. While the authors of gcc claim any code that would invoke overflow in such cases is "broken", I don't think the authors of the Standard would agree, since in discussing conformance, they note: "The goal is to give the programmer a fighting chance to make powerful C programs that are also highly portable, without seeming to demean perfectly useful C programs that happen not to be portable, thus the adverb strictly."
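For what it's worth, a programmer who wants the quiet-wraparound result the rationale describes can get it with defined behavior by forcing the multiplication into unsigned arithmetic; the _wrap suffix below is my own naming:

```c
#include <assert.h>

/* The function from the text: x and y promote to signed int, so on a
 * platform with 32-bit int, products above INT_MAX (e.g.
 * 0xFFFF * 0xFFFF == 0xFFFE0001) formally overflow a signed type --
 * the undefined behavior gcc exploits to draw inferences in calling
 * code. */
unsigned mul(unsigned short x, unsigned short y) { return x * y; }

/* Casting one operand first makes the arithmetic unsigned, which the
 * Standard defines to wrap for every pair of operands. */
unsigned mul_wrap(unsigned short x, unsigned short y) {
    return (unsigned)x * y;
}
```

With the cast, mul_wrap(0xFFFF, 0xFFFF) is well-defined on any conforming implementation, whereas the original mul() is only "safe" for products up to INT_MAX.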
Because the authors of the Standard failed to forbid processing code nonsensically in case of integer overflow, even on quiet-wraparound platforms, the authors of gcc insist that their compiler is entitled to jump the rails in such cases. No compiler writer who was trying to win paying customers would take such an attitude, but the authors of the Standard failed to anticipate that compiler writers might value cleverness over customer satisfaction.