In the pre-existing "language" (family of dialects) the C Standard was written to describe, implementations would generally process signed integer overflow in one of three ways: by doing whatever the underlying platform did, by truncating values to the length of the underlying type (which is what most platforms did) even on platforms which would otherwise do something else, or by triggering some form of signal or diagnostic.
In K&R's book "The C Programming Language", the behavior is described as "machine-dependent".
Although the authors of the Standard, in the published Rationale document, identified some cases where they expected that implementations for commonplace platforms would behave in commonplace fashion, they didn't want to say that certain actions would have defined behavior on some platforms but not others. Further, characterizing the behavior as "implementation-defined" would have created a problem. Consider something like:
int f1(void);
int f2(int a, int b, int c);

int test(int x, int y)
{
    int test = x*y;
    if (f1())
        f2(test, x, y);
}
If the behavior of integer overflow were "Implementation Defined", then any implementation where it could raise a signal or have other observable side effects would be required to perform the multiplication before calling f1(), even though the result of the multiply would be ignored unless f1() returns a non-zero value. Classifying it as "Undefined Behavior" avoids such issues.
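As a minimal sketch of what that freedom buys (the name test_deferred and the transformation itself are illustrative, not taken from the Standard or any particular compiler; it assumes the declarations of f1() and f2() above), classifying overflow as Undefined Behavior lets a compiler sink the multiplication into the one path where its result is actually needed, which would not be permissible if the overflow could have an observable side effect such as raising a signal:

void test_deferred(int x, int y)
{
    /* Because overflow is Undefined Behavior, the compiler may defer the
       multiplication to the only path on which the result is used. */
    if (f1())
        f2(x*y, x, y);
}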
Unfortunately, gcc interprets the "Undefined Behavior" classification as an invitation to treat integer overflow in ways that aren't bound by ordinary laws of causality. Given a function like:
unsigned mul_mod_32768(unsigned short x, unsigned short y)
{
    /* On typical platforms, x and y promote to int, so the multiplication
       is performed as signed int and can overflow. */
    return (x*y) & 0x7FFFu;
}
an attempt to call it with x greater than INT_MAX/y may arbitrarily disrupt the behavior of surrounding code, even if the result of the function would not otherwise have been used in any observable fashion.
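One way such disruption can manifest is sketched below. The caller (arr and test_caller are hypothetical names, and the described optimization is an illustration of what an aggressive optimizer is permitted to do under the Standard's rules, not a claim about what any particular gcc version emits) contains a bounds check that a compiler may elide after assuming the multiplication never overflows:

unsigned char arr[32770];

void test_caller(unsigned short n)
{
    unsigned result = 0;
    /* If n is 32770 or more, the iteration with i == 32769 overflows inside
       mul_mod_32768, since 32769 * 65535 exceeds INT_MAX. */
    for (unsigned short i = 32768; i < n; i++)
        result = mul_mod_32768(i, 65535);
    /* An optimizer that assumes overflow never occurs may infer that n < 32770
       always holds here, treat this check as always true, and store to arr[n]
       unconditionally -- even for values of n the check was meant to exclude. */
    if (n < 32770)
        arr[n] = result;
}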