When the Standard was written, most forms of "undefined behavior" reflected the fact that implementations intended for different platforms and purposes would behave in different ways--some useful and predictable, others not--and the recognition that "the marketplace" [as the C Rationale document put it] was better placed than the Committee to judge which implementations should be expected to behave in which fashions.
Historically, on most platforms, it would have cost compilers nothing to behave as though automatic objects were initialized in some arbitrary fashion--too consistent to be useful as any kind of random number generator, yet not reliably predictable enough to be used for any other purpose except in cases where any possible value would be as good as any other [e.g. because copying an object without regard for whether it holds a useful value may be cheaper than checking whether it holds one and skipping the copy if it doesn't]. On some platforms, however, the only way for a compiler to ensure such behavior would be to explicitly initialize such objects itself. The authors of the Standard didn't want to require that compilers for such platforms initialize objects with dummy values that would likely get overwritten by the programmer anyway, and opted instead to require that programmers whose code had to be compatible with such platforms ensure that nothing got used without initialization.
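As a concrete sketch of that bracketed point (the struct and function names here are invented purely for illustration), consider a function that builds a small struct and returns it whole even though one field may never have been written:

    struct pair { int first, second; };

    /* Hypothetical sketch: p.second is written only on one path, but it is
     * typically cheaper to return the whole struct unconditionally than to
     * track whether that field holds anything useful.  This is fine only if
     * an unwritten automatic object behaves as though it holds *some*
     * arbitrary value, rather than licensing arbitrary compiler behavior. */
    struct pair make_pair(int first, int maybe_second, int have_second)
    {
        struct pair p;
        p.first = first;
        if (have_second)
            p.second = maybe_second;
        return p;   /* copies p.second even when it was never written */
    }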
Since then, however, things have evolved in a worst-of-both-worlds direction. The authors of the Standard made no effort to mandate that implementations guarantee that automatic objects behave as though initialized with arbitrary values in cases where doing so would offer some benefit at essentially zero cost, because they saw no reason to expect implementations to do anything else in such situations. Today, however, some compilers use the fact that an action is Undefined Behavior as justification for assuming that no program will ever receive inputs that would result in that action. Because such assumptions aren't usually very useful, they often have no effect on program behavior. The notion that all UB results in nonsense behavior, which you seem to be alluding to, stems from the fact that an implementation which uses UB to infer that things won't happen, when they actually do, is prone to generate completely nonsensical code in such situations. For example, an aggressive optimizer might see something like:
    #include <stdio.h>

    void test(int x)
    {
        int y, z;
        if (x != 23)
            y = z;   /* reads the uninitialized z, which is Undefined Behavior */
        printf("%d\n", x);
    }
and infer that it would be "impossible" for the function to be invoked with any value of x other than 23, and therefore that the printf should be replaced with puts("23");. I don't think any compilers are quite that aggressive yet, but it seems fashionable to view generation of code that would be capable of outputting other values of x as a "missed optimization".